CN102111621B - Video data recovery method, device and system - Google Patents

Info

Publication number
CN102111621B
CN102111621B (application CN200910243708A)
Authority
CN
China
Prior art keywords
video data
frame
video
data
module
Prior art date
Legal status
Active
Application number
CN 200910243708
Other languages
Chinese (zh)
Other versions
CN102111621A (en)
Inventor
黄晓伟
袁潮
Current Assignee
China Mobile Communications Group Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd
Priority to CN 200910243708
Publication of CN102111621A
Application granted
Publication of CN102111621B
Legal status: Active, anticipated expiration

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An embodiment of the present invention discloses a method for recovering video data, including: acquiring video data captured at the same moment from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters; constructing recovery data from the acquired video data and the corresponding calibrated camera parameters, and using the recovery data in place of the lost video data. Embodiments of the present invention improve the quality of reconstructed images. Embodiments of the present invention also disclose a device and a system that apply the above method.

Description

A method, device and system for recovering video data

Technical field

The present invention relates to the field of communication technology, and in particular to a method, device and system for recovering video data.

Background

With the development of communication networks, video coding and decoding technology has also made great progress. Encoded data must be decoded before it can be played back normally; if encoded data is lost during storage or network transmission, decoding errors occur, which can cause the decoder to crash and produce mosaic artifacts.

The prior art discloses a method for compensating for lost data. The method operates in the time domain and copies entire video-data macroblocks from the frame preceding the current frame into the current frame. Specifically, if data is lost in the data stream, it is first judged whether prediction, estimation and compensation of the lost data should be completed at the software processing layer; otherwise the video data is played directly. If the prediction, estimation and compensation of the lost data are to be completed at the software processing layer, the real-time requirements of the system are then judged: if the system has strict real-time requirements, the lost data is predicted, estimated and compensated in the time domain; if the real-time requirements are not strict, spatial-domain prediction and estimation together with time-domain compensation are applied to the lost data. The stream then enters the decoder processing layer, where the decoder performs spatial-domain prediction and estimation and time-domain compensation on the received coded video stream with lost data, decodes the stream, and sends it to the playback device. Based on the above decisions, compensation for the lost data is completed.

Like ordinary coded video data, multi-view coded video data may also suffer packet loss during network transmission. Because video data is organized in units of frames, partial loss of a frame's data is likely to make the entire frame undecodable.

In the course of realizing the present invention, the inventors found that the prior art has at least the following defects:

The lost-data compensation method in the prior art operates in the time domain and copies entire video-data macroblocks from the frame preceding the lost frame into the current frame. The quality of the reconstructed video data is poor, with obvious blocking artifacts. When an entire frame is lost, it cannot be recovered at all, because the redundant video information available from spatially different viewpoints is not exploited.

Summary of the invention

Embodiments of the present invention provide a method, device and system for recovering video data, which improve the quality of reconstructed images.

An embodiment of the present invention provides a method for recovering video data, including:

decoding and playing the received video data of each viewing angle sent by a video sending end, and detecting in real time the most recently received P frame in the video data; obtaining the motion vector of each macroblock of the P frame, and updating a static region with the regions occupied by macroblocks whose motion vectors are zero in the inter-frame prediction mode of the P-frame image;

when loss of video data is detected, acquiring video data captured at the same moment from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters;

constructing recovery data from the acquired video data and the corresponding calibrated camera parameters, and using the recovery data in place of the lost video data.

An embodiment of the present invention also provides a device for recovering video data, including:

an acquisition module, configured to acquire video data captured at the same moment from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters;

a recovery module, configured to construct recovery data from the video data acquired by the acquisition module and the corresponding calibrated camera parameters, and to use the recovery data in place of the lost video data;

an update module, configured to detect in real time, while the video data sent by the video sending end is decoded and played, the most recently received P frame in the video data; to obtain the motion vector of each macroblock of the P frame; and to update the static region with the regions occupied by macroblocks whose motion vectors are zero in the inter-frame prediction mode of the P-frame image, for use by the recovery module.

An embodiment of the present invention also provides a system for recovering video data, including:

a video sending end, configured to send video data of each viewing angle and the calibrated camera parameters to a video receiving end;

a video receiving end, configured to receive the video data of each viewing angle and the calibrated camera parameters from the video sending end; decode and play the received video data; detect in real time the most recently received P frame in the video data; obtain the motion vector of each macroblock of the most recently received P frame; update the static region with the regions occupied by macroblocks whose motion vectors are zero in the inter-frame prediction mode of the P-frame image; and, when loss of video data is detected, acquire video data captured at the same moment from a viewing angle different from that of the lost video data, construct recovery data from the acquired video data and the corresponding calibrated camera parameters, and use the recovery data in place of the lost video data.

Compared with the prior art, embodiments of the present invention have the following advantage: recovery data is constructed from the video data of multiple viewing angles and the calibrated camera parameters, and the lost video data is replaced with the resulting recovery data. This eliminates the blocking artifacts of the recovered data, exploits the redundant video information of spatially different viewpoints, and improves the quality of the reconstructed image.

Brief description of the drawings

To illustrate the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a flowchart of a method for recovering video data in Embodiment 1 of the present invention;

Fig. 2 is a flowchart of a method for recovering video data in Embodiment 2 of the present invention;

Fig. 3 is a flowchart of a method for recovering video data in Embodiment 3 of the present invention;

Fig. 4 is a schematic diagram of the frame structure of the multi-viewpoint system in Embodiment 4 of the present invention;

Fig. 5 is a flowchart of a method for recovering video data in Embodiment 4 of the present invention;

Fig. 6 is a schematic structural diagram of a device for recovering video data in Embodiment 5 of the present invention;

Fig. 7 is a schematic structural diagram of a device for recovering video data in Embodiment 6 of the present invention;

Fig. 8 is a schematic structural diagram of a device for recovering video data in Embodiment 7 of the present invention;

Fig. 9 is a schematic structural diagram of a system for recovering video data in Embodiment 8 of the present invention.

Detailed description of the embodiments

The core idea of the technical solution provided by the embodiments of the present invention is: when loss of video data is detected, constructing recovery data with an intermediate-view generation method, based on the calibrated camera parameters received from the video sending end and on video data captured at the same moment from a viewing angle different from that of the lost video data; selecting among the constructed recovery candidates with a static-region comparison method; and using the selected recovery data in place of the lost video data.

The technical solutions of the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

As shown in Fig. 1, a flowchart of a method for recovering video data in Embodiment 1 of the present invention includes the following steps:

Step 101: acquire video data captured at the same moment from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters.

Specifically, video data of each viewing angle and the calibrated camera parameters may be received from the video sending end; when loss of video data is detected, video data captured at the same moment from a different viewing angle and the corresponding calibrated camera parameters are acquired. The calibrated camera parameters include intrinsic parameters and extrinsic parameters. The intrinsic parameters include the normalized (lens) focal length on the horizontal axis, the normalized (lens) focal length on the vertical axis, the horizontal-axis and vertical-axis nonlinear distortion parameters, and the horizontal-axis and vertical-axis offsets. The extrinsic parameters describe the camera's pose relative to the external world, such as x, y, z and rotation, essentially the parameters needed to locate a rigid body.
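The per-camera calibration data described above could be grouped as a simple record. The following sketch is purely illustrative; the field names and types are assumptions, not the patent's wire format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CameraCalibration:
    """Calibrated parameters sent per viewing angle (illustrative names)."""
    # Intrinsic parameters
    fx: float            # normalized focal length, horizontal axis (pixels)
    fy: float            # normalized focal length, vertical axis (pixels)
    cx: float            # horizontal-axis offset (principal point x)
    cy: float            # vertical-axis offset (principal point y)
    k1: float = 0.0      # horizontal-axis nonlinear distortion parameter
    k2: float = 0.0      # vertical-axis nonlinear distortion parameter
    # Extrinsic parameters: camera pose relative to the external world
    position: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])  # x, y, z
    rotation: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])  # yaw, pitch, roll
```

A real multi-view system would transmit one such record per viewpoint alongside the video streams.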

Step 102: construct recovery data from the acquired video data and the corresponding calibrated camera parameters, and use the recovery data in place of the lost video data.

Specifically, constructing recovery data from the acquired video data and the corresponding calibrated camera parameters includes: constructing at least two reconstructed frames from the acquired video data and the corresponding calibrated camera parameters, and obtaining the correctly decodable video frame of the same viewing angle that is closest to the lost video data; comparing the similarity of each reconstructed frame with the obtained video frame within the static region, and taking the reconstructed frame with the highest similarity as the recovery data.

Comparing the similarity of each reconstructed frame with the obtained video frame within the static region includes: computing, for each reconstructed frame, the sum of absolute differences between its luminance values and those of the obtained video frame at the pixels of the static region, and taking the reconstructed frame with the smallest sum of absolute differences as the reconstructed frame with the highest similarity.
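The static-region comparison above can be sketched in a few lines of Python. This is a hypothetical illustration, not taken from the patent; frames are assumed to be numpy luminance arrays and the static region a boolean mask of the same shape:

```python
import numpy as np

def select_best_reconstruction(candidates, reference, static_mask):
    """Return the candidate frame with the smallest sum of absolute
    luminance differences (SAD) against `reference`, restricted to
    the pixels where `static_mask` is True."""
    best_frame, best_sad = None, None
    for frame in candidates:
        # Widen to int32 so the subtraction cannot wrap around in uint8
        diff = np.abs(frame.astype(np.int32) - reference.astype(np.int32))
        sad = int(diff[static_mask].sum())  # SAD over the static region only
        if best_sad is None or sad < best_sad:
            best_frame, best_sad = frame, sad
    return best_frame, best_sad
```

In the patent's scheme, `reference` would be the closest correctly decoded frame of the lost frame's own viewpoint, and `candidates` the reconstructed frames generated from the other viewpoints.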

Embodiments of the present invention construct recovery data from the video data of multiple viewing angles and the calibrated camera parameters and replace the lost video data with the resulting recovery data, which eliminates the blocking artifacts of the recovered data, exploits the redundant video information of spatially different viewpoints, and improves the quality of the reconstructed image.

The method for recovering video data in the embodiments of the present invention is applied to a system comprising a video sending end and a video receiving end. The video sending end consists of a camera acquisition device and an encoder, and the camera parameters, including intrinsic and extrinsic parameters, must be calibrated. The video receiving end receives video data over the network and decodes the received video data.

The method for recovering video data in the embodiments of the present invention is described in detail below in the context of the above application scenario.

As shown in Fig. 2, a flowchart of a method for recovering video data in Embodiment 2 of the present invention includes the following steps:

Step 201: the video sending end calibrates the camera of each viewing angle.

Specifically, calibration may determine the camera's input-output relationship and assign its graduation values; it may determine the camera's static characteristic indices; and it may eliminate systematic errors to improve the accuracy of the camera or the system.

Step 202: the video sending end sends the calibrated camera parameters and the video data of each viewing angle to the video receiving end.

Step 203: the video receiving end detects loss of video data, acquires video data captured at the same moment from a viewing angle different from that of the lost video data, constructs at least two reconstructed frames from the acquired video data and the corresponding calibrated camera parameters, and obtains the correctly decodable video frame of the same viewing angle that is closest to the lost video data.

Specifically, according to the principle of camera projection, the coordinates of a real object can be mapped to video-image coordinates: Xp = M1 * M2 * Xw, where Xp are the image coordinates, M1 is the camera intrinsic matrix, M2 is the camera extrinsic matrix, and Xw are the real-object coordinates. Embodiments of the present invention use an intermediate-view generation method to construct reconstructed frames: several candidate video frames are recovered from video frames captured at the same moment from viewing angles different from that of the lost video data, and these frames serve as the reconstructed frames.

For example, given the images of viewpoint 1 and viewpoint 3, since there is a mapping between real-object coordinates and image coordinates, the image of viewpoint 2, which lies between viewpoint 1 and viewpoint 3, can be computed with the formula Xp = M1 * M2 * Xw once the calibrated camera parameters are available.
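The projection relation Xp = M1 * M2 * Xw can be sketched with numpy. This is a minimal illustration under assumed values, not the patent's implementation; the intrinsic matrix and camera pose below are hypothetical:

```python
import numpy as np

def project(M1, M2, Xw):
    """Map a 3-D world point Xw to pixel coordinates via Xp ~ M1 @ M2 @ Xw.
    M1 is the 3x3 intrinsic matrix, M2 the 3x4 extrinsic matrix [R|t]."""
    Xw_h = np.append(np.asarray(Xw, dtype=float), 1.0)  # homogeneous coordinates
    x = M1 @ M2 @ Xw_h                                  # homogeneous image point
    return x[:2] / x[2]                                 # perspective divide

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240)
M1 = np.array([[800.0,   0.0, 320.0],
               [  0.0, 800.0, 240.0],
               [  0.0,   0.0,   1.0]])
# Camera at the world origin, looking down the +z axis
M2 = np.hstack([np.eye(3), np.zeros((3, 1))])
```

With two calibrated views, the same relation underlies intermediate-view generation, although synthesizing the in-between view additionally requires correspondences or depth so that the projection can be inverted.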

Step 204: the video receiving end detects the P frame of the same viewing angle closest to the lost video data, obtains the motion vector of each macroblock of that P frame, and takes the regions occupied by macroblocks whose motion vectors are zero in the inter-frame prediction mode of the P-frame image as the static region of this viewing angle.

Specifically, each image frame can be further divided into macroblocks, including macroblocks coded in the intra-frame prediction mode of the P-frame image and macroblocks coded in its inter-frame prediction mode. A macroblock coded in the intra-frame prediction mode is determined not to lie in the static region. For macroblocks coded in the inter-frame prediction mode, the motion vector of each macroblock is obtained during decoding of the video data; when a macroblock's motion vector is zero, the macroblock is identical to the macroblock at the same position in the previous frame, and its region is determined to lie in the static region.
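A possible sketch of this static-region construction from decoded P-frame data follows. The macroblock size, array layout and per-macroblock intra flag are assumptions for illustration, not the patent's data structures:

```python
import numpy as np

def static_region_mask(motion_vectors, intra_coded, mb_size=16):
    """Build a per-pixel static-region mask for one decoded P frame.

    motion_vectors: (H_mb, W_mb, 2) array of per-macroblock (dx, dy) vectors.
    intra_coded:    (H_mb, W_mb) bool array, True for intra-coded macroblocks.
    A macroblock belongs to the static region only if it is inter-coded
    and its motion vector is exactly (0, 0)."""
    zero_mv = (motion_vectors == 0).all(axis=2)
    static_mb = zero_mv & ~intra_coded
    # Expand the macroblock-level decision to pixel resolution
    block = np.ones((mb_size, mb_size), dtype=np.uint8)
    return np.kron(static_mb.astype(np.uint8), block).astype(bool)
```

The receiver would refresh this mask each time a new P frame is decoded, as described in Step 304 below.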

Step 205: the video receiving end compares the similarity of each reconstructed frame with the obtained video frame within the static region, takes the reconstructed frame with the highest similarity as the recovery data, and uses the recovery data in place of the lost video data.

Specifically, the smaller the image change within the static region, the higher the similarity between a reconstructed frame and the earlier frame of this viewing angle within the static region, and the better the quality of the reconstructed image; such a reconstructed frame can be selected as the final recovery data in place of the lost video data. In a concrete implementation, the sum of absolute differences between the luminance values of each reconstructed frame and those of the obtained video frame at the pixels of the static region can be computed, and the reconstructed frame with the smallest sum taken as the one with the highest similarity.

Embodiments of the present invention construct recovery data from spatial redundancy information and temporal image information, select among the constructed candidates, and replace the lost video data with the selected recovery data. This eliminates blocking artifacts and improves the quality of the reconstructed image; moreover, the static region needs to be selected only when loss of video data is detected, requiring few changes to the prior art.

As shown in Fig. 3, a flowchart of a method for recovering video data in Embodiment 3 of the present invention includes the following steps:

Step 301: the video sending end calibrates the camera of each viewing angle.

Step 302: the video sending end sends the calibrated camera parameters and the video data of each viewing angle to the video receiving end.

Step 303: the video receiving end decodes and plays the received video data and detects in real time the most recently received P frame in the video data.

Step 304: the video receiving end obtains the motion vector of each macroblock of the most recently received P frame and updates the static region with the regions occupied by macroblocks whose motion vectors are zero in the inter-frame prediction mode of the P-frame image.

Step 305: the video receiving end detects loss of video data, acquires video data captured at the same moment from a viewing angle different from that of the lost video data, constructs at least two reconstructed frames from the acquired video data and the corresponding calibrated camera parameters, and obtains the correctly decodable video frame of the same viewing angle that is closest to the lost video data.

Step 306: the video receiving end computes the sum of absolute differences between the luminance values of each reconstructed frame and those of the obtained video frame at the pixels of the static region, takes the reconstructed frame with the smallest sum of absolute differences as the recovery data, and uses the recovery data in place of the lost video data.

Embodiments of the present invention construct recovery data from spatial redundancy information and temporal image information, select among the constructed candidates, and replace the lost video data with the selected recovery data, which eliminates blocking artifacts and improves the quality of the reconstructed image.

As shown in Fig. 4, the frame structure of the multi-viewpoint system in Embodiment 4 of the present invention comprises five viewpoints, each coded and decoded independently; assume that frame P[4][3] in the figure is lost.

As shown in Fig. 5, a flowchart of a method for recovering video data in Embodiment 4 of the present invention includes the following steps:

Step 401: the video sending end calibrates the cameras of the five viewing angles.

Step 402: the video sending end sends the calibrated camera parameters and the video data of the five viewing angles to the video receiving end.

Step 403: the video receiving end detects that frame P[4][3] is lost and constructs reconstructed frames from the calibrated camera parameters and from frames P[1][3], P[2][3], P[3][3] and P[5][3], which were captured at the same moment from viewing angles different from that of P[4][3].

Specifically, the video receiving end can recover reconstructed frame Pr35 from P[3][3] and P[5][3], reconstructed frame Pr25 from P[2][3] and P[5][3], and reconstructed frame Pr15 from P[1][3] and P[5][3].

Step 404: the video receiving end detects the correctly decodable P frame of the same viewing angle closest to frame P[4][3], obtains the motion vector of each macroblock of that P frame, and takes the regions occupied by macroblocks whose motion vectors are zero in the inter-frame prediction mode of the P-frame image as the static region.

Step 405: the video receiving end compares the similarity, within the static region, of each reconstructed frame with the correctly decodable video frame of the same viewing angle closest to frame P[4][3], and replaces frame P[4][3] with the most similar reconstructed frame.

Specifically, since the correctly decodable video frame of the same viewing angle closest to frame P[4][3] is frame I[4][1], for reconstructed frame Pr35 the sum of absolute differences SADr35 between the luminance values of Pr35 and I[4][1] at the pixels of the static region can be computed; likewise SADr25 for reconstructed frame Pr25 against I[4][1], and SADr15 for reconstructed frame Pr15 against I[4][1].

SADr15, SADr25 and SADr35 are then compared, and the reconstructed frame with the smallest sum of absolute differences is selected as the most similar reconstructed frame to replace frame P[4][3].

Embodiments of the present invention construct recovery data from spatial redundancy information and temporal image information, select among the constructed candidates, and replace the lost video data with the selected recovery data. This eliminates blocking artifacts and improves the quality of the reconstructed image; moreover, the static region needs to be selected only when loss of video data is detected, requiring few changes to the prior art.

The foregoing embodiments provide methods for recovering video data and various application scenarios. Correspondingly, embodiments of the present invention also provide a device and a system that apply the above methods for recovering video data.

As shown in Fig. 6, the structure of a device for recovering video data in Embodiment 5 of the present invention includes:

an acquisition module 510, configured to acquire video data captured at the same moment from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters;

a recovery module 520, configured to construct recovery data from the video data acquired by the acquisition module 510 and the corresponding calibrated camera parameters, and to use the recovery data in place of the lost video data.

Embodiments of the present invention construct recovery data from the video data of multiple viewing angles and the calibrated camera parameters and replace the lost video data with the resulting recovery data, which eliminates the blocking artifacts of the recovered data, exploits the redundant video information of spatially different viewpoints, and improves the quality of the reconstructed image.

As shown in Figure 7, a schematic structural diagram of an apparatus for recovering video data in Embodiment 6 of the present invention includes:

An acquisition module 610, configured to acquire video data of the same moment captured from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters.

A recovery module 620, configured to construct recovery data from the video data acquired by the acquisition module 610 and the corresponding calibrated camera parameters, and to substitute the recovery data for the lost video data.

The recovery module 620 specifically includes:

A construction submodule 621, configured to construct at least two reconstructed frames from the video data acquired by the acquisition module 610 and the corresponding calibrated camera parameters.
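The construction step can be pictured as a projective warp between calibrated cameras. As a minimal sketch (assuming, purely for illustration, that the scene can be approximated by a plane — the patent does not fix a particular reconstruction method), the homography induced by a plane maps pixels of a reference view into the lost view using only the calibrated camera parameters:

```python
import numpy as np

def plane_induced_homography(K_src, K_dst, R, t, n, d):
    """Homography H mapping homogeneous pixels of the source camera to the
    destination camera, assuming the scene lies on the plane with unit
    normal n at depth d in the source camera frame.
    K_src, K_dst: 3x3 intrinsics; R, t: rotation and translation from the
    source camera to the destination camera (the calibrated camera
    parameters)."""
    H = K_dst @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_src)
    return H / H[2, 2]  # normalise the scale so H[2, 2] == 1

# With identical cameras and no relative motion, the warp is the identity:
H = plane_induced_homography(np.eye(3), np.eye(3), np.eye(3),
                             np.zeros(3), np.array([0.0, 0.0, 1.0]), 2.0)
```

Warping the reference view with H (e.g. by per-pixel inverse mapping) yields one reconstructed frame; choosing different plane depths d yields the "at least two" candidate reconstructions that the comparison submodule then ranks.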

An acquisition submodule 622, configured to obtain the correctly decodable video frame of the same viewing angle that is nearest to the lost video data.

A comparison submodule 623, configured to compare, within the static region, the similarity between each reconstructed frame constructed by the construction submodule 621 and the video frame obtained by the acquisition submodule 622.

The comparison submodule 623 is specifically configured to compute, within the static region, the sum of absolute differences between the luminance values of the pixels of each reconstructed frame and those of the obtained video frame, and to take the reconstructed frame with the smallest sum of absolute differences as the reconstructed frame with the highest similarity.

A recovery submodule 624, configured to take the reconstructed frame with the highest similarity obtained by the comparison submodule 623 as the recovery data, and to substitute the recovery data for the lost video data.
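The selection rule of the comparison and recovery submodules — minimum sum of absolute luminance differences inside the static region — is simple enough to sketch directly (the array shapes and the boolean-mask interface are illustrative assumptions, not mandated by the patent):

```python
import numpy as np

def pick_best_reconstruction(candidates, reference, static_mask):
    """Return the candidate reconstructed frame closest to the reference
    frame inside the static region, by minimum sum of absolute differences
    (SAD) of luminance.  candidates: list of 2-D uint8 luma arrays;
    reference: a luma array of the same shape; static_mask: a boolean
    array marking the static region."""
    sads = [int(np.abs(c.astype(np.int32)
                       - reference.astype(np.int32))[static_mask].sum())
            for c in candidates]  # widen to int32 to avoid uint8 wraparound
    return candidates[int(np.argmin(sads))]
```

A frame that differs from the reference only outside the static region scores a SAD of zero and is preferred over one that differs inside it, which is exactly the behaviour the comparison submodule describes.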

The recovery module 620 further includes:

A detection submodule 625, configured to detect the P frame of the same viewing angle nearest to the lost video data, obtain the motion vector of each macroblock of that P frame, and take the region occupied by the macroblocks of the P-frame picture whose motion vectors are zero in inter-frame prediction mode as the static region of this viewing angle, for use by the comparison submodule 623.
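Marking the static region from the zero-motion-vector macroblocks of a P frame might look like the following sketch (the `mv_field` layout — one `(dx, dy)` pair per 16×16 macroblock, inter-predicted blocks only — is an assumed interface for illustration):

```python
import numpy as np

def static_region_mask(mv_field, frame_shape, mb_size=16):
    """Per-pixel static-region mask from a P frame's macroblock motion
    vectors: macroblocks whose motion vector is (0, 0) in inter-frame
    prediction mode are marked static."""
    mask = np.zeros(frame_shape, dtype=bool)
    for i, row in enumerate(mv_field):
        for j, (dx, dy) in enumerate(row):
            if dx == 0 and dy == 0:
                mask[i * mb_size:(i + 1) * mb_size,
                     j * mb_size:(j + 1) * mb_size] = True
    return mask
```

The resulting mask is exactly the region used by the comparison submodule when ranking candidate reconstructions.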

This embodiment of the present invention constructs recovery data from spatial redundancy and temporal image information, selects the best of the constructed recovery data, and substitutes the selected recovery data for the lost video data. This eliminates blocking artifacts in the recovered data and improves the quality of the reconstructed image; moreover, a static region needs to be selected only when video data loss is detected, so little of the prior art has to be modified.

As shown in Figure 8, a schematic structural diagram of an apparatus for recovering video data in Embodiment 7 of the present invention includes:

An acquisition module 710, configured to acquire video data of the same moment captured from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters.

A recovery module 720, configured to construct recovery data from the video data acquired by the acquisition module 710 and the corresponding calibrated camera parameters, and to substitute the recovery data for the lost video data.

The recovery module 720 specifically includes:

A construction submodule 721, configured to construct at least two reconstructed frames from the video data acquired by the acquisition module 710 and the corresponding calibrated camera parameters.

An acquisition submodule 722, configured to obtain the correctly decodable video frame of the same viewing angle that is nearest to the lost video data.

A comparison submodule 723, configured to compare, within the static region, the similarity between each reconstructed frame constructed by the construction submodule 721 and the video frame obtained by the acquisition submodule 722.

The comparison submodule 723 is specifically configured to compute, within the static region, the sum of absolute differences between the luminance values of the pixels of each reconstructed frame and those of the obtained video frame, and to take the reconstructed frame with the smallest sum of absolute differences as the reconstructed frame with the highest similarity.

A recovery submodule 724, configured to take the reconstructed frame with the highest similarity obtained by the comparison submodule 723 as the recovery data, and to substitute the recovery data for the lost video data.

An update module 730, configured to detect in real time the most recently received P frame in the video data while the video data is being decoded and played, obtain the motion vector of each macroblock of that P frame, and update the static region with the region occupied by the macroblocks of the P-frame picture whose motion vectors are zero in inter-frame prediction mode, for use by the recovery module 720.
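The update module's behaviour — rebuild the static region from every newly received P frame so that the freshest motion information is already available when a loss occurs — can be sketched as a small tracker (again, the per-macroblock `(dx, dy)` motion-vector layout is an assumed interface):

```python
import numpy as np

class StaticRegionTracker:
    """Keeps the static-region mask current during decoding: each newly
    received P frame overwrites the mask with the region covered by its
    zero-motion-vector macroblocks, ready for use by the recovery module."""

    def __init__(self, frame_shape, mb_size=16):
        self.frame_shape = frame_shape
        self.mb_size = mb_size
        self.mask = np.zeros(frame_shape, dtype=bool)

    def on_p_frame(self, mv_field):
        """Call for every newly received P frame; returns the updated mask."""
        mask = np.zeros(self.frame_shape, dtype=bool)
        s = self.mb_size
        for i, row in enumerate(mv_field):
            for j, (dx, dy) in enumerate(row):
                if dx == 0 and dy == 0:
                    mask[i * s:(i + 1) * s, j * s:(j + 1) * s] = True
        self.mask = mask  # the latest P frame always wins
        return self.mask
```

Unlike the detection submodule of Embodiment 6, which derives the static region only when a loss is detected, this tracker pays a small per-P-frame cost in exchange for an always-current mask.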

This embodiment of the present invention constructs recovery data from spatial redundancy and temporal image information, selects the best of the constructed recovery data, and substitutes the selected recovery data for the lost video data. This eliminates blocking artifacts in the recovered data and improves the quality of the reconstructed image.

As shown in Figure 9, a schematic structural diagram of a system for recovering video data in Embodiment 8 of the present invention includes:

A video sending end 810, configured to send the video data of the various viewing angles and the calibrated camera parameters to a video receiving end 820.

The video sending end 810 is further configured to set up the cameras of the various viewing angles.

A video receiving end 820, configured to receive the video data of the various viewing angles and the calibrated camera parameters from the video sending end 810; when video data loss is detected, to acquire video data of the same moment captured from a viewing angle different from that of the lost video data; to construct recovery data from the acquired video data and the corresponding calibrated camera parameters; and to substitute the recovery data for the lost video data.

This embodiment of the present invention constructs recovery data from spatial redundancy and temporal image information, selects the best of the constructed recovery data, and substitutes the selected recovery data for the lost video data. This eliminates blocking artifacts in the recovered data and improves the quality of the reconstructed image.

From the description of the foregoing implementations, those skilled in the art can clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware platform, or of course by hardware alone, though in many cases the former is the better implementation. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a terminal device (which may be a mobile phone, a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present invention.

The above is only a preferred implementation of the present invention. It should be pointed out that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the embodiments of the present invention, and such improvements and refinements shall also be regarded as falling within the protection scope of the present invention.

Those skilled in the art can understand that the modules of the apparatuses in the embodiments may be distributed among the apparatuses as described in the embodiments, or may, with corresponding changes, be located in one or more apparatuses other than those of the embodiments. The modules of the above embodiments may be integrated into one unit or deployed separately; they may be combined into a single module or further split into multiple submodules.

The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.

What has been disclosed above is only a few specific embodiments of the present invention; however, the present invention is not limited thereto, and any variation conceivable to those skilled in the art shall fall within the protection scope of the present invention.

Claims (9)

1. A method of recovering video data, comprising:
decoding and playing the video data of each viewing angle sent by a video sending end, and detecting in real time the most recently received P frame in the video data; obtaining the motion vector of each macroblock of the P frame, and updating a static region with the region occupied by the macroblocks of the P-frame picture whose motion vectors are zero in inter-frame prediction mode;
when video data loss is detected, obtaining video data of the same moment captured from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters;
constructing recovery data from the obtained video data and the corresponding calibrated camera parameters, and substituting the recovery data for the lost video data.
2. The method of claim 1, wherein constructing the recovery data from the obtained video data and the corresponding calibrated camera parameters specifically comprises:
constructing at least two reconstructed frames from the obtained video data and the corresponding calibrated camera parameters, and obtaining the correctly decodable video frame of the same viewing angle nearest to the lost video data;
comparing, within the static region, the similarity between each reconstructed frame and the obtained video frame, and taking the reconstructed frame with the highest similarity as the recovery data.
3. The method of claim 2, wherein comparing the similarity, within the static region, between each reconstructed frame and the obtained video frame specifically comprises:
computing, within the static region, the sum of absolute differences between the luminance values of the pixels of each reconstructed frame and those of the obtained video frame, and taking the reconstructed frame with the smallest sum of absolute differences as the reconstructed frame with the highest similarity.
4. The method of claim 2, further comprising, before comparing the similarity between each reconstructed frame and the obtained video frame within the static region:
detecting the P frame of the same viewing angle nearest to the lost video data, obtaining the motion vector of each macroblock of that P frame, and taking the region occupied by the macroblocks of the P-frame picture whose motion vectors are zero in inter-frame prediction mode as the static region of this viewing angle.
5. An apparatus for recovering video data, comprising:
an acquisition module, configured to obtain video data of the same moment captured from a viewing angle different from that of the lost video data, together with the corresponding calibrated camera parameters;
a recovery module, configured to construct recovery data from the video data obtained by the acquisition module and the corresponding calibrated camera parameters, and to substitute the recovery data for the lost video data;
an update module, configured to detect in real time the most recently received P frame in the video data while the video data sent by the video sending end is being decoded and played, obtain the motion vector of each macroblock of the P frame, and update the static region with the region occupied by the macroblocks of the P-frame picture whose motion vectors are zero in inter-frame prediction mode, for use by the recovery module.
6. The apparatus of claim 5, wherein the recovery module specifically comprises:
a construction submodule, configured to construct at least two reconstructed frames from the video data obtained by the acquisition module and the corresponding calibrated camera parameters;
an acquisition submodule, configured to obtain the correctly decodable video frame of the same viewing angle nearest to the lost video data;
a comparison submodule, configured to compare, within the static region, the similarity between each reconstructed frame constructed by the construction submodule and the video frame obtained by the acquisition submodule;
a recovery submodule, configured to take the reconstructed frame with the highest similarity obtained by the comparison submodule as the recovery data, and to substitute the recovery data for the lost video data.
7. The apparatus of claim 6, wherein the comparison submodule is specifically configured to compute, within the static region, the sum of absolute differences between the luminance values of the pixels of each reconstructed frame and those of the obtained video frame, and to take the reconstructed frame with the smallest sum of absolute differences as the reconstructed frame with the highest similarity.
8. The apparatus of claim 6, wherein the recovery module further comprises:
a detection submodule, configured to detect the P frame of the same viewing angle nearest to the lost video data, obtain the motion vector of each macroblock of that P frame, and take the region occupied by the macroblocks of the P-frame picture whose motion vectors are zero in inter-frame prediction mode as the static region of this viewing angle, for use by the comparison submodule.
9. A system for recovering video data, comprising:
a video sending end, configured to send the video data of each viewing angle and the calibrated camera parameters to a video receiving end;
a video receiving end, configured to receive the video data of each viewing angle and the calibrated camera parameters from the video sending end; to decode and play the received video data; to detect in real time the most recently received P frame in the video data; to obtain the motion vector of each macroblock of the most recently received P frame; to update a static region with the region occupied by the macroblocks of the P-frame picture whose motion vectors are zero in inter-frame prediction mode; when video data loss is detected, to obtain video data of the same moment captured from a viewing angle different from that of the lost video data; to construct recovery data from the obtained video data and the corresponding calibrated camera parameters; and to substitute the recovery data for the lost video data.
CN 200910243708 2009-12-23 2009-12-23 Video data recovery method, device and system Active CN102111621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910243708 CN102111621B (en) 2009-12-23 2009-12-23 Video data recovery method, device and system


Publications (2)

Publication Number Publication Date
CN102111621A CN102111621A (en) 2011-06-29
CN102111621B true CN102111621B (en) 2013-03-13

Family

ID=44175627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910243708 Active CN102111621B (en) 2009-12-23 2009-12-23 Video data recovery method, device and system

Country Status (1)

Country Link
CN (1) CN102111621B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105721740B (en) * 2016-01-25 2018-12-07 四川长虹电器股份有限公司 The compensation method of flat panel TV moving image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101080741A (en) * 2005-07-15 2007-11-28 松下电器产业株式会社 Device and method for composing image
CN101142824A (en) * 2005-03-31 2008-03-12 世宗大学校产学协力团 Apparatus and method for encoding and generating multi-viewpoint video using camera parameters and recording medium storing program for implementing the method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fei Yue et al. An error concealment algorithm for stereoscopic video based on automatic multi-view display. Journal of Hangzhou Dianzi University, 2007, vol. 27, no. 5, pp. 73-79. *

Also Published As

Publication number Publication date
CN102111621A (en) 2011-06-29

Similar Documents

Publication Publication Date Title
CN111837397B (en) Error-cancelling code stream indication in view-dependent video coding based on sub-picture code streams
TW480885B (en) Fast motion-compensated video frame interpolator
US20190273929A1 (en) De-Blocking Filtering Method and Terminal
US20080285654A1 (en) Multiview coding with geometry-based disparity prediction
TW201742435A (en) Fisheye rendering with lens distortion correction for 360-degree video
CN112465698B (en) Image processing method and device
US9736498B2 (en) Method and apparatus of disparity vector derivation and inter-view motion vector prediction for 3D video coding
CN106331703A (en) Video encoding and decoding method, video encoding and decoding device
KR20080049789A (en) Method and apparatus for motion projection error concealment technique in block-based video
JP2014176034A (en) Video transmission device
CN101102511A (en) Video Error Concealment Method Based on Macroblock Level and Pixel Level Motion Estimation
CN108924568B (en) Depth video error concealment method based on 3D-HEVC framework
US9036707B2 (en) Method and apparatus for finding a motion vector
US20090147851A1 (en) Motion vector field projection dealing with covering and uncovering
JP5531881B2 (en) Moving picture decoding apparatus, moving picture decoding method, and integrated circuit
TW201924352A (en) Method and encoder for encoding video streams in a video coding format supporting an auxiliary frame
CN101931820A (en) Spatial Error Concealment Method
CN102984525B (en) A kind of video code flow error concealing method
CN102055968A (en) Method, system and device for restoring lost video data in multi view point video
Ling et al. A novel spatial and temporal correlation integrated based motion-compensated interpolation for frame rate up-conversion
CN102111621B (en) Video data recovery method, device and system
CN107483936B (en) A kind of light field video inter-prediction method based on macro pixel
KR100548316B1 (en) Video error correction method and device
Vishwanath et al. Motion compensated prediction for translational camera motion in spherical video coding
CN101404769A (en) Video encoding/decoding method, apparatus and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant