WO2015085637A1 - Method for supplementarily drawing content at station logo region in video - Google Patents

Method for supplementarily drawing content at station logo region in video

Info

Publication number
WO2015085637A1
WO2015085637A1 (PCT/CN2013/090978, CN2013090978W)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
video
logo area
frame image
Prior art date
Application number
PCT/CN2013/090978
Other languages
French (fr)
Chinese (zh)
Inventor
张磊
肖煜东
索津莉
张永兵
戴琼海
Original Assignee
清华大学深圳研究生院 (Graduate School at Shenzhen, Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学深圳研究生院 (Graduate School at Shenzhen, Tsinghua University)
Publication of WO2015085637A1 publication Critical patent/WO2015085637A1/en

Links

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 - Motion estimation or motion compensation

Abstract

Disclosed is a method for supplementarily drawing content at a station logo region in a video. The method comprises the following steps: 1) detecting a station logo region in a video and marking the station logo region; and 2) performing the following supplementary drawing operations on each image in the video: 21) estimating, according to the preceding L images and the following K images, (L+K) global reference images of the current image by using a global motion estimation method; 22) marking the foreground and background of the station logo region in the current image; 23) supplementarily drawing the station logo region of the current image; 24) determining whether the station logo region of the current image has been completely drawn, and if so entering step 26), otherwise entering step 25); 25) supplementarily drawing the remaining undrawn regions of the station logo region by using the spatial correlation of the current image; and 26) ending the supplementary drawing of the current image. By fully using the temporal information between adjacent images, the method effectively improves the accuracy of supplementarily drawing content at the station logo region in the video.

Description

Method for supplementarily drawing content at station logo region in video
【Technical Field】
The present invention relates to the field of computer video processing, and in particular to a method for supplementarily drawing (inpainting) the content of the station logo area in a video.
【Background Art】
As a declaration of intellectual-property ownership, almost all commercial television channels carry their own station logo. When a new logo is overlaid on the original logo, the logo can also serve as a mark of authorized rebroadcasting. At the same time, however, the presence of these logos both reduces the viewing comfort of the video and makes the exchange of video material more difficult. How to remove the station logo and accurately inpaint the logo area is therefore a key problem that needs to be solved.
In order to remove the station logo from a video and accurately inpaint the logo area, the methods commonly used at present apply mosaic processing to the logo area or blur it with image-processing tools. These methods give poor results: the obvious traces of artificial processing seriously degrade the video quality and the viewing experience.
【Summary of the Invention】
The technical problem to be solved by the present invention is to remedy the deficiencies of the prior art described above and to propose a method for supplementing the content of the station logo area in a video that effectively improves the inpainting accuracy of the logo area while keeping the computational complexity low.
The technical problem of the present invention is solved by the following technical solution: a method for supplementing the content of the station logo area in a video, comprising the following steps:
1) detecting the station logo area in the video and marking the logo area;
2) performing the following inpainting operations on each frame image in the video:
21) estimating, from the preceding L frames and the following K frames, L+K global reference images of the current frame image by using a global motion estimation method, where L and K are integers greater than or equal to 0, L and K are not both 0, and the specific values are set by the user according to the required inpainting accuracy;
22) marking the foreground and background of the station logo area in the current frame image;
23) inpainting the station logo area of the current frame image: for a background pixel, filling it with the information of the corresponding pixels in the L+K global reference images; for a foreground pixel, estimating L+K local reference images of the current frame image from the preceding L frames and the following K frames by using a local estimation method, and filling it with the information of the corresponding pixels in the L+K local reference images;
24) determining whether the station logo area of the current frame image has been completely filled; if yes, proceeding to step 26); if not, proceeding to step 25);
25) filling the remaining unfilled regions of the logo area by using the spatial correlation of the current frame image;
26) ending the inpainting of the current frame image.
Compared with the prior art, the beneficial effects of the present invention are as follows. The content supplementing method for the station logo area uses global motion estimation to estimate the current frame image from the L+K temporally neighboring frame images before and after it, and uses the pixel information in those neighborhood images to inpaint the foreground points and background points of the logo area in the current frame separately. If any region remains unfilled after the temporal-correlation inpainting, spatial correlation is used to complete the filling. Because foreground and background points are treated separately during inpainting, the large amount of temporal redundancy present in the video frame sequence is fully exploited, which effectively improves the inpainting accuracy of the logo area. For each frame image, the logo area is filled mainly by performing motion analysis on the video sequence and using the known image information in the neighboring frames, so the number of iterations in the inpainting process is small and the computational complexity remains low.
【Brief Description of the Drawings】
FIG. 1 is a flowchart of the method for supplementing the content of the station logo area in a video according to an embodiment of the present invention; FIG. 2 is a schematic diagram of the process of constructing a neighborhood block set in the method of the embodiment, taking the case where L and K are both 3 as an example.
【Detailed Description of the Embodiments】
The present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The idea of the present invention is as follows. The temporal correlation between the frames of a video image sequence is studied by means of global motion estimation and local motion estimation, and this temporal correlation is used preferentially to inpaint the station logo area in the current image frame; only when regions remain unfilled after the temporal-correlation inpainting is the spatial correlation of the current image used. When the temporal information is used, global motion estimation is first performed on the L+K neighboring frame images so that each neighboring frame is aligned with the current frame, yielding L+K global reference images. The spatio-temporal neighborhood data of each pixel to be filled in the logo area is then analyzed, and the foreground points and background points inside the logo area of the current frame are marked. Finally, an unknown background pixel is filled with the information of the globally motion-compensated, aligned neighboring frames; an unknown foreground pixel is filled after local motion compensation, again combined with the information of the neighboring frames.
FIG. 1 shows the flowchart of the content supplementing method for the station logo area in a video according to this embodiment. The input video processed by the method has a visible station logo area, and the data format of the video is not limited. The method includes the following steps:
U1): Detect the station logo area in the video and mark the logo area.
In this step, any of several methods for detecting the station logo area of a video may be used; for example, detection by an algorithm or manual marking by the user, after which the logo area is marked. Automatic detection may use, but is not limited to, the frame-difference-based logo detection and extraction algorithm proposed by Katrin Meisinger et al. For manual marking, input parameters that identify the logo area can be prepared in advance, or a user interface can be provided so that the user marks the logo area; detection then simply consists of receiving the area marked by the user. Once detected, the logo area is identified; the simplest way is to set all pixels inside the logo area uniformly to 0 (rendered black) or uniformly to 255 (rendered white). Other identification schemes may also be used and are not limited to these two.
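Purely for illustration, where the user marks the logo manually, the 0/255 identification described above amounts to building a binary mask. The following is a minimal sketch in Python/NumPy; the rectangle coordinates are made-up example values, not taken from the patent.

```python
import numpy as np

def make_logo_mask(frame_height, frame_width, x0, y0, x1, y1):
    """Binary mask of the station logo area: 255 inside the marked logo, 0 elsewhere."""
    mask = np.zeros((frame_height, frame_width), dtype=np.uint8)
    mask[y0:y1, x0:x1] = 255   # user-marked rectangle covering the logo
    return mask

# Example: a logo near the top-left corner of a 1080p frame (illustrative coordinates).
logo_mask = make_logo_mask(1080, 1920, x0=60, y0=40, x1=300, y1=120)
```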
U2): Perform the following inpainting operations on each frame image in the video, specifically steps U21)-U26) in FIG. 1.
U21) Estimate the L+K global reference images of the current frame image from the preceding L frames and the following K frames by using a global motion estimation method, where L and K are set by the user according to the required inpainting accuracy.
Global motion is the motion in a video image sequence that is caused by the motion of the camera. Global motion estimation estimates the spatial displacement between a pixel in the current frame and the corresponding pixel in the preceding or following neighboring frames, and is usually defined through a motion parameter model. In a global motion estimation method, once the motion parameter model has been defined, the current frame image can be estimated from a neighboring frame image to obtain an estimated image of the current frame, or a neighboring frame image can be estimated from the current frame image to obtain an estimated image of that neighboring frame. For example, let x_i denote the coordinates of a pixel in the video frame at time i, let x_j denote the coordinates of the corresponding pixel in the video frame at time j, and let H_{j→i} denote the motion parameter model required to align the frame at time j with the frame at time i; the operation of estimating the image of the frame at time i from the frame at time j is then x_i = H_{j→i} · x_j, i.e., the frame at time j is warped onto (aligned with) the frame at time i.
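As a rough sketch of step U21), the following aligns one neighboring frame to the current frame using ORB feature matching and a partial affine model (OpenCV assumed). The patent does not prescribe a particular motion parameter model or estimator, so this is only one plausible stand-in for H_{j→i}, and it assumes the frames contain enough texture for feature matching to succeed.

```python
import cv2
import numpy as np

def align_to_current(current_gray, neighbor_gray):
    """Warp a neighboring frame onto the current frame, giving one global reference image."""
    orb = cv2.ORB_create(1000)
    kp_n, des_n = orb.detectAndCompute(neighbor_gray, None)
    kp_c, des_c = orb.detectAndCompute(current_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_n, des_c)
    src = np.float32([kp_n[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_c[m.trainIdx].pt for m in matches])
    # 2x3 motion model playing the role of H_{j->i} (similarity transform + RANSAC).
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = current_gray.shape
    return cv2.warpAffine(neighbor_gray, M, (w, h))
```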
In this step, the user sets the values of L and K; for example, with L and K both equal to 3, the current frame image is estimated from the three frames before and the three frames after it, yielding six global reference images. If the inpainting accuracy later turns out to be unsatisfactory, the process can return here and set L and K to larger values, thereby improving the accuracy. L and K may of course take other values; L may be 0 so that only the information of the following K frames is used, or K may be 0 so that only the information of the preceding L frames is used. The particular combination of values is not limited, as long as information from neighboring frames is available. Selecting the time window of L preceding and K following frames and aligning the video frames within this window prepares for the use of the temporal correlation of the video sequence to inpaint the logo area in the subsequent steps.
U22) Mark the foreground and background of the station logo area in the current frame image.
In this step, an ordinary foreground-background marking method may be applied to the logo area of the current frame image. Preferably, the information of the L+K global reference images obtained in step U21) is used for the marking; because these reference images, obtained by global motion estimation, reflect the temporal correlation, the accuracy of the foreground-background marking, and hence of the subsequent inpainting, is improved. Specifically, the marking comprises steps U221)-U223): U221) For each pixel I(x, y) in the station logo area of the current frame image, construct a neighborhood block set ψ of the pixel, where ψ consists of L+K+1 sub-blocks ψ0 taken respectively from the current frame image and the L+K global reference images, with ψ0 = {I(i, j) | i ∈ (x-W, x+W), j ∈ (y-W, y+W)}; W is the spatial window size, set by the user according to the required inpainting accuracy and the acceptable computational load.
In this step, as shown in FIG. 2, the case where L and K are both 3 illustrates the construction of the neighborhood block set ψ: six sub-blocks are taken from the six global reference images R1-R6 and one sub-block is taken from the actual current frame image C, and the pixels of these seven sub-blocks together form the set ψ. Each sub-block is a square of side 2W centered at the pixel I(x, y).
U222) Compute the variance of the pixels in the neighborhood block set ψ of each pixel.
As described above, the set ψ consists of the pixels of L+K+1 sub-blocks, so this step computes the variance of the pixel values of the pixels in that set.
U223) Set a threshold and compare it with the variance obtained for each pixel in step U222); if the variance is smaller than the threshold, mark the pixel as background; if the variance is larger than the threshold, mark the pixel as foreground.
Because the result reflects the temporal information produced by the global motion estimation in step U21), a pixel belonging to the video background has a small motion displacement, so the variance of the pixel values inside its neighborhood block set ψ is relatively small; conversely, a pixel belonging to the video foreground undergoes local motion with a larger displacement, so the variance inside its neighborhood block set ψ is relatively large. In this step, a threshold is therefore set by the user, for example as an empirical value chosen according to the required inpainting accuracy. Comparing this threshold with the variance of the neighborhood block set ψ of each pixel distinguishes foreground pixels from background pixels and completes the foreground-background marking of the logo area of the current video frame.
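A minimal sketch of steps U221)-U223), assuming the current frame and its L+K global reference images are already available as grayscale arrays; the window half-size W and the threshold are illustrative values the user would tune, and the window is clipped at the image border for simplicity.

```python
import numpy as np

def mark_foreground(current, global_refs, logo_mask, W=4, thresh=100.0):
    """Return a boolean map that is True for logo pixels marked as foreground.

    current     : HxW float32 grayscale frame
    global_refs : list of L+K aligned global reference images (same shape)
    logo_mask   : HxW bool array, True inside the station logo area
    """
    stack = np.stack([current] + list(global_refs))        # the L+K+1 images
    height, width = current.shape
    foreground = np.zeros_like(logo_mask, dtype=bool)
    for y, x in zip(*np.nonzero(logo_mask)):
        y0, y1 = max(y - W, 0), min(y + W + 1, height)
        x0, x1 = max(x - W, 0), min(x + W + 1, width)
        block_set = stack[:, y0:y1, x0:x1]                  # neighborhood block set psi
        # Large variance across the aligned blocks indicates local (foreground) motion.
        foreground[y, x] = block_set.var() > thresh
    return foreground
```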
U23) Inpaint the station logo area of the current frame image. Specifically: for a background pixel, fill it with the information of the corresponding pixels in the L+K global reference images; for a foreground pixel, estimate L+K local reference images of the current frame image from the preceding L frames and the following K frames by using a local estimation method, and fill it with the information of the corresponding pixels in the L+K local reference images.
In this step, the pixels of the logo area are filled separately according to whether they are background or foreground. A background pixel of the logo area in the current frame image is filled directly with the information of the already-aligned neighboring images, i.e., the information of the global reference images. A foreground pixel of the logo area is filled by introducing local motion compensation, using the information of the local reference images obtained after local motion estimation. Since the preceding L frames and the following K frames form the time window, L+K reference images are available within the window.
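The patent leaves the local estimation method open. As one possible realization, the sketch below builds a single local reference image by dense Farneback optical flow (OpenCV assumed; the flow parameters are illustrative); applying it to each of the L+K neighboring frames would yield the L+K local reference images used for the foreground pixels.

```python
import cv2
import numpy as np

def local_reference(current_gray, neighbor_gray):
    """Warp a neighboring frame onto the current frame using dense optical flow.

    Both inputs are 8-bit single-channel frames. The result is one local
    reference image in which locally moving (foreground) content is aligned
    with the current frame's coordinates.
    """
    flow = cv2.calcOpticalFlowFarneback(current_gray, neighbor_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    h, w = current_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Sample the neighbor at the flow-displaced positions -> aligned local reference.
    return cv2.remap(neighbor_gray, map_x, map_y, cv2.INTER_LINEAR)
```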
When background pixels are filled from the L+K global reference images, or foreground pixels from the L+K local reference images, several filling strategies are possible; two of them are listed below. One strategy is to take the average of the pixel values of the corresponding pixels in the L+K reference images as the value of the pixel to be filled: for example, to fill the pixel I(x, y) of the current frame image, the corresponding pixels I(x, y) are taken from the L+K reference images and the average of these L+K pixel values is used as the value of pixel I(x, y) in the current frame.
The other strategy is to first compute the error between each of the L+K reference images and the current frame image, and to use the pixel value of the corresponding pixel in the reference image with the smallest error as the value of the pixel to be filled. The error can be computed in several ways. One way is to take N pixels around the pixel I(x, y) to be filled in the current frame image, denoted I_k(x, y), k = 1, 2, ..., N, and the corresponding pixels in the reference image, denoted I'_k(x, y), k = 1, 2, ..., N, and to use the square root of the sum of the squared differences of the corresponding pixel values, E = sqrt( Σ_{k=1..N} ( I_k(x, y) - I'_k(x, y) )^2 ), as the error value of that reference image. Other definitions of the error value may of course be chosen; the above is only an example. The error values of the L+K reference images are computed in turn and compared to find the reference image with the smallest error, and the information of that minimum-error neighboring image is used preferentially to fill the pixel in the current frame image.
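The two strategies just described can be sketched as follows for a single pixel to be filled; `refs` holds the L+K reference images (global or local) already aligned to the current frame, and `neighborhood` is a list of (dy, dx) offsets of the N known pixels around the target used in the square-root-of-SSD error above. This is a minimal illustration, not the patent's exact implementation.

```python
import numpy as np

def fill_by_average(refs, y, x):
    """Strategy 1: average the co-located pixel values over the L+K reference images."""
    return float(np.mean([r[y, x] for r in refs]))

def fill_by_min_error(current, refs, y, x, neighborhood):
    """Strategy 2: copy from the reference image whose local error is smallest."""
    def error(ref):
        diffs = [float(current[y + dy, x + dx]) - float(ref[y + dy, x + dx])
                 for dy, dx in neighborhood]
        return np.sqrt(np.sum(np.square(diffs)))   # E = sqrt(sum_k (I_k - I'_k)^2)
    best_ref = min(refs, key=error)
    return best_ref[y, x]
```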
U24) Determine whether the station logo area of the current frame image has been completely filled; if yes, proceed to step U26); if not, proceed to step U25).
U25) Fill the remaining unfilled regions of the logo area by using the spatial correlation of the current frame image. U26) End the inpainting of the current frame image. If step U24) determines that all the content of the logo area has been filled, the process proceeds to step U26), the inpainting of the current frame image ends, and the inpainting of the next frame image begins. The filling in step U23), however, relies mainly on the temporal correlation between video frames; if the motion in the video is very slow, some pixels of the logo area may not find corresponding pixel information in the neighboring video frames. In that case step U24) determines that unfilled pixels remain, and the process enters step U25) for supplementary filling, which relies mainly on the spatial correlation of the current frame image.
When spatial correlation is used for the filling, static image inpainting algorithms are commonly employed, including digital inpainting algorithms based on partial differential equations and exemplar-based block matching algorithms. When exemplar-based block matching is used for the spatial compensation, it specifically includes the following steps (a minimal code sketch of this spatial fallback follows the steps below):
251) Determine the remaining unfilled region.
252) Extract the boundary of the region to be filled.
253) For each pixel on the boundary, compute its block matching priority, and select the pixel with the largest priority value as the center of the current block to be filled.
254) Search the already-filled region for the matching block with the smallest distance to (i.e., most similar to) the current block to be filled.
255) Fill each pixel of the current block with the information of the corresponding pixel in the matching block: pixel I1 in the current block is filled with the information of the corresponding pixel I1' in the matching block, pixel I2 with the information of the corresponding pixel I2', and so on, in one-to-one correspondence.
256) Return to step 251), and end when no unfilled region remains.
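As the sketch promised above, the spatial fallback of step U25) can be prototyped with OpenCV's built-in inpainting. Note that cv2.inpaint implements the PDE/fast-marching family of static image repair algorithms mentioned in the text, not the exemplar-based block matching of steps 251)-256); it is only a convenient stand-in for experimentation.

```python
import cv2

def spatial_fill(frame_bgr, unfilled_mask):
    """Fill the pixels of the logo area that temporal inpainting could not reach.

    frame_bgr     : HxWx3 uint8 frame after steps U21)-U23)
    unfilled_mask : HxW uint8 mask, non-zero where the logo area is still unfilled
    """
    # INPAINT_TELEA (fast marching) or INPAINT_NS (Navier-Stokes PDE); radius 3 is illustrative.
    return cv2.inpaint(frame_bgr, unfilled_mask, 3, cv2.INPAINT_TELEA)
```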
After the above steps U21)-U26), the inpainting of the logo area in the current frame image is complete. Repeating steps U21)-U26) for every frame image in the video completes the reconstruction task of inpainting the logo area after logo removal.
In this embodiment, the content of the station logo area in the video is inpainted separately for background and foreground, and the temporal information carried by the neighboring frame images is fully exploited, so that the large amount of temporal redundancy is fully used in the inpainting process; the inpainting accuracy of the logo area is thereby effectively improved and the logo area is reconstructed with high precision. When the temporal information cannot fill the area completely, spatial correlation is used as a supplement, ensuring that the entire logo area is filled. In addition, for each frame image, the logo area in the current frame is filled mainly by performing motion analysis on the video sequence and using the known image information in the neighboring frames, so the number of iterations in the inpainting process is small, ensuring that no significant computational complexity is added.
Preferably, the method of this embodiment further includes a step U3) (not shown in FIG. 1): integrating all the inpainted frame images in chronological order and outputting the video obtained after the logo area has been inpainted. That is, through step U3), the image frames are assembled into a video, yielding the video in which the station logo has been removed and the logo area has been reconstructed by inpainting.
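Step U3) can be sketched with OpenCV's video writer; the codec, frame rate, and output file name below are illustrative assumptions rather than anything specified in the patent.

```python
import cv2

def write_video(frames_bgr, path="logo_removed.mp4", fps=25.0):
    """Integrate the inpainted frames, in time order, into the output video."""
    h, w = frames_bgr[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames_bgr:
        writer.write(frame)   # frames_bgr is assumed already in chronological order
    writer.release()
```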
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention should not be regarded as being limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, any substitutions or obvious variations made without departing from the concept of the present invention, with the same performance or use, shall be regarded as falling within the protection scope of the present invention.

Claims

Claims
1. A method for supplementing the content of a station logo area in a video, characterized by comprising the following steps:
1) detecting the station logo area in the video and marking the station logo area;
2) performing the following inpainting operations on each frame image in the video: 21) estimating, from the preceding L frames and the following K frames, L+K global reference images of the current frame image by using a global motion estimation method, wherein L and K are both integers greater than or equal to 0, L and K are not both 0, and the specific values are set by the user according to the required inpainting accuracy;
22) marking the foreground and background of the station logo area in the current frame image;
23) inpainting the station logo area of the current frame image: for a background pixel, filling it with the information of the corresponding pixels in the L+K global reference images; for a foreground pixel, estimating L+K local reference images of the current frame image from the preceding L frames and the following K frames by using a local estimation method, and filling it with the information of the corresponding pixels in the L+K local reference images;
24) determining whether the station logo area of the current frame image has been completely filled; if yes, proceeding to step 26); if not, proceeding to step 25);
25) filling the remaining unfilled regions of the station logo area by using the spatial correlation of the current frame image;
26) ending the inpainting of the current frame image.
2. The method for supplementing the content of a station logo area in a video according to claim 1, characterized in that in step 22) the information of the L+K global reference images is used to mark the foreground and background of the station logo area in the current frame image.
3. The method for supplementing the content of a station logo area in a video according to claim 2, characterized in that step 22) specifically comprises: 221) for each pixel I(x, y) in the station logo area of the current frame image, constructing a neighborhood block set ψ of the pixel, the neighborhood block set ψ comprising L+K+1 sub-blocks ψ0 taken respectively from the current frame image and the L+K global reference images, with ψ0 = {I(i, j) | i ∈ (x-W, x+W), j ∈ (y-W, y+W)}, where W is the spatial window size, set by the user according to the required inpainting accuracy and computational load; 222) computing the variance of the pixels in the neighborhood block set ψ of each pixel; 223) setting a threshold and comparing the threshold with the variance obtained for each pixel in step 222); if the variance is smaller than the threshold, marking the pixel as background; if the variance is larger than the threshold, marking the pixel as foreground.
4. The method for supplementing the content of a station logo area in a video according to claim 1, characterized in that in step 23), when a pixel is filled with the information of the corresponding pixels in the L+K global reference images or the L+K local reference images, the average of the pixel values of the corresponding pixels in the L+K reference images is used as the pixel value of the pixel to be filled.
5. The method for supplementing the content of a station logo area in a video according to claim 1, characterized in that in step 23), when a pixel is filled with the information of the corresponding pixels in the L+K global reference images or the L+K local reference images, the errors between the L+K reference images and the current frame image are first computed, and the pixel value of the corresponding pixel in the reference image with the smallest error is used as the pixel value of the pixel to be filled.
6. The method for supplementing the content of a station logo area in a video according to claim 1, characterized in that a static image inpainting algorithm is used for the filling in step 25).
7. The method for supplementing the content of a station logo area in a video according to claim 1, characterized in that the static image inpainting algorithm comprises a digital inpainting algorithm based on partial differential equations and an exemplar-based block matching algorithm.
8. The method for supplementing the content of a station logo area in a video according to claim 7, characterized in that when the exemplar-based block matching algorithm is adopted in step 25), it specifically comprises the following steps: 251) determining the remaining unfilled region; 252) extracting the boundary of the region to be filled; 253) for each pixel on the boundary, computing its block matching priority and selecting the pixel with the largest priority value as the center of the current block to be filled; 254) searching the already-filled region for the matching block with the smallest distance to the current block to be filled; 255) filling each pixel of the current block to be filled with the information of the corresponding pixel in the matching block; 256) returning to step 251), and ending when no unfilled region remains.
9. The method for supplementing the content of a station logo area in a video according to claim 1, characterized by further comprising step 3): integrating all the inpainted frame images in chronological order and outputting the video obtained after the station logo area has been inpainted.
PCT/CN2013/090978 2013-12-09 2013-12-30 Method for supplementarily drawing content at station logo region in video WO2015085637A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310665267.8A CN103618905A (en) 2013-12-09 2013-12-09 Content drawing method for station caption area in video
CN201310665267.8 2013-12-09

Publications (1)

Publication Number Publication Date
WO2015085637A1 true WO2015085637A1 (en) 2015-06-18

Family

ID=50169609

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/090978 WO2015085637A1 (en) 2013-12-09 2013-12-30 Method for supplementarily drawing content at station logo region in video

Country Status (2)

Country Link
CN (1) CN103618905A (en)
WO (1) WO2015085637A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898322A (en) * 2015-07-24 2016-08-24 乐视云计算有限公司 Video watermark removing method and device
CN105025361B (en) * 2015-07-29 2018-07-17 西安交通大学 A kind of real-time station symbol removing method
CN105376462B (en) * 2015-11-10 2018-05-25 清华大学深圳研究生院 A kind of content benefit of Polluted area in video paints method
CN105678685B (en) * 2015-12-29 2019-02-22 小米科技有限责任公司 Image processing method and device
CN108470326B (en) * 2018-03-27 2022-01-11 北京小米移动软件有限公司 Image completion method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010040640A1 (en) * 2000-01-17 2001-11-15 Kim Myung Ja Television caption data and method for composing television caption data
CN101739561A (en) * 2008-11-11 2010-06-16 中国科学院计算技术研究所 TV station logo training method and identification method
CN101917644A (en) * 2010-08-17 2010-12-15 李典 Television, system and method for accounting audience rating of television programs thereof
CN101950366A (en) * 2010-09-10 2011-01-19 北京大学 Method for detecting and identifying station logo
CN102436575A (en) * 2011-09-22 2012-05-02 Tcl集团股份有限公司 Method for automatically detecting and classifying station captions
CN102982350A (en) * 2012-11-13 2013-03-20 上海交通大学 Station caption detection method based on color and gradient histograms
CN103258187A (en) * 2013-04-16 2013-08-21 华中科技大学 Television station caption identification method based on HOG characteristics

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635047A (en) * 2009-03-25 2010-01-27 湖南大学 Texture synthesis and image repair method based on wavelet transformation
CN102496145A (en) * 2011-11-16 2012-06-13 湖南大学 Video repairing method based on moving periodicity analysis
CN103051893B (en) * 2012-10-18 2015-05-13 北京航空航天大学 Dynamic background video object extraction based on pentagonal search and five-frame background alignment
CN103336954B (en) * 2013-07-08 2016-09-07 北京捷成世纪科技股份有限公司 A kind of TV station symbol recognition method and apparatus in video

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010040640A1 (en) * 2000-01-17 2001-11-15 Kim Myung Ja Television caption data and method for composing television caption data
CN101739561A (en) * 2008-11-11 2010-06-16 中国科学院计算技术研究所 TV station logo training method and identification method
CN101917644A (en) * 2010-08-17 2010-12-15 李典 Television, system and method for accounting audience rating of television programs thereof
CN101950366A (en) * 2010-09-10 2011-01-19 北京大学 Method for detecting and identifying station logo
CN102436575A (en) * 2011-09-22 2012-05-02 Tcl集团股份有限公司 Method for automatically detecting and classifying station captions
CN102982350A (en) * 2012-11-13 2013-03-20 上海交通大学 Station caption detection method based on color and gradient histograms
CN103258187A (en) * 2013-04-16 2013-08-21 华中科技大学 Television station caption identification method based on HOG characteristics

Also Published As

Publication number Publication date
CN103618905A (en) 2014-03-05

Similar Documents

Publication Publication Date Title
WO2015085637A1 (en) Method for supplementarily drawing content at station logo region in video
CN102113015B (en) Use of inpainting techniques for image correction
KR101281961B1 (en) Method and apparatus for editing depth video
CN103051908B (en) Disparity map-based hole filling device
EP2180695B1 (en) Apparatus and method for improving frame rate using motion trajectory
EP2811423A1 (en) Method and apparatus for detecting target
CN104081765B (en) Image processing apparatus and image processing method thereof
KR100953076B1 (en) Multi-view matching method and device using foreground/background separation
CN103514582A (en) Visual saliency-based image deblurring method
CN103996174A (en) Method for performing hole repair on Kinect depth images
WO2010083713A1 (en) Method and device for disparity computation
WO2010073177A1 (en) Image processing
CN104378619B (en) A kind of hole-filling algorithm rapidly and efficiently based on front and back's scape gradient transition
CN111127376B (en) Digital video file repairing method and device
CN104065946A (en) Cavity filling method based on image sequence
JP2008113446A5 (en)
CN102542541B (en) Deep image post-processing method
Pushpalwar et al. Image inpainting approaches-a review
CN112233049A (en) Image fusion method for improving image definition
CN105791795B (en) Stereoscopic image processing method, device and Stereoscopic Video Presentation equipment
CN104952089B (en) A kind of image processing method and system
CN104778673A (en) Improved depth image enhancing algorithm based on Gaussian mixed model
CN108805841B (en) Depth map recovery and viewpoint synthesis optimization method based on color map guide
KR102248352B1 (en) Method and device for removing objects in video
CN103729657B (en) Method and device for constructing station caption sample library and method and device for identifying station caption

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13899299

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13899299

Country of ref document: EP

Kind code of ref document: A1