WO2015085637A1 - Method for complementing the content of a station logo area in a video - Google Patents

Method for complementing the content of a station logo area in a video Download PDF

Info

Publication number
WO2015085637A1
WO2015085637A1 (application PCT/CN2013/090978)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
video
logo area
frame image
Prior art date
Application number
PCT/CN2013/090978
Other languages
English (en)
Chinese (zh)
Inventor
张磊
肖煜东
索津莉
张永兵
戴琼海
Original Assignee
清华大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学深圳研究生院 filed Critical 清华大学深圳研究生院
Publication of WO2015085637A1

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation

Definitions

  • The present invention relates to the field of computer video processing, and in particular to a method for complementing the station logo area in a video.
  • The current common approaches either apply mosaic processing to the logo area or blur it with an image-processing tool, but both give poor results: the artificial processing traces seriously degrade the video quality and the viewing experience.
  • The technical problem to be solved by the present invention is to remedy the deficiencies of the above prior art by proposing a content complementing method for the station logo area in a video that effectively improves completion accuracy while keeping computational complexity low.
  • The technical problem of the present invention is solved by the following technical solution: a method for complementing the content of a station logo area in a video, comprising the following steps: 1) detecting the station logo area in the video and marking it; 2) performing the following complementing operations on each frame image of the video:
  • 21) using a global motion estimation method to estimate L+K global reference images of the current frame image from the preceding L frames and the following K frames, where L and K are both integers greater than or equal to 0, not both 0, with the specific values set by the user according to the completion accuracy requirement;
  • 22) marking the foreground and background of the logo area in the current frame image;
  • 23) complementing the logo area of the current frame image: for background pixels, using the information of the corresponding pixels in the global reference images; for foreground pixels, using a local estimation method based on the preceding L frames and the following K frames to estimate L+K local reference images of the current frame image, and filling the foreground pixels with the information of the corresponding pixels in those L+K local reference images;
  • 24) determining whether the logo area of the current frame image is completely filled; if yes, proceeding to step 26), otherwise proceeding to step 25);
  • 25) using the spatial correlation of the current frame image to complement the remaining unfilled regions in the logo area;
  • 26) ending the complementing of the current frame image.
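Steps 21)–26) can be sketched as a high-level pipeline. The function and parameter names below are illustrative, not taken from the patent, and the four sub-steps are passed in as stand-in callables:

```python
def complement_video(frames, logo_mask, L, K,
                     estimate_global_refs, mark_foreground,
                     fill_from_refs, fill_spatially):
    """Complement the logo area of every frame (steps 21-26).

    `frames` is the frame sequence and `logo_mask` marks the logo area.
    The four callables stand in for the sub-steps described in the text.
    """
    out = []
    for t, frame in enumerate(frames):
        refs = estimate_global_refs(frames, t, L, K)   # 21) align L earlier + K later frames
        fg = mark_foreground(frame, refs, logo_mask)   # 22) foreground/background marking
        frame, remaining = fill_from_refs(frame, refs, logo_mask, fg)  # 23) temporal fill
        if remaining:                                  # 24) holes left after temporal fill?
            frame = fill_spatially(frame, remaining)   # 25) spatial-correlation fallback
        out.append(frame)                              # 26) frame finished
    return out
```

Here `remaining` is whatever collection of still-unfilled pixels the temporal fill reports; when it is empty, step 25) is skipped.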
  • The beneficial effects of the present invention compared with the prior art are:
  • The content complementing method of the present invention uses global motion estimation to estimate the current frame image from the L+K neighbouring frame images before and after it in time, and uses the pixel information in those neighbourhood images to distinguish the foreground points of the logo area in the current frame from the background points.
  • If an unfilled region remains after completion based on temporal correlation, spatial correlation is then used to finish the completion.
  • The foreground and background points are complemented separately, making full use of the large amount of redundant information present in the video frame sequence, which effectively improves the completion accuracy of the video logo area.
  • Because known image information from the neighbouring frames is used to fill the logo area of the current frame, few iterations are needed during completion, ensuring low computational complexity.
  • FIG. 1 is a flowchart of the method for complementing the content of a station logo area in a video according to an embodiment of the present invention.
  • The idea of the present invention is to study the temporal correlation between video image frames through global motion estimation and local motion estimation, and to preferentially use the temporal correlation of the video sequence to complement the logo area of the current frame; when the temporal information is insufficient, the spatial correlation of the current frame image is used for completion.
  • Using the temporal information, global motion estimation is first performed on the adjacent L+K frame images, and the L+K neighbourhood images are aligned with the current frame image to obtain L+K global reference images. The spatio-temporal neighbourhood data of each pixel to be complemented in the logo area is then analysed, and the foreground and background points in the logo area of the current frame are marked.
  • For background pixels, the information in the globally motion-compensated, aligned image frames is used directly for completion; for unknown foreground pixels, local motion compensation is applied first and then combined with the information in the neighbourhood frames to perform the completion.
  • The input video processed by the complementing method need only have a visible logo area; the video data format is not limited.
  • The complementing method includes the following steps:
  • Step U1) Detect the station logo area in the video and mark it. Any of several detection methods may be chosen, including automatic algorithmic detection and manual marking by the user.
  • Automatic detection can use, but is not limited to, the frame-difference-based logo detection and extraction algorithm proposed by Katrin Meisinger et al.
  • Alternatively, input parameters indicating the logo area can be prepared in advance, or a user interface can be provided so that the user marks the logo area; the logo area marked by the user is then received directly to complete the detection.
  • Once detected, the logo area can be identified. The simplest way is to mark the pixels in the logo area uniformly as 0, rendering them black, or uniformly as 255, rendering them white. Other identification schemes are also possible and are not limited to these two.
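The uniform-value identification just described can be expressed as a minimal NumPy sketch (the function name and the boolean-mask convention are assumptions for illustration):

```python
import numpy as np

def mark_logo_area(frame, logo_mask, value=0):
    """Mark the detected logo area with a uniform value, as described in
    the text: 0 renders the area black, 255 renders it white.
    `logo_mask` is a boolean array, True inside the station-logo region."""
    marked = frame.copy()
    marked[logo_mask] = value
    return marked
```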
  • Step U2) Perform the following complementing operations on each frame image in the video, specifically steps U21)–U26) in Figure 1.
  • Step U21) Use the global motion estimation method to estimate the L+K global reference images of the current frame image; L and K are integers greater than or equal to 0 (not both 0), set by the user according to the accuracy requirements.
  • Global motion refers to motion in a video image sequence caused by movement of the camera. Global motion estimation estimates the spatial displacement between the pixels in the current frame and the corresponding pixels in the neighbouring frames before and after it, usually described by a parametric motion model.
  • The current frame image may be estimated from a neighbouring frame image to obtain an estimate of the current frame; or, conversely, a neighbouring frame image may be estimated from the current frame image to obtain an estimate of that neighbouring frame.
  • Let x_i denote the coordinates of a pixel in the video image frame at time i, and x_j the coordinates of the corresponding pixel in the frame at time j. Denote by T(j→i) the operation that estimates the image frame at time i from the image frame at time j, that is, the warp that aligns the frame at time j to the frame at time i.
  • In this embodiment, L and K are both 3; that is, the current frame image is estimated from the three frames before it and the three frames after it, yielding six global reference images. If the completion accuracy is found unsatisfactory, L and K can be reset to larger values, improving the precision of the fill.
  • L and K can also take other values: L can be 0, using only the information of the following K frames, or K can be 0, using only the information of the preceding L frames. The specific combination is not limited, as long as information from at least one adjacent frame is available.
  • Selecting a time window of L frames before and K frames after the current frame, and aligning the video frames within that window, prepares for the use of the inter-frame correlation of the video sequence in the subsequent steps.
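Full global motion estimation fits a parametric model (e.g. affine or projective) to the camera motion. As a deliberately simplified stand-in, the sketch below aligns a neighbour frame to the current frame with a purely translational search, minimising the sum of squared differences over candidate integer shifts; it illustrates the alignment idea, not the patent's estimator:

```python
import numpy as np

def align_translational(neighbor, current, max_shift=4):
    """Return `neighbor` shifted so that it best matches `current`,
    searching integer shifts in [-max_shift, max_shift]^2 by SSD cost.
    A real implementation would fit an affine/projective motion model."""
    best, best_cost = neighbor, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(neighbor, dy, axis=0), dx, axis=1)
            cost = np.sum((shifted.astype(float) - current) ** 2)
            if cost < best_cost:
                best, best_cost = shifted, cost
    return best
```

Running this once per neighbour frame in the window yields the L+K aligned global reference images used by the later steps.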
  • Step U22) Mark the foreground and background of the logo area in the current frame image. Any standard foreground/background marking method can be used.
  • Preferably, the information of the L+K global reference images obtained in step U21) is used to perform the marking: because these reference images carry the temporal correlation information recovered by global motion estimation, they improve the accuracy of the foreground/background marking, which in turn improves the accuracy of the subsequent filling.
  • Step U221) For each pixel I(x, y) in the logo area, take a sub-block from the current frame image and from each of the L+K global reference images; each sub-block is a square of side 2W centred at the pixel I(x, y).
  • Step U222) Let the set Ω be the set of pixels of these L+K+1 sub-blocks; compute the variance of the pixel values of the pixels in Ω.
  • Step U223) Set a threshold and compare it with the variance computed for each pixel in step U222): if the variance is less than the threshold, mark the pixel as background; if the variance is greater than the threshold, mark the pixel as foreground.
  • The threshold is set by the user and can be chosen as an empirical value according to the completion accuracy requirement.
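Steps U221)–U223) can be sketched as follows. The function name and the default threshold are assumptions (the patent leaves the threshold as a user-set empirical value); the intuition is that a temporally stable (background) pixel has low variance across the aligned references, while moving (foreground) content has high variance:

```python
import numpy as np

def classify_pixel(current, refs, x, y, W=2, threshold=50.0):
    """Collect the 2W-side sub-block around pixel (x, y) from the current
    frame and every global reference image (steps U221/U222), compute the
    variance of all collected values, and threshold it (step U223):
    low variance -> 'background', high variance -> 'foreground'."""
    blocks = [img[max(0, y - W):y + W, max(0, x - W):x + W]
              for img in [current] + list(refs)]
    omega = np.concatenate([b.ravel() for b in blocks]).astype(float)
    return 'foreground' if omega.var() > threshold else 'background'
```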
  • Step U23) Complement the logo area of the current frame image: background pixels are filled with the information of the corresponding pixels in the L+K global reference images; for foreground pixels, a local estimation method based on the preceding L frames and the following K frames is used to estimate L+K local reference images of the current frame image, and the foreground pixels are filled with the information of the corresponding pixels in those local reference images.
  • The background and the foreground are thus complemented separately. For the background, the information in the already-aligned neighbourhood images, that is, the global reference images, is used directly for the fill.
  • For the foreground, local motion compensation is introduced, and the foreground pixels are filled according to the information of the local reference images obtained after local motion estimation. Since the preceding L frames and the following K frames are taken as the time window, all L+K images within the window can serve as reference images.
  • One possible strategy is to first compute the error between each of the L+K reference images and the current frame image, and use the pixel value of the corresponding pixel in the reference image with the smallest error as the value of the pixel to be filled.
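The smallest-error strategy can be sketched like this (function name and the mean-squared-error measure are assumptions; the error is computed on the known pixels outside the hole, since inside the hole the current frame carries no usable content):

```python
import numpy as np

def fill_from_best_reference(current, refs, hole_mask):
    """Pick the reference image whose error against the current frame
    (measured outside the hole) is smallest, and copy its values into
    the hole pixels, as in the strategy described in the text."""
    known = ~hole_mask
    errors = [np.mean((r[known].astype(float) - current[known]) ** 2)
              for r in refs]
    best = refs[int(np.argmin(errors))]
    out = current.copy()
    out[hole_mask] = best[hole_mask]
    return out
```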
  • Step U24) Judge whether the logo area of the current frame image is completely filled; if yes, proceed to step U26); if not, proceed to step U25).
  • Step U25) Fill the remaining unfilled regions in the logo area using the spatial correlation of the current frame image.
  • Step U26) End the completion of the current frame image. If step U24) judges that the content of the logo area has been fully complemented, processing proceeds to step U26) to end the completion of the current frame image, and the completion of the next frame image begins.
  • The filling process in step U23) mainly exploits the temporal correlation between video frames. If the motion in the video is very slow, some pixels in the logo area may not find corresponding pixel information in the adjacent video frames.
  • When step U24) finds pixels that have not yet been filled, the method proceeds to step U25) for supplementary filling, which mainly uses the spatial correlation of the current frame image.
  • When performing completion based on spatial correlation, an exemplar-based block matching algorithm can be used, specifically as follows:
  • The pixel I1 to be filled is filled with the information of the corresponding pixel I1' in the matching block, the pixel I2 in the current block is filled with the information of the corresponding pixel I2' in the matching block, and so on for the remaining pixels.
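A single-pass version of this exemplar-style block matching might look like the sketch below (a simplification under assumed conventions: greyscale image, one best match per hole pixel, candidates restricted to fully-known blocks; real exemplar-based inpainting also orders the fill by priority):

```python
import numpy as np

def spatial_block_fill(image, hole_mask, block=3):
    """For each hole pixel, take the block around it, scan fully-known
    candidate blocks in the image, pick the one whose known pixels match
    best (SSD), and copy its values into the hole positions."""
    out = image.astype(float).copy()
    h, w = image.shape
    ys, xs = np.nonzero(hole_mask)
    for y, x in zip(ys, xs):
        y0, x0 = max(0, y - block // 2), max(0, x - block // 2)
        patch = out[y0:y0 + block, x0:x0 + block]
        pmask = hole_mask[y0:y0 + block, x0:x0 + block]
        best, best_cost = None, np.inf
        for cy in range(h - block + 1):
            for cx in range(w - block + 1):
                if hole_mask[cy:cy + block, cx:cx + block].any() \
                        or patch.shape != (block, block):
                    continue  # candidate must be fully known
                cand = out[cy:cy + block, cx:cx + block]
                cost = np.sum(((cand - patch) ** 2)[~pmask])  # known pixels only
                if cost < best_cost:
                    best, best_cost = cand, cost
        if best is not None:
            patch[pmask] = best[pmask]  # copy I1' -> I1, I2' -> I2, ...
    return out
```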
  • After the above steps U21)–U26), the completion of the logo area in the current frame image is finished. Repeating steps U21)–U26) completes the reconstruction of every frame image in the video, accomplishing the task of reconstructing the content after the logo is removed.
  • Because the background and foreground of the logo area content are filled separately, the temporal information carried in the neighbouring frame images is fully exploited, so a large amount of redundant information is used during completion; the completion accuracy of the video logo area is thereby effectively improved, and the logo area is reconstructed with high precision.
  • Supplementary filling based on spatial correlation ensures that the entire logo area is complemented.
  • The small number of iterations during completion ensures that no excessive computational complexity is added.
  • The completion method of this embodiment further includes a step U3) (not illustrated in FIG. 1): integrating all the complemented image frames in time order to obtain the video with the logo area removed and reconstructed. That is, through step U3), the image frames are integrated into a video, yielding the video in which the station logo has been removed and the logo area content reconstructed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention concerns a method for complementing the content of a station logo area in a video. The method comprises the following steps: 1) detecting a station logo area in a video and marking the station logo area; and 2) performing the following complementing operations on each frame image of the video: 21) estimating, from L preceding frames and K following frames, L+K global reference images of the current frame image by means of a global motion estimation method; 22) marking a foreground and a background of the station logo area in the current frame image; 23) complementing the station logo area of the current frame image; 24) determining whether the station logo area of the current frame image is completely filled and, if so, proceeding to step 26), otherwise proceeding to step 25); 25) filling the remaining unfilled region in the station logo area by means of the spatial correlation of the current frame image; and 26) ending the completion of the current frame image. The method for complementing the content of a station logo area in a video according to the present invention can significantly improve the completion accuracy of the station logo area in the video by making full use of the temporal information between adjacent frames.
PCT/CN2013/090978 2013-12-09 2013-12-30 Method for complementing the content of a station logo area in a video WO2015085637A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310665267.8A CN103618905A (zh) 2013-12-09 2013-12-09 一种视频中的台标区域的内容补绘方法
CN201310665267.8 2013-12-09

Publications (1)

Publication Number Publication Date
WO2015085637A1 true WO2015085637A1 (fr) 2015-06-18

Family

ID=50169609

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/090978 WO2015085637A1 (fr) 2013-12-30 Method for complementing the content of a station logo area in a video

Country Status (2)

Country Link
CN (1) CN103618905A (fr)
WO (1) WO2015085637A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898322A (zh) * 2015-07-24 2016-08-24 乐视云计算有限公司 一种视频去水印方法及装置
CN105025361B (zh) * 2015-07-29 2018-07-17 西安交通大学 一种实时台标消除方法
CN105376462B (zh) * 2015-11-10 2018-05-25 清华大学深圳研究生院 一种视频中的污染区域的内容补绘方法
CN105678685B (zh) * 2015-12-29 2019-02-22 小米科技有限责任公司 图片处理方法及装置
CN108470326B (zh) * 2018-03-27 2022-01-11 北京小米移动软件有限公司 图像补全方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010040640A1 (en) * 2000-01-17 2001-11-15 Kim Myung Ja Television caption data and method for composing television caption data
CN101739561A (zh) * 2008-11-11 2010-06-16 中国科学院计算技术研究所 一种电视台标训练方法和识别方法
CN101917644A (zh) * 2010-08-17 2010-12-15 李典 电视台及其电视节目的收视率统计系统及其统计方法
CN101950366A (zh) * 2010-09-10 2011-01-19 北京大学 一种台标检测和识别的方法
CN102436575A (zh) * 2011-09-22 2012-05-02 Tcl集团股份有限公司 一种台标的自动检测和分类方法
CN102982350A (zh) * 2012-11-13 2013-03-20 上海交通大学 一种基于颜色和梯度直方图的台标检测方法
CN103258187A (zh) * 2013-04-16 2013-08-21 华中科技大学 一种基于hog特征的电视台标识别方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635047A (zh) * 2009-03-25 2010-01-27 湖南大学 基于小波变换的纹理合成图像修复方法
CN102496145A (zh) * 2011-11-16 2012-06-13 湖南大学 基于运动周期性分析的视频修复方法
CN103051893B (zh) * 2012-10-18 2015-05-13 北京航空航天大学 基于五边形搜索及五帧背景对齐的动背景视频对象提取
CN103336954B (zh) * 2013-07-08 2016-09-07 北京捷成世纪科技股份有限公司 一种视频中的台标识别方法和装置


Also Published As

Publication number Publication date
CN103618905A (zh) 2014-03-05

Similar Documents

Publication Publication Date Title
WO2015085637A1 (fr) Method for complementing the content of a station logo area in a video
CN102113015B (zh) 使用修补技术进行图像校正
KR101281961B1 (ko) 깊이 영상 편집 방법 및 장치
CN103051908B (zh) 一种基于视差图的空洞填充装置
EP2180695B1 (fr) Appareil et procédé pour améliorer la fréquence d'image en utilisant la trajectoire du mouvement
EP2811423A1 (fr) Procédé et appareil pour la détection de cible
CN104081765B (zh) 图像处理设备及其图像处理方法
KR100953076B1 (ko) 객체 또는 배경 분리를 이용한 다시점 정합 방법 및 장치
CN103514582A (zh) 基于视觉显著的图像去模糊方法
CN103996174A (zh) 一种对Kinect深度图像进行空洞修复的方法
WO2010083713A1 (fr) Procédé et dispositif de calcul de disparité
CN111127376B (zh) 一种数字视频文件修复方法及装置
WO2010073177A1 (fr) Traitement d'images
CN104378619B (zh) 一种基于前后景梯度过渡的快速高效的空洞填补算法
CN104065946A (zh) 基于图像序列的空洞填充方法
CN102542541B (zh) 深度图像后处理的方法
CN112233049A (zh) 一种用于提升图像清晰度的图像融合方法
CN105791795B (zh) 立体图像处理方法、装置以及立体视频显示设备
CN104952089B (zh) 一种图像处理方法及系统
CN104778673A (zh) 一种改进的高斯混合模型深度图像增强算法
CN108805841B (zh) 一种基于彩色图引导的深度图恢复及视点合成优化方法
KR102248352B1 (ko) 영상에서의 객체 제거 방법 및 그 장치
EP3127087B1 (fr) Estimation d'un champ de demouvement
Yang et al. RIFO: Restoring images with fence occlusions
US20130286289A1 (en) Image processing apparatus, image display apparatus, and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13899299

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13899299

Country of ref document: EP

Kind code of ref document: A1