CN103618905A - Content drawing method for station caption area in video - Google Patents
- Publication number
- CN103618905A CN103618905A CN201310665267.8A CN201310665267A CN103618905A CN 103618905 A CN103618905 A CN 103618905A CN 201310665267 A CN201310665267 A CN 201310665267A CN 103618905 A CN103618905 A CN 103618905A
- Authority
- CN
- China
- Prior art keywords
- pixel
- station symbol
- frame image
- symbol region
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
Abstract
The invention discloses a content inpainting method for the station logo region in a video. The method comprises the following steps: (1) detecting the station logo region in the video and marking it; and (2) performing the following inpainting operation on each frame image of the video: (21) estimating L+K global reference images of the current frame from the preceding L frames and following K frames using a global motion estimation method; (22) labeling the foreground and background of the station logo region in the current frame; (23) inpainting the station logo region of the current frame; (24) judging whether the station logo region of the current frame has been completely inpainted; if so, proceeding to step (26), otherwise to step (25); (25) inpainting the remaining unfilled area of the station logo region using the spatial correlation of the current frame; and (26) ending the inpainting of the current frame. The method makes full use of the temporal information between neighboring frame images, effectively improving the inpainting accuracy of the station logo region in the video.
Description
[technical field]
The present invention relates to the field of computer video processing, and in particular to a method for inpainting the content of a station logo region in a video.
[background technology]
As a statement of intellectual-property ownership, a station logo is carried by nearly all commercial television channels. When a new logo is overlaid on the original one, the logo also serves as a mark of authorized rebroadcast. At the same time, however, these logos reduce the viewing comfort of the video and complicate the exchange of video material. How to remove a station logo and accurately inpaint the logo region is therefore a key problem urgently to be solved.
To remove the station logo from a video and accurately inpaint the logo region, current practice is to apply mosaic processing to the region or to blur it with an image-editing tool. These methods produce poor inpainting results with obvious processing artifacts, severely degrading video quality and viewing experience.
[summary of the invention]
The technical problem to be solved by the present invention is to overcome the above deficiencies of the prior art by proposing a content inpainting method for the station logo region in a video that effectively improves the inpainting accuracy of the region while keeping computational complexity low.
The technical problem of the present invention is solved by the following technical scheme:
A content inpainting method for the station logo region in a video comprises the following steps: 1) detecting the station logo region in the video and marking it; 2) performing the following inpainting operation on each frame image of the video: 21) estimating L+K global reference images of the current frame from the preceding L frames and following K frames using a global motion estimation method, where L and K are integers greater than or equal to 0, not both 0, whose values are set by the user according to the required inpainting accuracy; 22) labeling the foreground and background of the station logo region in the current frame; 23) inpainting the station logo region of the current frame: each background pixel is filled using the information of the corresponding pixels in the L+K global reference images; for foreground pixels, L+K local reference images of the current frame are estimated from the preceding L frames and following K frames using a local motion estimation method, and each foreground pixel is filled using the information of the corresponding pixels in the L+K local reference images; 24) judging whether the station logo region of the current frame has been completely inpainted; if so, proceeding to step 26); if not, proceeding to step 25); 25) inpainting the remaining unfilled area of the station logo region using the spatial correlation of the current frame; 26) ending the inpainting of the current frame.
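The per-frame loop of steps 21)-26) can be sketched as follows. This is a minimal illustration under strong simplifying assumptions — no motion compensation is performed, every logo pixel is treated as background and filled with the first filling mode (the mean of co-located neighbor pixels), and the spatial fallback of step 25) is omitted — so it is not the patented method itself, only its control flow.

```python
import numpy as np

def inpaint_logo_video(frames, logo_mask, L=3, K=3):
    """Sketch of the per-frame procedure of steps 21)-26).  Hypothetical
    simplification: neighbour frames are used unaligned, and every logo
    pixel is filled with the mean of the co-located reference pixels."""
    out = []
    n = len(frames)
    for t in range(n):
        # 21) collect the preceding L and following K neighbour frames
        refs = [frames[j] for j in range(max(0, t - L), min(n, t + K + 1))
                if j != t]
        cur = frames[t].astype(float).copy()
        # 23) fill each logo pixel with the mean of the co-located pixels
        mean_ref = np.mean([r.astype(float) for r in refs], axis=0)
        cur[logo_mask] = mean_ref[logo_mask]
        out.append(cur)  # 26) done with this frame
    return out

# Illustrative data: three constant frames, one logo pixel at (0, 0).
frames = [np.full((4, 4), v) for v in (10.0, 99.0, 30.0)]
logo_mask = np.zeros((4, 4), dtype=bool)
logo_mask[0, 0] = True
filled = inpaint_logo_video(frames, logo_mask, L=1, K=1)
```

For the middle frame, the logo pixel becomes the mean of its two neighbors (here 20.0) while pixels outside the mask are untouched.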
Compared with the prior art, the beneficial effects of the present invention are:
The content inpainting method of the present invention uses global motion estimation to estimate the current frame from the L+K temporally neighboring frames, and fills the foreground points and background points of the station logo region separately using pixel information from those neighboring images. If any area remains unfilled after this temporal inpainting, spatial correlation is used to complete it. Because foreground and background are inpainted separately and the abundant temporal redundancy in the video frame sequence is fully exploited, the inpainting accuracy of the station logo region is effectively improved. Moreover, since each frame is inpainted mainly by motion analysis of the video sequence, using known image information in neighboring frames, few iterations are needed in the inpainting process and the computational complexity remains low.
[accompanying drawing explanation]
Fig. 1 is a flowchart of the content inpainting method for the station logo region in a video according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the construction of the neighborhood block set in the inpainting method of the embodiment, taking K=3 as an example.
[embodiment]
The present invention is described in further detail below in conjunction with embodiments and the accompanying drawings.
The idea of the present invention is as follows: the temporal correlation of the video image sequence is studied through global and local motion estimation, and is used preferentially to inpaint the station logo region of the current frame; only when unfilled areas remain after temporal inpainting is the spatial correlation of the current frame used. When inpainting with temporal information, global motion estimation is first applied to the L+K neighboring frames to align them with the current frame, yielding L+K global reference images. The spatio-temporal neighborhood data of each pixel to be filled in the logo region is then analyzed to label the foreground points and background points of the region in the current frame. Finally, each unknown background pixel is filled using information from the neighboring frames aligned by global motion compensation, while each unknown foreground pixel is filled using information from the neighboring frames after additional local motion compensation.
Fig. 1 shows the flowchart of the content inpainting method for the station logo region in a video in this embodiment. The input video has a visible station logo region, and the data format of the video is not limited. The method comprises the following steps:
U1) Detect the station logo region in the video, and mark the region.
In this step, any of several existing station logo detection methods may be used, including automatic algorithmic detection and manual selection by the user. Automatic detection may adopt, but is not limited to, the frame-difference-based logo detection and extraction algorithm proposed by Katrin Meisinger et al. For manual selection, input parameters indicating the logo region may be prepared in advance, or a user interface may be provided that lets the user mark the region; detection then directly receives the region chosen by the user. After detection, the logo region is identified. The simplest scheme is to uniformly set the pixels of the region to 0 (rendered black) or to 255 (rendered white); other identification schemes may also be adopted and the method is not limited to these two.
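The simplest identification scheme above — blanking the region's pixels to 0 — can be sketched as follows. The bounding-box format (x, y, w, h) is an assumption made for illustration; a real detector or user interface might supply the region in another form.

```python
import numpy as np

def mark_logo_region(frame, box):
    """Blank a user-chosen station-logo region to 0 (rendered black) and
    return the marked frame with the binary region mask.
    `box` = (x, y, w, h) is a hypothetical input format."""
    x, y, w, h = box
    mask = np.zeros(frame.shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = True
    marked = frame.copy()
    marked[mask] = 0  # uniform black identifies the logo region
    return marked, mask

# Illustrative grey frame with a 40x20 logo box at (10, 10).
frame = np.full((120, 160), 128, dtype=np.uint8)
marked, mask = mark_logo_region(frame, (10, 10, 40, 20))
```

The returned mask is what the subsequent per-frame inpainting steps would consume.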
U2) Perform the following inpainting operation on each frame image of the video, namely steps U21)-U26) in Fig. 1.
U21) Estimate the L+K global reference images of the current frame from the preceding L frames and following K frames using a global motion estimation method, where L and K are integers greater than or equal to 0, not both 0, set by the user according to the required inpainting accuracy.
Global motion is the motion in a video image sequence caused by camera movement. Global motion estimation estimates the spatial displacement between pixels of the current frame and the corresponding pixels in the preceding and following neighboring frames of the time series, and is usually defined by a motion parameter model. Once the model is defined, the current frame can be estimated from a neighboring frame, yielding an estimated image of the current frame; likewise a neighboring frame can be estimated from the current frame. For example, let x_i denote the coordinates of a pixel in the video frame at time i, let x_j denote the coordinates of a pixel in the frame at time j, and let H_{j→i} denote the motion parameter model required to align the frame at time j with the frame at time i. The estimate of the frame at time i obtained from the frame at time j is then given by x_i = H_{j→i}(x_j); that is, the frame at time j is aligned to the frame at time i.
In this step, after the user sets L and K (for example L = K = 3), the current frame is estimated from the 3 preceding and 3 following frames, giving 6 global reference images. If the resulting inpainting accuracy is found unsatisfactory, L and K may be reset to larger values to improve it. Other values are also possible: L may be 0, so that only the following K frames are used, or K may be 0, so that only the preceding L frames are used; the particular combination is not limited as long as neighboring frame information exists. Choosing the L+K-frame time window and aligning the video frames within it prepares for the subsequent steps, which exploit the temporal correlation of the video sequence to inpaint the logo region.
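A minimal, purely translational stand-in for the global motion estimation above can be sketched as follows. A real implementation would fit a richer parametric model H_{j→i} (e.g. affine or perspective), but an exhaustive search over integer shifts already shows how a neighboring frame is aligned to the current one; all names and the search range are illustrative.

```python
import numpy as np

def estimate_global_shift(ref, cur, max_shift=4):
    """Estimate a purely translational global motion between two frames by
    exhaustive search over integer shifts minimising mean absolute error —
    a toy stand-in for the parametric model H_{j->i}."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
            err = np.abs(shifted.astype(int) - cur.astype(int)).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def align(ref, shift):
    """Apply the estimated shift, producing a global reference image."""
    dy, dx = shift
    return np.roll(np.roll(ref, dy, axis=0), dx, axis=1)

# A neighbour frame that is the current frame shifted by (2, 3):
cur = np.zeros((32, 32), dtype=np.uint8)
cur[10:20, 10:20] = 200
ref = np.roll(np.roll(cur, -2, axis=0), -3, axis=1)
shift = estimate_global_shift(ref, cur)
aligned = align(ref, shift)
```

After alignment the reference image coincides with the current frame, which is what lets later steps read co-located pixels directly.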
U22) Label the foreground and background of the station logo region in the current frame.
In this step, a common foreground/background labeling method may be applied to the logo region of the current frame. Preferably, the labeling uses the information of the L+K global reference images obtained in step U21); since these reference images reflect temporal correlation through global motion estimation, labeling with them improves the accuracy of the foreground/background labels and hence of the subsequent inpainting. Specifically, the labeling comprises steps U221)-U223):
U221) For each pixel I(x, y) in the station logo region of the current frame, construct its neighborhood block set ψ, comprising L+K+1 sub-blocks taken from the current frame and the L+K global reference images respectively, each sub-block being ψ₀ = { I(i, j) | i ∈ (x−W, x+W), j ∈ (y−W, y+W) }, where W is the spatial window size, set by the user according to the required inpainting accuracy and computational load.
In this step, as shown in Fig. 2, the construction of the neighborhood block set ψ is illustrated for L = K = 3. Six sub-blocks are taken from the 6 global reference images R1-R6 and one sub-block from the actual current frame C; the pixels of these 7 sub-blocks form the set ψ. Each sub-block is a square of side 2W centered on pixel I(x, y).
U222) Compute the variance of the pixels in the neighborhood block set ψ of each pixel. As described above, ψ is the set of pixels of the L+K+1 sub-blocks, so this step computes the variance of the pixel values over that set.
U223) Set a threshold and compare it with the variance obtained for each pixel in step U222): if the variance is less than the threshold, label the pixel as background; if it is greater, label it as foreground.
Because the set ψ reflects the temporal information after the global motion estimation of step U21), a pixel belonging to the video background has small motion displacement, so the variance of the pixel values in its neighborhood block set ψ is relatively small; conversely, a pixel belonging to the video foreground undergoes local motion with larger displacement, so the variance over its set ψ is relatively large. The threshold is set by the user and may be chosen empirically according to the required inpainting accuracy. Comparing the threshold with the variance of each pixel's set ψ distinguishes whether the pixel belongs to foreground or background, completing the foreground/background labeling of the logo-region pixels of the current video frame.
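The variance test of steps U221)-U223) can be sketched as follows, with illustrative stand-in data in place of real sub-blocks cut from aligned reference images; the threshold value is an assumed example, not one prescribed by the method.

```python
import numpy as np

def label_pixel(neighborhood_blocks, threshold):
    """Steps U221)-U223) for one logo-region pixel: pool the pixels of the
    L+K+1 sub-blocks into the set psi, compute their variance, and label
    the pixel background (small variance) or foreground (large variance)."""
    values = np.concatenate([b.ravel() for b in neighborhood_blocks])
    return "background" if values.var() < threshold else "foreground"

# Illustrative data: a static background pixel sees nearly identical
# sub-blocks across the aligned references; a moving foreground pixel
# sees strongly differing ones.
static_blocks = [np.full((5, 5), 100.0) + 0.1 * i for i in range(7)]
moving_blocks = [np.full((5, 5), 100.0 + 30.0 * i) for i in range(7)]
```

With any moderate threshold the two cases separate cleanly, which is exactly the property the labeling relies on.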
U23) Inpaint the station logo region of the current frame. Specifically: each background pixel is filled using the information of the corresponding pixels in the L+K global reference images; for foreground pixels, L+K local reference images of the current frame are estimated from the preceding L frames and following K frames using a local motion estimation method, and each foreground pixel is filled using the information of the corresponding pixels in the L+K local reference images.
In this step, background and foreground pixels of the logo region are inpainted separately. A background pixel is filled directly with the information of the aligned neighboring images, i.e. the global reference images. For a foreground pixel, local motion compensation is introduced, and the pixel is filled according to the information of the local reference images obtained after local motion estimation.
Since the preceding L frames and following K frames form the time window, there are L+K referenceable frames within it. When filling a background pixel from the L+K global reference images, or a foreground pixel from the L+K local reference images, several filling modes are possible; two are given as examples:
In the first mode, the mean of the pixel values of the corresponding pixels in the L+K reference images is used as the value of the pixel to be filled. For example, when filling pixel I(x, y) of the current frame, the corresponding pixel I(x, y) is taken from each of the L+K reference images, and the mean of these L+K pixel values is used as the value of I(x, y) in the current frame.
In the second mode, the error between each of the L+K reference images and the current frame is first computed, and the value of the corresponding pixel in the reference image with the smallest error is used as the value of the pixel to be filled. The error may be computed in various ways. In one way, N pixels around the pixel I(x, y) to be filled are taken in the current frame, denoted I_k(x, y), k = 1, 2, ..., N, and the corresponding N pixels are taken in the reference image, denoted I'_k(x, y), k = 1, 2, ..., N; the error value of the reference image is then the root of the sum of squared differences of the corresponding pixel values, sqrt( Σ_{k=1}^{N} (I_k(x, y) − I'_k(x, y))² ). Other definitions of the error value may of course be chosen; the above is merely illustrative. The error values of the L+K reference images are computed in turn and compared to find the reference image with the smallest error, whose neighboring-frame information is then used preferentially to fill the pixel of the current frame.
U24) Judge whether the station logo region of the current frame has been completely inpainted; if so, proceed to step U26); if not, proceed to step U25).
U25) Inpaint the remaining unfilled area of the station logo region using the spatial correlation of the current frame.
U26) End the inpainting of the current frame.
When step U24) judges that the content of the station logo region has been completely inpainted, step U26) ends the inpainting of the current frame, and the inpainting of the next frame begins. However, because the inpainting of step U23) relies mainly on the temporal correlation between video frames, when motion in the video is very slow some pixels of the logo region may have no corresponding pixel information in the neighboring frames. In that case step U24) finds pixels that have not been filled, and step U25) performs supplementary inpainting, mainly using the spatial correlation of the current frame.
When inpainting with spatial correlation, a still-image restoration algorithm is commonly used, such as a digital restoration algorithm based on partial differential equations or a sample-based block matching algorithm. Spatial compensation with the sample-based block matching algorithm comprises the following steps:
251) Determine the remaining unfilled area.
252) Extract the boundary of the area to be filled.
253) For each boundary pixel, compute its block-matching priority, and select the pixel with the largest priority value as the center of the current block to be filled.
254) Search the filled (known) region for the match block closest to the current block to be filled.
255) Fill each pixel of the current block with the information of the corresponding pixel of the match block: pixel I1 of the current block is filled with the information of the corresponding pixel I1' of the match block, pixel I2 with that of I2', and so on.
256) Return to step 251), ending when no unfilled area remains.
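Steps 251)-256) can be sketched in miniature as follows. Real exemplar-based inpainters use a confidence/data-term priority and copy whole patches at once, whereas this sketch ranks boundary pixels simply by how many known neighbors they have and copies a single pixel per iteration — assumptions made only to keep the example short.

```python
import numpy as np

def exemplar_fill(img, hole, patch=1):
    """Toy version of steps 251)-256): while the hole is non-empty, pick
    the hole pixel whose (2*patch+1)^2 neighbourhood is most completely
    known (the priority of step 253), search the fully known region for
    the block best matching its known surroundings (step 254, SSD over
    known pixels), and copy the matched centre value (step 255)."""
    img = img.astype(float).copy()
    hole = hole.copy()
    h, w = img.shape
    while hole.any():                                   # 251) / 256)
        ys, xs = np.where(hole)
        def priority(y, x):                             # 253)
            y0, y1 = max(y - patch, 0), min(y + patch + 1, h)
            x0, x1 = max(x - patch, 0), min(x + patch + 1, w)
            return (~hole[y0:y1, x0:x1]).sum()
        y, x = max(zip(ys, xs), key=lambda p: priority(*p))
        best, best_err = None, np.inf
        for yy in range(patch, h - patch):              # 254)
            for xx in range(patch, w - patch):
                if hole[yy - patch:yy + patch + 1,
                        xx - patch:xx + patch + 1].any():
                    continue
                err = 0.0
                for dy in range(-patch, patch + 1):
                    for dx in range(-patch, patch + 1):
                        ty, tx = y + dy, x + dx
                        if 0 <= ty < h and 0 <= tx < w and not hole[ty, tx]:
                            err += (img[ty, tx] - img[yy + dy, xx + dx]) ** 2
                if err < best_err:
                    best, best_err = (yy, xx), err
        img[y, x] = img[best]                           # 255)
        hole[y, x] = False
    return img

# Illustrative data: a constant image with a 2x2 hole.
img = np.full((8, 8), 42.0)
hole = np.zeros((8, 8), dtype=bool)
hole[3:5, 3:5] = True
img[hole] = 0.0
filled = exemplar_fill(img, hole)
```

On this constant image every known block matches perfectly, so the hole is restored to the surrounding value.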
Through the above steps U21)-U26), the inpainting of the station logo region of the current frame is completed. Repeating steps U21)-U26) completes the inpainting of every frame of the video, finishing the task of removing the logo and reconstructing the region by inpainting.
In this embodiment, the background and foreground of the station logo region are inpainted separately, and the temporal information carried in the neighboring frames is fully used, so that the abundant temporal redundancy serves the inpainting process; the inpainting accuracy of the logo region is thereby effectively improved, allowing a high-accuracy reconstruction. When the temporal information cannot complete the inpainting, spatial correlation is used supplementarily, guaranteeing that the whole logo region is filled. In addition, since each frame is inpainted mainly by motion analysis of the video sequence, using known image information in neighboring frames, few iterations are needed and little computational complexity is added.
Preferably, the inpainting method of this embodiment further comprises step U3) (not shown in Fig. 1): integrating all the inpainted frames in temporal order, and outputting the video obtained after the station logo is removed and its region inpainted. Through step U3), the frames are assembled into the final video with the logo removed and the region reconstructed.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the invention is not to be considered limited to these descriptions. For those of ordinary skill in the art, substitutions or obvious modifications made without departing from the concept of the invention, with identical performance or use, shall all be considered to fall within the protection scope of the present invention.
Claims (9)
1. A content inpainting method for a station logo region in a video, characterized by comprising the following steps:
1) detecting the station logo region in the video, and marking the region;
2) performing the following inpainting operation on each frame image of the video:
21) estimating L+K global reference images of the current frame from the preceding L frames and following K frames using a global motion estimation method, wherein L and K are integers greater than or equal to 0, not both 0, whose values are set by the user according to the required inpainting accuracy;
22) labeling the foreground and background of the station logo region in the current frame;
23) inpainting the station logo region of the current frame: for each background pixel, filling the pixel using the information of the corresponding pixels in the L+K global reference images; for each foreground pixel, estimating L+K local reference images of the current frame from the preceding L frames and following K frames using a local motion estimation method, and filling the pixel using the information of the corresponding pixels in the L+K local reference images;
24) judging whether the station logo region of the current frame has been completely inpainted; if so, proceeding to step 26); if not, proceeding to step 25);
25) inpainting the remaining unfilled area of the station logo region using the spatial correlation of the current frame;
26) ending the inpainting of the current frame.
2. The content inpainting method for a station logo region in a video according to claim 1, characterized in that in step 22), the foreground and background of the station logo region in the current frame are labeled using the information of the L+K global reference images.
3. The content inpainting method for a station logo region in a video according to claim 2, characterized in that step 22) specifically comprises: 221) for each pixel I(x, y) in the station logo region of the current frame, constructing its neighborhood block set ψ, comprising L+K+1 sub-blocks taken from the current frame and the L+K global reference images respectively, each sub-block being ψ₀ = { I(i, j) | i ∈ (x−W, x+W), j ∈ (y−W, y+W) }, wherein W is the spatial window size, set by the user according to the required inpainting accuracy and computational load; 222) computing the variance of the pixels in the neighborhood block set ψ of each pixel; 223) setting a threshold and comparing it with the variance obtained for each pixel in step 222): if the variance is less than the threshold, labeling the pixel as background; if the variance is greater than the threshold, labeling the pixel as foreground.
4. The content inpainting method for a station logo region in a video according to claim 1, characterized in that in step 23), when filling a pixel using the corresponding pixel information in the L+K global reference images or the L+K local reference images, the mean of the pixel values of the corresponding pixels in the L+K reference images is used as the value of the pixel to be filled.
5. The content inpainting method for a station logo region in a video according to claim 1, characterized in that in step 23), when filling a pixel using the corresponding pixel information in the L+K global reference images or the L+K local reference images, the error between each of the L+K reference images and the current frame is first computed, and the value of the corresponding pixel in the reference image with the smallest error is used as the value of the pixel to be filled.
6. The content inpainting method for a station logo region in a video according to claim 1, characterized in that in step 25), a still-image restoration algorithm is used for the inpainting.
7. The content inpainting method for a station logo region in a video according to claim 6, characterized in that the still-image restoration algorithm comprises a digital restoration algorithm based on partial differential equations and a sample-based block matching algorithm.
8. The content inpainting method for a station logo region in a video according to claim 7, characterized in that when the sample-based block matching algorithm is used in step 25), the inpainting specifically comprises the following steps: 251) determining the remaining unfilled area; 252) extracting the boundary of the area to be filled; 253) for each boundary pixel, computing its block-matching priority, and selecting the pixel with the largest priority value as the center of the current block to be filled; 254) searching the filled region for the match block closest to the current block to be filled; 255) filling each pixel of the current block with the information of the corresponding pixel of the match block; 256) returning to step 251), ending when no unfilled area remains.
9. The content inpainting method for a station logo region in a video according to claim 1, characterized by further comprising step 3): integrating all the inpainted frames in temporal order, and outputting the video obtained after the station logo region is removed and inpainted.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310665267.8A CN103618905A (en) | 2013-12-09 | 2013-12-09 | Content drawing method for station caption area in video |
PCT/CN2013/090978 WO2015085637A1 (en) | 2013-12-09 | 2013-12-30 | Method for supplementarily drawing content at station logo region in video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310665267.8A CN103618905A (en) | 2013-12-09 | 2013-12-09 | Content drawing method for station caption area in video |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103618905A true CN103618905A (en) | 2014-03-05 |
Family
ID=50169609
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310665267.8A Pending CN103618905A (en) | 2013-12-09 | 2013-12-09 | Content drawing method for station caption area in video |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN103618905A (en) |
WO (1) | WO2015085637A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101635047A (en) * | 2009-03-25 | 2010-01-27 | 湖南大学 | Texture synthesis and image repair method based on wavelet transformation |
CN101950366A (en) * | 2010-09-10 | 2011-01-19 | 北京大学 | Method for detecting and identifying station logo |
CN102436575A (en) * | 2011-09-22 | 2012-05-02 | Tcl集团股份有限公司 | Method for automatically detecting and classifying station captions |
CN102496145A (en) * | 2011-11-16 | 2012-06-13 | 湖南大学 | Video repairing method based on moving periodicity analysis |
CN103051893A (en) * | 2012-10-18 | 2013-04-17 | 北京航空航天大学 | Dynamic background video object extraction based on pentagonal search and five-frame background alignment |
CN103336954A (en) * | 2013-07-08 | 2013-10-02 | 北京捷成世纪科技股份有限公司 | Identification method and device of station caption in video |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010075754A (en) * | 2000-01-17 | 2001-08-11 | 구자홍 | method for capture data in television and composition method of the same |
CN101739561B (en) * | 2008-11-11 | 2012-06-13 | 中国科学院计算技术研究所 | TV station logo training method and identification method |
CN101917644A (en) * | 2010-08-17 | 2010-12-15 | 李典 | Television, system and method for accounting audience rating of television programs thereof |
CN102982350B (en) * | 2012-11-13 | 2015-10-28 | 上海交通大学 | A kind of station caption detection method based on color and histogram of gradients |
CN103258187A (en) * | 2013-04-16 | 2013-08-21 | 华中科技大学 | Television station caption identification method based on HOG characteristics |
- 2013
  - 2013-12-09 CN CN201310665267.8A patent/CN103618905A/en active Pending
  - 2013-12-30 WO PCT/CN2013/090978 patent/WO2015085637A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
NIE, Dongdong: "Research on the Theory and Algorithms of Digital Image and Video Inpainting", China Doctoral Dissertations Full-text Database, Information Science and Technology Series, 15 June 2008 (2008-06-15) *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105898322A (en) * | 2015-07-24 | 2016-08-24 | 乐视云计算有限公司 | Video watermark removing method and device |
WO2017016294A1 (en) * | 2015-07-24 | 2017-02-02 | 乐视控股(北京)有限公司 | Method and apparatus for removing watermark from video |
CN105025361A (en) * | 2015-07-29 | 2015-11-04 | 西安交通大学 | Real-time station caption eliminating method |
CN105025361B (en) * | 2015-07-29 | 2018-07-17 | 西安交通大学 | A kind of real-time station symbol removing method |
CN105376462A (en) * | 2015-11-10 | 2016-03-02 | 清华大学深圳研究生院 | Content supplement drafting method for pollution area in video |
CN105376462B (en) * | 2015-11-10 | 2018-05-25 | 清华大学深圳研究生院 | A kind of content benefit of Polluted area in video paints method |
CN105678685A (en) * | 2015-12-29 | 2016-06-15 | 小米科技有限责任公司 | Picture processing method and apparatus |
CN105678685B (en) * | 2015-12-29 | 2019-02-22 | 小米科技有限责任公司 | Image processing method and device |
CN108470326A (en) * | 2018-03-27 | 2018-08-31 | 北京小米移动软件有限公司 | Image completion method and device |
CN108470326B (en) * | 2018-03-27 | 2022-01-11 | 北京小米移动软件有限公司 | Image completion method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2015085637A1 (en) | 2015-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103473780B (en) | A method for portrait background matting | |
CN103618905A (en) | Content drawing method for station caption area in video | |
CN102332157B (en) | Method for eliminating shadow | |
CN105405142A (en) | Edge defect detection method and system for glass panel | |
CN103559237A (en) | Semi-automatic image annotation sample generating method based on target tracking | |
CN103439348B (en) | Remote controller key defect detection method based on difference image method | |
CN104156704A (en) | Novel license plate identification method and system | |
CN103413288A (en) | LCD general defect detecting method | |
CN102831582A (en) | Method for enhancing depth image of Microsoft somatosensory device | |
CN104766344B (en) | Vehicle checking method based on movement edge extractor | |
CN102063727B (en) | Covariance matching-based active contour tracking method | |
CN107480603B (en) | Synchronous mapping and object segmentation method based on SLAM and depth camera | |
CN101246593B (en) | Color image edge detection method and apparatus | |
CN107067595B (en) | State identification method and device of indicator light and electronic equipment | |
CN104408741A (en) | Video global motion estimation method with sequential consistency constraint | |
CN104717400A (en) | Real-time defogging method of monitoring video | |
CN103400386A (en) | Interactive image processing method used for video | |
CN101765019A (en) | Stereo matching algorithm for motion blur and illumination change image | |
CN104966274A (en) | Local fuzzy recovery method employing image detection and area extraction | |
CN108021857B (en) | Building detection method based on unmanned aerial vehicle aerial image sequence depth recovery | |
CN104253994B (en) | A kind of night monitoring video real time enhancing method merged based on sparse coding | |
CN103065312B (en) | Foreground extraction method in gesture tracking process | |
CN103093202B (en) | Vehicle-logo location method and vehicle-logo location device | |
CN102521582A (en) | Human upper body detection and splitting method applied to low-contrast video | |
CN102313740B (en) | Solar panel crack detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20140305 |