CN102263925A - Occlusion processing method - Google Patents

Occlusion processing method

Info

Publication number
CN102263925A
CN102263925A (application CN2010101915601A)
Authority
CN
China
Prior art keywords
occlusion region
occluded
interpolation
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010101915601A
Other languages
Chinese (zh)
Inventor
郑朝钟
黄泳霖
赖彦杰
孙维廷
陈滢如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Himax Technologies Ltd
Himax Media Solutions Inc
Original Assignee
Himax Technologies Ltd
Himax Media Solutions Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Himax Technologies Ltd, Himax Media Solutions Inc filed Critical Himax Technologies Ltd
Priority to CN2010101915601A priority Critical patent/CN102263925A/en
Publication of CN102263925A publication Critical patent/CN102263925A/en
Pending legal-status Critical Current


Abstract

The invention provides an occlusion processing method comprising the following steps: providing a reference frame and a current frame, and determining at least one foreground object; determining at least one occlusion region or at least one non-occlusion region relative to the foreground object; and interpolating the occlusion region according to the current frame only, or interpolating the non-occlusion region according to the reference frame only. Because the occlusion region is determined and corrected, motion compensation can be carried out correctly.

Description

Occlusion processing method
Technical field
The present invention relates to motion compensation, and more particularly to an occlusion processing method applicable to frame rate up-conversion (FRUC).
Background technology
Frame rate up-conversion (FRUC) is commonly used in digital image displays, such as digital televisions, to generate one or more interpolated frames between two consecutive frames (also called picture frames; both are referred to as frames herein), thereby raising the frame rate, for example from 60 Hz to 120 Hz. The interpolated frames are generally generated by motion-compensated interpolation.
However, some regions of a video frame may not exist in the previous frame or the current frame, and interpolating from non-existent pixels or blocks produces errors. Fig. 1 is a schematic diagram showing an example of occlusion, in which an object 10 is located on the left side of the previous frame and has moved to the right side in the current frame. Because the left region of the previous frame was covered (occluded) by the object 10, the uncovered region 12 on the left side of the current frame has no corresponding pixels or blocks in the previous frame. The occlusion illustrated in Fig. 1 causes erroneous interpolation during motion compensation.
As described above, motion-compensated interpolation of video frames often causes distortion because of occlusion, which is inconvenient and defective in both method and use, and conventional methods provide no adequate solution to this problem. There is therefore a need for a new occlusion processing method that handles occlusion effectively and correctly; this has become an important subject of current research and development and an urgent target of improvement in the industry.
Summary of the invention
The objective of the invention is to overcome the defects of the frame interpolation used in existing motion compensation by providing a new occlusion processing method. The technical problem to be solved is to determine and correct occlusion regions so that motion compensation can be carried out correctly, which makes the method highly practical.
This objective is achieved by the following technical solution. The occlusion processing method proposed by the invention comprises the following steps: first, a reference frame and a current frame are provided, and at least one foreground object is determined; then, at least one occlusion region or at least one non-occlusion region relative to the foreground object is determined; finally, the occlusion region is interpolated according to the current frame only, or the non-occlusion region is interpolated according to the reference frame only.
The objective of the invention can be further achieved by the following technical measures.
In the foregoing occlusion processing method, the reference frame is a previous frame.
In the foregoing occlusion processing method, the foreground object is determined according to a corresponding motion vector.
In the foregoing occlusion processing method, the motion vector of the foreground object is larger than the motion vector of a background.
In the foregoing occlusion processing method, the occlusion region is covered by the foreground object in the reference frame but not in the current frame, and the non-occlusion region is covered by the foreground object in the current frame but not in the reference frame.
In the foregoing occlusion processing method, the occlusion region is located on the side opposite to the moving direction of the foreground object, and the non-occlusion region is located on the same side as the moving direction of the foreground object.
In the foregoing occlusion processing method, the occlusion region is interpolated by copying the corresponding background region of the current frame.
In the foregoing occlusion processing method, the non-occlusion region is interpolated by copying the corresponding background region of the reference frame.
In the foregoing occlusion processing method, the occlusion region or the non-occlusion region is interpolated using a motion vector map.
In the foregoing occlusion processing method, before the occlusion region or the non-occlusion region is interpolated, the method may further comprise correcting a motion vector of the occlusion region or the non-occlusion region.
In the foregoing occlusion processing method, correcting the motion vector of the occlusion region or the non-occlusion region comprises: reversely mapping at least one macroblock of the current frame to an interpolated frame; selecting at least two macroblocks in the interpolated frame near the interpolated macroblock as candidates; and, among the candidates, selecting the one whose motion vector is consistent with the motion vectors of the macroblocks adjacent to the interpolated macroblock as the motion vector of the interpolated macroblock.
In the foregoing occlusion processing method, the interpolation is pixel-based.
In the foregoing occlusion processing method, the interpolation is block-based.
Compared with the prior art, the invention has obvious advantages and beneficial effects: the occlusion region can be determined and corrected, so that motion compensation can be carried out correctly.
In summary, the invention relates to an occlusion processing method. First, a reference frame and a current frame are provided, and at least one foreground object is determined. Then, at least one occlusion region or at least one non-occlusion region relative to the foreground object is determined. Finally, the occlusion region is interpolated according to the current frame only, or the non-occlusion region is interpolated according to the reference frame only. The invention thus provides a clear technical improvement with evident beneficial effects.
The above description is only an overview of the technical solution of the invention. To make the technical means of the invention clearer and implementable according to the contents of the specification, and to make the above and other objects, features and advantages of the invention more apparent, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 is a schematic diagram showing an example of occlusion.
Fig. 2 is a flow chart of the occlusion processing method of an embodiment of the invention.
Fig. 3A is a schematic diagram showing an example of an occlusion region and a non-occlusion region.
Fig. 3B is a schematic diagram showing another example of an occlusion region and a non-occlusion region.
10: object; 12: occluded region
21-24: steps; 30: foreground object
32: region; 34: region
Embodiment
To further illustrate the technical means and effects adopted by the present invention to achieve the intended objects, the occlusion processing method proposed according to the invention, together with its embodiments, steps, features and effects, is described in detail below in conjunction with the accompanying drawings and preferred embodiments.
The foregoing and other technical contents, features and effects of the invention will be clearly presented in the following detailed description of preferred embodiments with reference to the drawings. The drawings are provided for reference and illustration only and are not intended to limit the invention.
Fig. 2 is a flow chart of the occlusion processing method of an embodiment of the invention. The method of the preferred embodiment is applicable to frame rate up-conversion (FRUC).
In step 21, a previous frame (also generally called a reference frame) and a current frame are first provided, and a new frame is generated between them by interpolation. In general, the frame at time N and the frame at time N+2 are provided, and an interpolated frame at time N+1 is generated by motion estimation and motion compensation. The motion estimation and motion compensation may be pixel-based or block-based. For block-based motion estimation and motion compensation, each frame is split into non-overlapping rectangular regions called macroblocks (MB). For example, each macroblock may be 4x4 or 16x16 pixels. For block-based motion estimation, reference may be made to the applicant's U.S. application "Method of block-based motion estimation", application No. 12/756,459.
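The macroblock partition of step 21 can be sketched as follows. The edge-replication padding and the array layout are illustrative choices, not specified by the patent:

```python
import numpy as np

def split_into_macroblocks(frame, mb_size=16):
    # Pad by edge replication so both dimensions divide evenly
    # (an assumed convention; the patent does not specify padding).
    h, w = frame.shape
    frame = np.pad(frame, ((0, (-h) % mb_size), (0, (-w) % mb_size)),
                   mode="edge")
    H, W = frame.shape
    # Reshape into a (block_rows, block_cols, mb_size, mb_size) grid
    # of non-overlapping macroblocks.
    return (frame.reshape(H // mb_size, mb_size, W // mb_size, mb_size)
                 .transpose(0, 2, 1, 3))

frame = np.arange(64 * 64).reshape(64, 64)
blocks = split_into_macroblocks(frame, mb_size=16)  # 4x4 grid of 16x16 blocks
```

Pixel-based processing simply skips this partition and works per pixel.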
Then, in step 22, at least one foreground object is determined or detected. Since a foreground object usually moves faster than the background, that is, the motion vector (MV) of the foreground is usually larger than the motion vector of the background, in the present embodiment an object having a larger motion vector is determined to be the foreground object.
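A minimal sketch of the foreground decision of step 22, assuming a per-macroblock motion-vector field; using the median magnitude as the background estimate and a ratio threshold are illustrative assumptions, not details from the patent:

```python
import numpy as np

def detect_foreground(mv_field, ratio=2.0):
    # mv_field: (rows, cols, 2) array of per-macroblock motion vectors.
    # Estimate the background motion magnitude as the median over all
    # blocks, then mark blocks moving markedly faster as foreground.
    mag = np.linalg.norm(mv_field, axis=-1)
    background_mag = np.median(mag)
    return mag > ratio * max(background_mag, 1e-6)

mv = np.zeros((4, 4, 2))
mv[1, 2] = (8.0, 0.0)          # one fast-moving block over a static background
fg = detect_foreground(mv)     # boolean foreground mask
```
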
Fig. 3A is a schematic diagram showing an example of an occlusion region and a non-occlusion region, illustrating the previous frame, the current frame and the interpolated frame. In the figure, a foreground object 30 moves to the right relative to the background. As shown, in the interpolated frame a region 32 is located on the left side of the foreground object 30, and a region 34 is located on the right side. The region 32 is covered by the foreground object 30 in the previous frame, and is therefore called an occlusion region in this specification. On the other hand, the region 34 is not covered by the foreground object 30 in the previous frame (though it is covered in the current frame), and is therefore called a non-occlusion region in this specification. If the foreground object 30 moves to the left instead, as shown in Fig. 3B, which is a schematic diagram showing another example of an occlusion region and a non-occlusion region, then the left region 32 is the non-occlusion region and the right region 34 is the occlusion region.
In step 23, at least one occlusion region and at least one non-occlusion region adjacent to the foreground object are determined. In the present embodiment, the region located on the side opposite to the moving direction of the foreground object is determined to be the occlusion region; conversely, the region located on the same side as the moving direction of the foreground object is determined to be the non-occlusion region.
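The region decision of step 23 can be illustrated for purely horizontal motion; the helper name and the spans (one motion-vector wide, matching the object's displacement) are simplifications for illustration:

```python
def classify_regions(object_x, object_width, mv_x):
    # For a foreground object spanning [object_x, object_x + object_width)
    # that moves horizontally by mv_x pixels per frame, the occlusion
    # region trails on the side opposite the motion (newly uncovered
    # background) and the non-occlusion region leads on the same side
    # as the motion (background about to be covered).
    if mv_x > 0:   # moving right, as in Fig. 3A
        occlusion = (object_x - mv_x, object_x)
        non_occlusion = (object_x + object_width,
                         object_x + object_width + mv_x)
    else:          # moving left, as in Fig. 3B: mirror image
        occlusion = (object_x + object_width,
                     object_x + object_width - mv_x)
        non_occlusion = (object_x + mv_x, object_x)
    return occlusion, non_occlusion
```
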
Then, in step 24, when interpolation is performed, the occlusion region of the interpolated frame (for example, the region 32 of Fig. 3A) is interpolated according to the current frame, i.e. the frame at time N+2, only. The non-occlusion region of the interpolated frame (for example, the region 34 of Fig. 3A) is interpolated according to the previous frame, i.e. the frame at time N, only. A forward or backward motion vector map (MV map) may be used when interpolating the occlusion/non-occlusion regions. If the background is substantially static, the occlusion/non-occlusion regions may be interpolated by copying the corresponding background regions.
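The single-frame interpolation of step 24 can be sketched as below, assuming a substantially static background so that copying the co-located pixel stands in for motion-compensated fetching via an MV map; averaging the remaining regions from both frames is an illustrative choice, not a requirement of the patent:

```python
import numpy as np

def interpolate_with_occlusion(prev_frame, cur_frame, occl_mask, non_occl_mask):
    # Normal regions: blend both frames (illustrative placeholder for
    # full motion-compensated interpolation).
    interp = ((prev_frame.astype(np.float32) + cur_frame) / 2
              ).astype(prev_frame.dtype)
    # Occlusion region: pixels absent from the reference frame, so use
    # only the current frame.
    interp[occl_mask] = cur_frame[occl_mask]
    # Non-occlusion region: pixels absent from the current frame, so use
    # only the reference (previous) frame.
    interp[non_occl_mask] = prev_frame[non_occl_mask]
    return interp

prev = np.full((4, 4), 10, dtype=np.uint8)
cur = np.full((4, 4), 30, dtype=np.uint8)
occl = np.zeros((4, 4), dtype=bool); occl[0, 0] = True
non_occl = np.zeros((4, 4), dtype=bool); non_occl[1, 1] = True
out = interpolate_with_occlusion(prev, cur, occl, non_occl)
```
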
It should be noted that the motion vectors of the occlusion/non-occlusion regions may contain errors caused by the movement of the foreground object. To solve this problem, the erroneous motion vectors of the occlusion/non-occlusion regions may be corrected before interpolation. According to the block-based motion estimation method disclosed in the aforesaid U.S. application "Method of block-based motion estimation", when a macroblock of the current frame is reversely mapped to the interpolated frame, at least two macroblocks in the interpolated frame near the interpolated macroblock are chosen as candidates. Among them, the candidate whose motion vector is consistent with the motion vectors of the macroblocks adjacent to the interpolated macroblock (for example, to its left and above it) is chosen as the motion vector of the interpolated macroblock.
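The candidate selection described above can be sketched with a simple consistency cost; the summed absolute (L1) difference to the neighbour vectors is an illustrative metric, as the patent only requires mutual consistency:

```python
import numpy as np

def correct_mv(candidates, neighbor_mvs):
    # candidates: motion vectors reverse-mapped from the current frame.
    # neighbor_mvs: motion vectors of the interpolated block's already
    # decided neighbours (e.g. the left and upper macroblocks).
    # Pick the candidate whose vector agrees best with the neighbours.
    candidates = np.asarray(candidates, dtype=float)
    neighbors = np.asarray(neighbor_mvs, dtype=float)
    cost = np.abs(candidates[:, None, :] - neighbors[None, :, :]).sum(axis=(1, 2))
    return tuple(candidates[int(np.argmin(cost))])

best = correct_mv(candidates=[(8, 0), (0, 0)],
                  neighbor_mvs=[(7, 1), (9, -1)])
```

Here the outlier vector (0, 0) is rejected because it disagrees with both neighbours, and (8, 0) is adopted as the corrected motion vector of the interpolated macroblock.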
The above are only preferred embodiments of the invention and do not limit the invention in any form. Although the invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the invention, use the disclosed technical contents to make slight changes or modifications into equivalent embodiments; any simple modification, equivalent change or modification made to the above embodiments according to the technical spirit of the invention, without departing from the content of the technical solution, still falls within the scope of the technical solution of the invention.

Claims (13)

1. An occlusion processing method, characterized by comprising the following steps:
providing a reference frame and a current frame;
determining at least one foreground object;
determining at least one occlusion region or at least one non-occlusion region relative to the foreground object; and
interpolating the occlusion region according to the current frame only, or interpolating the non-occlusion region according to the reference frame only.
2. The occlusion processing method according to claim 1, characterized in that the reference frame is a previous frame.
3. The occlusion processing method according to claim 1, characterized in that the foreground object is determined according to a corresponding motion vector.
4. The occlusion processing method according to claim 3, characterized in that the motion vector of the foreground object is larger than the motion vector of a background.
5. The occlusion processing method according to claim 1, characterized in that the occlusion region is covered by the foreground object in the reference frame but not in the current frame, and the non-occlusion region is covered by the foreground object in the current frame but not in the reference frame.
6. The occlusion processing method according to claim 1, characterized in that the occlusion region is located on the side opposite to the moving direction of the foreground object, and the non-occlusion region is located on the same side as the moving direction of the foreground object.
7. The occlusion processing method according to claim 1, characterized in that the occlusion region is interpolated by copying the corresponding background region of the current frame.
8. The occlusion processing method according to claim 1, characterized in that the non-occlusion region is interpolated by copying the corresponding background region of the reference frame.
9. The occlusion processing method according to claim 1, characterized in that the occlusion region or the non-occlusion region is interpolated using a motion vector map.
10. The occlusion processing method according to claim 9, characterized by further comprising, before the occlusion region or the non-occlusion region is interpolated:
correcting a motion vector of the occlusion region or the non-occlusion region.
11. The occlusion processing method according to claim 10, characterized in that correcting the motion vector of the occlusion region or the non-occlusion region comprises:
reversely mapping at least one macroblock of the current frame to an interpolated frame;
selecting at least two macroblocks in the interpolated frame near the interpolated macroblock as candidates; and
among the candidates, selecting the one whose motion vector is consistent with the motion vectors of the macroblocks adjacent to the interpolated macroblock as the motion vector of the interpolated macroblock.
12. The occlusion processing method according to claim 1, characterized in that the interpolation is pixel-based.
13. The occlusion processing method according to claim 1, characterized in that the interpolation is block-based.
CN2010101915601A 2010-05-31 2010-05-31 Occlusion processing method Pending CN102263925A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101915601A CN102263925A (en) 2010-05-31 2010-05-31 Occlusion processing method


Publications (1)

Publication Number Publication Date
CN102263925A true CN102263925A (en) 2011-11-30

Family

ID=45010358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101915601A Pending CN102263925A (en) 2010-05-31 2010-05-31 Occlusion processing method

Country Status (1)

Country Link
CN (1) CN102263925A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6008865A (en) * 1997-02-14 1999-12-28 Eastman Kodak Company Segmentation-based method for motion-compensated frame interpolation
US20030035592A1 (en) * 2000-09-08 2003-02-20 Cornog Katherine H. Interpolation of a sequence of images using motion analysis
US20040179594A1 (en) * 2003-02-20 2004-09-16 The Regents Of The University Of California Phase plane correlation motion vector determination method
US7197074B2 (en) * 2003-02-20 2007-03-27 The Regents Of The University Of California Phase plane correlation motion vector determination method
US20050129124A1 (en) * 2003-12-10 2005-06-16 Tae-Hyeun Ha Adaptive motion compensated interpolating method and apparatus
CN101207707A (en) * 2007-12-18 2008-06-25 上海广电集成电路有限公司 System and method for advancing frame frequency based on motion compensation

Similar Documents

Publication Publication Date Title
US10771816B2 (en) Method for deriving a motion vector
US20130202194A1 (en) Method for generating high resolution depth images from low resolution depth images using edge information
Jin et al. Virtual-view-assisted video super-resolution and enhancement
CN101207707A (en) System and method for advancing frame frequency based on motion compensation
JP2015525999A (en) Method and apparatus for unified disparity vector derivation in 3D video coding
CN102665061A (en) Motion vector processing-based frame rate up-conversion method and device
US20170094306A1 (en) Method of acquiring neighboring disparity vectors for multi-texture and multi-depth video
CN108924568B (en) Depth video error concealment method based on 3D-HEVC framework
CN101867759A (en) Self-adaptive motion compensation frame frequency promoting method based on scene detection
CN102572446A (en) Method for concealing entire frame loss error of multi-view video
CN103414899A (en) Motion estimation method of video coding
US9609361B2 (en) Method for fast 3D video coding for HEVC
Fujiwara et al. Motion-compensated frame rate up-conversion based on block matching algorithm with multi-size blocks
CN108668135B (en) Stereoscopic video B frame error concealment method based on human eye perception
CN102215394B (en) Method of block-based motion estimation and method of increasing frame speed
US20110249870A1 (en) Method of occlusion handling
CN102263925A (en) Occlusion processing method
Racapé et al. Spatiotemporal texture synthesis and region-based motion compensation for video compression
CN102300086A (en) Method for expanding reference frame boundary and limiting position of motion compensation reference sample
Lai et al. Fast motion estimation based on diamond refinement search for high efficiency video coding
CN107483936B (en) A kind of light field video inter-prediction method based on macro pixel
CN111713105B (en) Video image processing method, device and storage medium
Zhu et al. View synthesis oriented depth map coding algorithm
CN108366265B (en) Distributed video side information generation method based on space-time correlation
JP2006236063A (en) Device for extracting moving body

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20111130