CN101853402B - Method for identifying barrier in perspective imaging process - Google Patents

Method for identifying barrier in perspective imaging process

Info

Publication number
CN101853402B
CN101853402B CN2010101656151A CN201010165615A
Authority
CN
China
Prior art keywords
occluding object
occluded
image
sequence
image sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010101656151A
Other languages
Chinese (zh)
Other versions
CN101853402A (en)
Inventor
袁艳
周志良
相里斌
王潜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN2010101656151A priority Critical patent/CN101853402B/en
Publication of CN101853402A publication Critical patent/CN101853402A/en
Application granted granted Critical
Publication of CN101853402B publication Critical patent/CN101853402B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method for identifying a barrier (occluding object) in a perspective imaging process, which belongs to the technical field of image processing and aims to solve the problem that the barrier and the occluded object are superimposed in perspective imaging. The method comprises the following steps: focusing on the plane of the barrier, translating the image sequence to be processed and computing a variance map to identify and eliminate the barrier; then focusing on the plane of the occluded object and performing translation, summation and averaging on the image sequence obtained after the barrier is eliminated to obtain an image of the occluded object. The method effectively identifies the barrier in the perspective imaging process, improves the contrast and sharpness of the image of the occluded object, and can identify moving barriers; it is generally applicable and requires neither repeated shooting of the scene nor manual intervention.

Description

Method for identifying an occluding object (barrier) in a perspective imaging process
Technical field
The present invention relates to a method for identifying an occluding object in a perspective imaging process, and belongs to the technical field of image processing.
Background technology
In a traditional imaging system, the depth of field of the camera is governed by the size of the aperture, i.e., the physical pupil used for imaging. A small aperture is generally used to photograph scenes with a large depth, whereas for close-up subjects a very small depth of field is desired so that everything outside the subject is blurred, and a large aperture is chosen. If the target being photographed is partially occluded by other objects (trees, a crowd, etc.), light from some viewing directions cannot reach the camera aperture. If, however, the aperture is large enough, the camera can still receive light that the occluded target emits towards other viewing directions and form an image of it. The depth of field then becomes extremely short, so the occluding object in front of the target, lying outside the depth of field, is severely defocused and blurred; in this way the occluder can be partially "seen through" during shooting.
By scanning a single camera over multiple positions or by combining an array of cameras, a virtual "large aperture" can be synthesized. Each camera within this virtual aperture receives light emitted by the target from a different viewing direction and records it on its own detector. After processing, the images of an object point lying on the focal plane, taken by the different cameras, can be superimposed into a single image point, whereas object points off the focal plane are not superimposed and spread into circles of confusion at different positions; the imagery at the focal plane is therefore emphasized. In such a synthetic large-aperture imaging system, however, the defocused, blurred image of the foreground occluder is aliased with the sharp, in-focus image of the background target, which severely reduces the contrast and sharpness of the target image. To reduce the influence of this defocus aliasing, the image of the foreground occluder must be identified and eliminated before the target image is synthesized.
In the prior art, the methods for eliminating the image of a foreground occluder mainly include the following. (1) A camera array is used to record a dynamic video of the target; several consecutive frames within a period of time are extracted from the video, and the variation of corresponding pixels across this image sequence is compared: pixels whose colour does not change, or changes very little, over time represent the static occluder. The variance of each pixel sequence is computed, and the regions whose variance falls below a selected threshold are identified as the occluder. (2) Before photographing the target, a completely blue back-projection screen (or a screen of any other colour differing from the occluder) is placed behind the occluder and photographed; the portions of the resulting image whose colour differs from the screen colour correspond to the occluder. The screen is then removed, the position and attitude of the camera are kept unchanged, and the target is photographed, so the pixels corresponding to the occluder in the new image can be identified. (3) It is assumed that the occluder itself has a nearly uniform colour that differs clearly from the colour of the occluded target, and that the occluder covers most of the camera's field of view over a large area; under these assumptions, averaging the colours of any captured image yields a mean close to the occluder colour, and the pixels that deviate strongly from this mean represent the occluded target.
Therefore, the existing techniques for eliminating the image of a foreground occluder have the following shortcomings: they either require repeated multi-frame shooting and cannot identify a moving occluder, or they require manual intervention in the scene, which complicates the perspective imaging process.
Summary of the invention
The invention provides a method for identifying an occluding object in a perspective imaging process, so as to eliminate the aliasing caused by the occluder image and to improve the contrast of the target image.
A method for identifying an occluding object in a perspective imaging process comprises:
translating the image sequence to be processed by the translation amounts corresponding to different focus depths and performing summation and averaging to obtain a preliminary composite image sequence; and
translating the image sequence to be processed by the translation amount corresponding to the image in said preliminary composite image sequence that is focused on the occluding object, and performing calculations to obtain an image sequence from which the occluding object has been eliminated.
Through repeated refocusing and variance comparison, the present invention can effectively identify the occluding object in the perspective imaging process and improve the contrast and sharpness of the image of the occluded target. It makes no assumptions about the conditions of the photographed scene, needs no repeated shooting, can identify moving occluders, and requires no manual intervention in the scene, so it has general applicability.
Description of drawings
Fig. 1 is a schematic diagram of multi-angle imaging provided by an embodiment of the present invention;
Fig. 2 is a flow chart of the method for identifying an occluding object in a perspective imaging process provided by an embodiment of the present invention;
Fig. 3 shows three images to be processed, taken from different viewing angles, provided by an embodiment of the present invention;
Fig. 4 shows the images of the preliminary composite image sequence focused on the occluding grid and on the occluded text, respectively, provided by an embodiment of the present invention;
Fig. 5 shows the images to be processed after translation and registration to the plane of the grid, provided by an embodiment of the present invention;
Fig. 6 is the grey-level variance map of the occluding object provided by an embodiment of the present invention;
Fig. 7 is the occlusion binary map of the occluding object provided by an embodiment of the present invention;
Fig. 8 shows the image sequence with the occluding object eliminated, provided by an embodiment of the present invention;
Fig. 9 shows the image sequence with the occluding object eliminated and registered to the target plane, provided by an embodiment of the present invention;
Fig. 10 is the synthesized target image with the occluding object eliminated, provided by an embodiment of the present invention.
Embodiment
The embodiment of the present invention provides a method for identifying an occluding object in a perspective imaging process. The adopted technical scheme is based on the fact that the offset between the image points of the same object point in different cameras depends only on the depth of that object point. As shown in Fig. 1, C and C′ are the pupil centres of two cameras at different positions, separated by a distance d; the distance from the pupil to the image plane is s; and the distance from the object point P to the camera pupil plane is z. The object point is imaged at p and p′ by the two cameras respectively. Let y and y′ be the local coordinates of the two image points in their respective camera coordinate systems. From the geometric relationship it readily follows that:
d / z = ( |y − y′| + d ) / ( z + s )
The coordinate offset of the image point between the two images is therefore:
Δy = |y − y′| = d·s / z
It can be seen from the above formula that the offset between the image points of the same object point in different cameras varies with the depth z of the object point. Therefore, for a selected focus depth z, the images taken by the different cameras can be shifted by the corresponding offset Δy; the images of all object points at depth z are then registered to the same position, while the images of objects at other depths remain mutually staggered because of defocus.
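For illustration only (this sketch is not part of the patent text), the relation above can be evaluated numerically; the baseline d, image distance s and the two depths below are assumed example values.

```python
def parallax_shift(d, s, z):
    """Offset Δy = d*s/z between the image points of the same object point in
    two cameras a baseline d apart, with image distance s and object depth z
    (all in consistent units; divide by the pixel pitch to get pixels)."""
    return d * s / z

# Hypothetical example values: an occluder at z = 2.0 and a target at z = 10.0
delta1 = parallax_shift(d=0.1, s=0.05, z=2.0)    # shift for the occluding plane
delta2 = parallax_shift(d=0.1, s=0.05, z=10.0)   # shift for the occluded plane
```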
The method proposed by the embodiment of the present invention is described in detail below with reference to the accompanying drawings. As shown in Fig. 2, the method may specifically comprise the following steps.
Step 21: according to the translation amounts corresponding to different focus depths, translate the image sequence to be processed and perform summation and averaging to obtain a preliminary composite image sequence.
Specifically, Fig. 3 shows three images to be processed, taken from different viewing angles; the target is a piece of text occluded by a white grid. As can be seen from Fig. 3, the images of the objects are translated relative to one another because of the different shooting angles, and under the condition of equal camera spacing the translation amount Δ between adjacent images depends on the depth of the object itself.
First, all images are translated according to a given value of Δ, and the translated image sequence is then summed and averaged; repeating this for different values of Δ yields the preliminary composite image sequence.
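As an illustration of this step (not part of the claimed method), a minimal sketch in Python/NumPy is given below, assuming equally spaced cameras, purely horizontal parallax, integer pixel shifts and grayscale images of equal size; the function names are illustrative only.

```python
import numpy as np

def shift_image(img, dx):
    """Translate an image horizontally by dx pixels (np.roll wraps around; a
    real implementation would pad or crop instead)."""
    return np.roll(img, int(round(dx)), axis=1)

def preliminary_composites(images, deltas):
    """For each candidate inter-camera shift delta, register the k-th image by
    k*delta pixels and average the registered stack, giving one refocused
    (preliminary composite) image per candidate depth."""
    composites = []
    for delta in deltas:
        registered = [shift_image(img, k * delta) for k, img in enumerate(images)]
        composites.append(np.mean(registered, axis=0))
    return composites
```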
Step 22: translate the image sequence to be processed by the translation amount corresponding to the image in the preliminary composite image sequence that is focused on the occluding object.
Specifically, from the preliminary composite image sequence, the two images focused on the occluding grid and on the occluded text can be selected (as shown in Fig. 4); the corresponding translation amounts Δ1 and Δ2 represent the depths of the occluding plane and of the occluded plane, respectively. The images to be processed are translated with an adjacent-image translation amount Δ1, so that all of them are registered to the occluding plane, which yields the images registered to the grid plane shown in Fig. 5.
Step 23: compute a grey-level variance map from the translated image sequence to be processed.
Specifically, in the translated image sequence, the image of the occluding object occupies the same position in every image, while the image of the occluded target is still mutually offset from image to image. Therefore, the pixels representing the occluding object undergo only small colour or grey-level changes across the translated sequence, whereas the other pixels represent different objects in different images and their colour or grey-level changes are larger. The variance S(i, j) of each pixel sequence can be computed by the following formula, which yields the grey-level variance map of the occluding object shown in Fig. 6:
S(i, j) = (1/N) Σ_{k=1..N} [ I_k(i, j) − Ī(i, j) ]²,
Ī(i, j) = (1/N) Σ_{k=1..N} I_k(i, j)
where S(i, j) denotes the variance of the pixel sequence at (i, j), I_k(i, j) denotes the grey value of pixel (i, j) in the k-th translated image to be processed, Ī(i, j) denotes its mean over the sequence, k is a natural number and N is the number of images in the sequence.
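A minimal sketch of this variance computation (illustration only), assuming the registered images are stacked in an array of shape (N, H, W):

```python
import numpy as np

def variance_map(registered):
    """Per-pixel variance S(i, j) over the N images of an (N, H, W) stack that
    has been registered to the occluding plane."""
    mean = registered.mean(axis=0)                      # Ī(i, j)
    return np.mean((registered - mean) ** 2, axis=0)    # S(i, j)
```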
Step 24: identify the occluding object from the grey-level variance map by means of a predetermined value and obtain an occlusion binary map.
Specifically, as can be seen from Fig. 6, the variance of the part of the variance map that represents the occluding object is much smaller than the variance of the other parts of the image. Therefore, by selecting an appropriate threshold t, lying above the variance of the occluder region and below the variance of the brighter (whitish) regions of the variance map, the part of the image representing the occluding object can be identified. Let
T(i, j) = 0, if S(i, j) < t;  T(i, j) = 1, if S(i, j) ≥ t.
The occlusion binary map T of the occluding object is thus obtained (as shown in Fig. 7), where t denotes the predetermined value and T(i, j) denotes the value assigned to pixel (i, j) of the grey-level variance map, marking the occluding object and the occluded object: the occluding object takes the value 0 and the occluded object takes the value 1.
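A minimal sketch of this thresholding step (illustration only; the threshold t is scene-dependent and must be chosen as described above):

```python
import numpy as np

def occlusion_binary_map(S, t):
    """T(i, j) = 0 where S(i, j) < t (occluding object), 1 where S(i, j) >= t
    (occluded object)."""
    return (S >= t).astype(np.uint8)
```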
Step 25: multiply the occlusion binary map by the translated image sequence to be processed to obtain an image sequence from which the occluding object is eliminated.
The occlusion binary map and the translated image sequence to be processed are multiplied element by element, namely
J_k(i, j) = T(i, j) · I_k(i, j),
where J_k denotes the image sequence that is registered to the occluding plane and from which the occluding object has been eliminated, and k is a natural number.
The image sequence with the occluding object eliminated, shown in Fig. 8, is thus obtained: the parts of the original images that represent the occluding object are removed, which completes the identification of the occluding object.
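A minimal sketch of this element-wise masking (illustration only), assuming the registered images form an (N, H, W) stack and T is the (H, W) binary map from step 24:

```python
import numpy as np

def eliminate_occluder(registered, T):
    """J_k(i, j) = T(i, j) * I_k(i, j): broadcasting applies the (H, W) binary
    map to every image of the (N, H, W) registered stack."""
    return registered * T
```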
Further, in order to obtain a clear image of the occluded object, this embodiment may also comprise the following step:
Step 26: taking the difference between the translation amounts corresponding to the images in the preliminary composite image sequence focused on the occluding object and on the occluded object as a new translation amount, translate the occluder-free image sequence and perform summation and averaging to obtain the image with the occluding object eliminated.
Specifically, in order to register the occluder-free image sequence to the plane of the occluded text, J_k can be translated once more, with a translation amount of Δ2 − Δ1 between adjacent images. The translated image sequence is shown in Fig. 9; the pixels representing the occluded text now occupy the same position in every image of the occluder-free sequence. Summing and averaging this sequence once more yields the synthesized target image with the occluding object eliminated (shown in Fig. 10). Comparing with Fig. 4, it can be seen that the target image obtained after eliminating the occluder suffers far less from aliasing, and the image contrast and sharpness are clearly improved.
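A minimal sketch of this final registration and averaging (illustration only), under the same simplifying assumptions as before — integer horizontal shifts between adjacent images:

```python
import numpy as np

def synthesize_target(J, delta1, delta2):
    """J: occluder-free images already registered to the occluding plane.
    Re-register them to the occluded object's plane with the extra shift
    delta2 - delta1 between adjacent images, then sum and average."""
    extra = delta2 - delta1
    aligned = [np.roll(img, int(round(k * extra)), axis=1)
               for k, img in enumerate(J)]
    return np.mean(aligned, axis=0)
```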
The above steps describe the processing of grey-level images in which the occluding object lies in a single plane. In an actual scene, however, the occluder may span a certain depth range or consist of several occluders at different depths, and the images to be processed may be colour images; the method provided by this embodiment can equally be used to identify and eliminate the occluder in these cases. Specifically: when the occluder spans a depth range or consists of several occluders at different depths, steps 21 to 26 can be applied repeatedly, identifying and eliminating the occluder of each plane in turn (which amounts to peeling the occluders away from front to back), until the final synthesized target image with all occluders eliminated is obtained. When the images to be processed are colour images, step 23 of the above scheme is modified to compute a variance map for each of the three RGB channels of the translated image sequence; step 24 is modified to obtain an occlusion binary map for each of the three RGB channels from the corresponding variance map; the three occlusion binary maps are then combined by a logical AND to obtain a colour occlusion binary map; finally, steps 25 and 26 identify and eliminate the occluder in the colour images.
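A minimal sketch of this colour extension (illustration only), assuming the registered colour images form an (N, H, W, 3) stack and a single threshold t is used for all three channels (in general a per-channel threshold could be chosen):

```python
import numpy as np

def colour_occlusion_map(registered_rgb, t):
    """Colour occlusion binary map for an (N, H, W, 3) registered stack: one
    variance map and binary map per RGB channel, combined with a logical AND."""
    channel_maps = []
    for c in range(3):                                   # R, G, B
        S = registered_rgb[..., c].var(axis=0)           # per-channel variance map
        channel_maps.append(S >= t)                      # per-channel binary map
    return np.logical_and.reduce(channel_maps).astype(np.uint8)
```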
What is described above is merely a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or replacement that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be determined by the scope of protection of the claims.

Claims (5)

1. A method for identifying an occluding object in a perspective imaging process, characterized by comprising:
according to the translation amounts corresponding to different focus depths, translating the image sequence to be processed and performing summation and averaging to obtain a preliminary composite image sequence;
according to the translation amount corresponding to the image in said preliminary composite image sequence that is focused on the occluding object, translating the image sequence to be processed and performing calculations to obtain an image sequence from which the occluding object is eliminated;
wherein said calculating to obtain the image sequence from which the occluding object is eliminated specifically comprises:
computing a grey-level variance map from the translated image sequence to be processed, identifying the occluding object by means of a predetermined value to obtain an occlusion binary map, and multiplying said occlusion binary map by the translated image sequence to be processed to obtain the image sequence from which the occluding object is eliminated.
2. The method according to claim 1, characterized in that said computing a grey-level variance map from the translated image sequence to be processed comprises computing the variance of each pixel sequence by the following formula:
S(i, j) = (1/N) Σ_{k=1..N} [ I_k(i, j) − Ī(i, j) ]²,
Ī(i, j) = (1/N) Σ_{k=1..N} I_k(i, j)
where S(i, j) denotes the variance of the pixel sequence at (i, j), I_k(i, j) denotes the grey value of pixel (i, j) in the k-th translated image to be processed, k is a natural number, and N denotes the number of images in the image sequence to be processed and is a natural number.
3. The method according to claim 1, characterized in that said occlusion binary map, obtained by identifying the occluding object by means of a predetermined value, is computed by the following formula:
T(i, j) = 0, if S(i, j) < t;  T(i, j) = 1, if S(i, j) ≥ t.
where t denotes the predetermined value and T(i, j) denotes the value assigned to pixel (i, j) of the grey-level variance map, marking the occluding object and the occluded object: the occluding object takes the value 0 and the occluded object takes the value 1.
4. The method according to claim 1, characterized in that the method further comprises:
according to the difference between the translation amounts corresponding to the images in said preliminary composite image sequence that are focused on the occluding object and on the occluded object, translating said image sequence from which the occluding object is eliminated and performing summation and averaging to obtain the image with the occluding object eliminated.
5. The method according to claim 1, characterized in that the method further comprises:
for a colour image to be processed, first computing a variance map for each of the three RGB channels of the translated image sequence to be processed, then obtaining an occlusion binary map for each of the three RGB channels from the variance maps of the three RGB channels, and combining the occlusion binary maps of the three RGB channels by a logical AND to obtain a colour occlusion binary map.
CN2010101656151A 2010-04-30 2010-04-30 Method for identifying barrier in perspective imaging process Expired - Fee Related CN101853402B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101656151A CN101853402B (en) 2010-04-30 2010-04-30 Method for identifying barrier in perspective imaging process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101656151A CN101853402B (en) 2010-04-30 2010-04-30 Method for identifying barrier in perspective imaging process

Publications (2)

Publication Number Publication Date
CN101853402A CN101853402A (en) 2010-10-06
CN101853402B true CN101853402B (en) 2012-09-05

Family

ID=42804880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101656151A Expired - Fee Related CN101853402B (en) 2010-04-30 2010-04-30 Method for identifying barrier in perspective imaging process

Country Status (1)

Country Link
CN (1) CN101853402B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968784B (en) * 2012-10-17 2015-06-17 北京航空航天大学 Method for aperture synthesis imaging through multi-view shooting
CN113643289B (en) * 2021-10-13 2022-02-11 海门市芳华纺织有限公司 Fabric surface defect detection method and system based on image processing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6829383B1 (en) * 2000-04-28 2004-12-07 Canon Kabushiki Kaisha Stochastic adjustment of differently-illuminated images
CN101266685A (en) * 2007-03-14 2008-09-17 中国科学院自动化研究所 A method for removing unrelated images based on multiple photos
CN101562701A (en) * 2009-03-25 2009-10-21 北京航空航天大学 Digital focusing method and digital focusing device used for optical field imaging

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6829383B1 (en) * 2000-04-28 2004-12-07 Canon Kabushiki Kaisha Stochastic adjustment of differently-illuminated images
CN101266685A (en) * 2007-03-14 2008-09-17 中国科学院自动化研究所 A method for removing unrelated images based on multiple photos
CN101562701A (en) * 2009-03-25 2009-10-21 北京航空航天大学 Digital focusing method and digital focusing device used for optical field imaging

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dong-Hak Shin, et al., "Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging," Optics Express, 2008, vol. 16, no. 21, pp. 16294-16304. *
Mehdi DaneshPanah, et al., "Profilometry and optical slicing by passive three-dimensional imaging," Optics Letters, 2009, vol. 34, no. 7, pp. 1105-1107. *
薛盖超, et al., "Algorithm for eliminating the light-bleeding phenomenon in variance shadow maps," Journal of Computer-Aided Design & Computer Graphics, 2009, vol. 21, no. 2, pp. 165-171. *

Also Published As

Publication number Publication date
CN101853402A (en) 2010-10-06

Similar Documents

Publication Publication Date Title
TWI510086B (en) Digital refocusing method
CN105122793B (en) Image processing device, image capture device, and image processing program
US8384763B2 (en) Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging
US20100118122A1 (en) Method and apparatus for combining range information with an optical image
EP2252071A2 (en) Improved image conversion and encoding techniques
CN103561258B (en) Kinect depth video spatio-temporal union restoration method
US9965834B2 (en) Image processing apparatus and image acquisition apparatus
US9990738B2 (en) Image processing method and apparatus for determining depth within an image
KR20120090491A (en) Image segmentation device and method based on sequential frame imagery of a static scene
US20150229913A1 (en) Image processing device
CN103177432B (en) A kind of by coded aperture camera acquisition panorama sketch method
TW201813371A (en) Ghost artifact removal system and method
CN109493283A (en) A kind of method that high dynamic range images ghost is eliminated
US20210125305A1 (en) Video generation device, video generation method, program, and data structure
CN104853080B (en) Image processing apparatus
CN111064945B (en) Naked eye 3D image acquisition and generation method
CN102959942A (en) Image capture device for stereoscopic viewing-use and control method of same
EP2833637A1 (en) Method for processing a current image of an image sequence, and corresponding computer program and processing device
CN101853402B (en) Method for identifying barrier in perspective imaging process
KR20150095301A (en) Method and apparatus of generating depth map
CN106469306B (en) More people&#39;s image extract real-times and synthetic method based on infrared structure light
CN110290313B (en) Method for guiding automatic focusing equipment to be out of focus
CN112132771A (en) Multi-focus image fusion method based on light field imaging
EP4068207A1 (en) Method of pixel-by-pixel registration of an event camera to a frame camera
JP6439285B2 (en) Image processing apparatus, imaging apparatus, and image processing program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120905

Termination date: 20130430