CN100423021C - Method and device for segmenting a low depth-of-field image - Google Patents

Method and device for segmenting a low depth-of-field image

Info

Publication number
CN100423021C
CN100423021C · CNB2003101024202A · CN200310102420A
Authority
CN
China
Prior art keywords
image
zone
value
higher order
feature space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2003101024202A
Other languages
Chinese (zh)
Other versions
CN1497494A (en)
Inventor
C. Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seiko Epson Corp
Original Assignee
Seiko Epson Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seiko Epson Corp
Publication of CN1497494A
Application granted
Publication of CN100423021C
Anticipated expiration
Expired - Fee Related (current status)

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for extracting an object of interest from an image is provided. The method begins by defining an image feature space based upon frequency information. The image feature space is then filtered to smooth both the focused regions and the defocused regions while maintaining the respective boundaries associated with the focused regions and the defocused regions. The filtered image feature space is manipulated by region merging and adaptive thresholding in order to extract an object of interest. A computer readable medium, an image capture device, and an image searching system are also provided.

Description

Method and Apparatus for Low Depth-of-Field Image Segmentation
Technical field
This application claims priority from: (1) U.S. Provisional Patent Application No. 60/419,303, filed October 17, 2002, entitled "Segmentation of Image with Low Depth-of-Field Using Higher Order Statistics Test and Morphological Filtering by Reconstruction"; and (2) U.S. Provisional Patent Application No. 60/451,384, filed February 28, 2003, entitled "Automatic Segmentation of Low Depth-of-Field Image Using Morphological Filters and Region Merging". Each of these two provisional applications is incorporated herein by reference for all purposes.
The present invention relates generally to digital imaging technology, and more particularly to a method and apparatus for dividing an image into homogeneous regions.
Background Art
Automatic image segmentation is one of the most challenging problems in computer vision. The goal of image segmentation is to divide an image into homogeneous regions. Depth of field (DOF) refers to the range of distances, from nearest to farthest, over which objects in a picture are perceived to be in "sharp" focus. Low DOF is a photographic technique commonly used to assist in conveying depth information within a two-dimensional photograph. Low DOF typically refers to a state in which the object of interest (OOI) is in "sharp" focus while background objects are blurred, i.e., out of focus. Figures 1A to 1C are exemplary illustrations of low DOF images. In Figure 1A, the butterfly, i.e., the object of interest, is sharply focused and the background is defocused. In Figure 1B, the football player and the football are the objects of interest, since both are sharply focused while the background is defocused. Similarly, with reference to Figure 1C, the bird is sharply focused and the remainder of the image is defocused. Segmentation of low DOF images has many applications, for example image indexing for content-based retrieval, object-based image compression, video object extraction, 3D microscopic image analysis, and segmentation used for depth estimation.
Assuming that a sharply focused region contains sufficient high frequency components, the focused regions can be distinguished from the rest of a low DOF image by comparing the amount of high frequency content. Two classes of methods exist for such low DOF image segmentation: edge-based and region-based methods. Edge-based methods extract an object by measuring the amount of defocus of each edge pixel along the object boundary. Edge-based algorithms have been shown to be accurate for segmenting man-made objects and objects with sharp boundary edges. However, these methods often fail to detect the boundary edges of natural objects and, as a result, produce disconnected boundaries.
Region-based segmentation algorithms, on the other hand, rely on detecting the high frequency regions in the image. Here, the rationale is that the degree of focus of each pixel can be measured by computing its high frequency components. Several methods have been used for this purpose, such as the sum of squared anti-Gaussian (SSAG) functions, the variance of wavelet coefficients in the high frequency bands, multiscale statistics of high frequency wavelet coefficients, and local variance. Relying on high frequency components alone tends to cause errors in both focused and defocused regions. In defocused regions, busy texture areas may still contain sufficiently strong high frequency components despite being blurred by defocusing, and such regions are easily misclassified as focused regions. Conversely, focused regions with nearly constant gray levels may also produce errors. Therefore, relying solely on the sharp details of the OOI can be a limitation of region-based low DOF image segmentation methods. Moreover, even when a refinement algorithm for high resolution classification is incorporated, such multiscale methods tend to produce ragged boundaries.
Figure 2 is a schematic diagram of the optical geometry of a typical image capture device, such as a camera. Lens 100 has the following shortcoming: the lens brings into sharp focus only the light from points at the distance -z given by the familiar lens equation:

$$\frac{1}{z'} + \frac{1}{-z} = \frac{1}{f} \qquad (2)$$
where z' is the distance from the lens 100 to the imaging plane 102 and f is the focal length. Points at other distances are imaged as small circles. The size of this circle of confusion can be determined as follows: a point at distance -z̄ is imaged at a distance z̄' from the lens, where 1/z̄' + 1/(-z̄) = 1/f,
so

$$\bar z' - z' = \frac{f}{\bar z + f}\,\frac{f}{z + f}\,(\bar z - z). \qquad (3)$$
If the imaging plane 102 is positioned so as to correctly receive the focused image of points at distance -z, then a point at distance -z̄ gives rise to a blur circle whose diameter is proportional to the lens diameter d and to |z̄' - z'|, where d denotes the diameter of the lens 100. The depth of field (DOF) is the range of distances over which objects are in "good enough" focus, where "good enough" means that the diameter of the circle of confusion is smaller than the resolution of the imaging device.
Naturally, the DOF depends on what sensor is used, but in any case it is clear that the larger the lens aperture, the smaller the DOF. Of course, errors in focusing become more serious when a large aperture is employed. As shown in Figure 2, d_f 104 and d_r 106 denote the front and rear limits of the "depth of field", respectively. With a low DOF, only the OOI lies within the range where the circle of confusion stays small, so only the OOI is in sharp focus while objects in the background are blurred and out of focus. Furthermore, segmentation techniques based on color and intensity information suffer from poor extraction results in this situation.
Accordingly, there is a need for a method and apparatus that solve the problems of the prior art by segmenting an image associated with a low depth of field, so that an object of interest can be extracted from the background accurately and efficiently.
Summary of the invention
Broadly speaking, the present invention fills these needs by providing a method and system that transform image data into frequency-based image data and simplify the frequency-based image data so that an object of interest (OOI) can be extracted from the image data more effectively. It should be appreciated that the present invention can be implemented in numerous ways, including as a method, a system, computer code, or a device. Several inventive embodiments of the present invention are described below.
In one embodiment, a method for segmenting image data is provided. The method begins by defining an image feature space according to frequency information. The data of the image feature space are then simplified through morphological tools. Next, a region of the filtered image feature space is designated as an initial object of interest. Here, this region is referred to as a seed region, and it is associated with the highest value among the regions of the filtered image feature space. Each region of the filtered image space is associated with a substantially constant frequency level. Then, the boundary of the initial OOI of the filtered image feature space is updated through a region merging technique. Adaptive thresholding is subsequently carried out in order to determine the ratio of the size of the initial object of interest to the size of the image data.
In another embodiment, a method of image segmentation is provided. The method generates a higher order statistics (HOS) map from image data. The HOS map is then simplified. Next, a boundary associated with a focused region of the modified HOS map is determined. A final segmentation of the focused region is then determined by adaptive thresholding.
In another embodiment, a method for segmenting an object of interest of an image is provided. The method begins by defining an image feature space according to frequency information. Then, the image feature space is filtered to smooth both the focused regions and the defocused regions, while maintaining the respective boundaries associated with the focused regions and the defocused regions.
In yet another embodiment, a computer readable medium having program instructions for image segmentation is provided. The computer readable medium includes program instructions for generating a higher order statistics (HOS) map from image data. Program instructions for modifying the HOS map are included. Program instructions for determining a boundary associated with a focused region of the modified HOS map are provided. Program instructions for determining a final segmentation of the focused region according to a ratio between a size of a value associated with the focused region and a size of the image data are also included.
In another embodiment, an image capture device is provided. The image capture device includes a lens configured to focus on objects within a depth of field (DOF). An image recording assembly is included in the image capture device. The image recording assembly is configured to generate, from the image information received through the lens, a digital image that includes the objects within the DOF. The image recording assembly is capable of generating a higher order statistics (HOS) map of the digital image in order to extract the objects within the DOF from the digital image.
In another embodiment, an image search system is provided. The image search system includes an image capture device having a lens configured to focus on objects within a depth of field (DOF). An image extraction component in communication with the image capture device is included. The image extraction component is configured to extract the objects within the DOF. An image retrieval system in communication with the image extraction component is included. The image retrieval system is configured to receive data corresponding to the objects within the DOF. The image retrieval system is further configured to identify a match of the OOI between the received data and collected image data.
In one aspect of the invention, a method for segmenting image data is provided, comprising the steps of:
defining an image feature space according to frequency information;
filtering the image data of the image feature space using morphological tools;
designating a region of the filtered image feature space as an initial object of interest;
identifying a boundary of the initial object of interest of the filtered image feature space; and
determining a ratio between a size of the initial object of interest and a size of the image data;
wherein the method operation of designating a region of the filtered image feature space as an initial object of interest comprises the steps of:
identifying regions of the image feature space associated with substantially constant frequency levels; and
assigning a value to each of the identified regions according to its substantially constant frequency level, wherein the region of the filtered image space associated with the initial object of interest is assigned a highest value;
and wherein the method operation of identifying a boundary of the initial object of interest of the filtered image feature space comprises the steps of:
computing a normalized overlapping boundary, the normalized overlapping boundary representing a value indicative of the boundary pixels shared between the initial object of interest and a region adjoining the boundary of the initial object of interest;
and if the value is greater than a threshold, the method comprises the step of:
merging the region adjoining the boundary of the initial object of interest into the initial object of interest.
In another aspect of the invention, a method of image segmentation is provided, comprising the steps of:
generating a higher order statistics map from image data;
modifying the higher order statistics map;
determining a boundary associated with a focused region of the modified higher order statistics map;
determining a final segmentation of the focused region according to a ratio between a size of a value associated with the focused region and a size of the image data;
identifying interiors of homogeneous regions of the modified higher order statistics map; and
assigning labels to the homogeneous regions;
wherein the method operation of determining a boundary associated with a focused region of the modified higher order statistics map comprises the steps of:
determining a value indicating a number of boundaries shared between the focused region and a region adjoining the boundary; and
if the value is greater than a threshold, the method comprises the step of:
merging the region adjoining the boundary with the focused region.
In yet another aspect of the invention, an image capture device is provided, comprising:
a lens configured to focus on objects within a depth of field;
an image recording assembly configured to generate, from image information received through the lens, a digital image including the objects within the depth of field;
the image recording assembly being capable of generating a higher order statistics map of the digital image;
and of segmenting the objects within the depth of field from the digital image according to the higher order statistics map, including:
determining a boundary associated with a focused region of the higher order statistics map;
determining a final segmentation of the focused region according to a ratio between a size of a value associated with the focused region and a size of the image data;
wherein determining a boundary associated with a focused region of the higher order statistics map includes:
determining a value indicating a number of boundaries shared between the focused region and a region adjoining the boundary; and
if the value is greater than a threshold: merging the region adjoining the boundary with the focused region.
In still another aspect of the invention, an image search system is provided, comprising the image capture device described above, the image search system further comprising:
an image extraction component in communication with the image capture device, the image extraction component being configured to extract the objects within the depth of field; and
an image retrieval system in communication with the image extraction component, the image retrieval system being configured to receive data corresponding to the objects within the depth of field, the image retrieval system being further configured to identify a match between the received data and collected image data.
Other aspects and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate by way of example the principles of the invention.
Brief Description of the Drawings
The present invention will be readily understood from the following detailed description in conjunction with the accompanying drawings, in which like reference numerals designate like structural elements.
Figures 1A to 1C are exemplary illustrations of low DOF images.
Figure 2 is a schematic diagram of the optical geometry of a typical image capture device, such as a camera.
Figures 3A-3C show an original image and the associated image feature spaces in accordance with one embodiment of the invention, illustrating the effect of using higher order statistics.
Figures 4A-4E illustrate, in accordance with one embodiment of the invention, a low DOF image (4A), its HOS map (4B), the result of applying morphological filtering by reconstruction to the HOS map (4C), the result of applying region merging (4D), and the result of applying adaptive thresholding (4E).
Figures 5A-5C illustrate the region merging technique in accordance with one embodiment of the invention.
Figures 6-1 to 6-4 provide experimental results for four series of images at each step associated with the segmentation technique of the embodiments described herein.
Figures 7-1 to 7-4 show four series of images in which the results of existing segmentation techniques can be compared with the results produced by applying the embodiments described herein.
Figure 8 is a flowchart of the method operations for extracting an object of interest from an image in accordance with one embodiment of the invention.
Figure 9 is a simplified schematic diagram of an image capture device having circuitry configured to extract an object of interest associated with a low depth-of-field image, in accordance with one embodiment of the invention.
Figure 10 is a simplified schematic diagram of an image search system in accordance with one embodiment of the invention.
Detailed Description of the Embodiments
Described herein are a system, apparatus, and method for extracting an object of interest (OOI) from a low depth-of-field (DOF) image. It will be apparent to those skilled in the art, however, from the description that follows, that the present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to obscure the present invention unnecessarily. Figures 1A-C and Figure 2 were described in the "Background Art" section. The term "about" as used herein means +/-10% of the referenced value.
Embodiments of the present invention provide a method and system for separating the sharply focused object of interest (OOI) associated with a low depth-of-field (DOF) image from the other foreground or background objects of the image. Thus, an image with a low DOF can be divided into focused regions and defocused regions. Frequency information associated with the image data, rather than color or intensity information, is used to partition the image. Unlike intensity or color image segmentation, in which attributes such as intensity, texture, or color are used to find regions, the focus cue can play the most important role in automatically extracting a focused OOI. The low DOF image is transformed into a feature space suitable for segmentation. In one embodiment, the transformation into this feature space is accomplished by computing higher order statistics (HOS) for all pixels in the low DOF image, thereby producing a HOS map. The HOS map is then simplified (i.e., modified) by morphological filtering by reconstruction, as described below. The boundary of the OOI is defined and then updated by region merging. Subsequently, the final OOI is determined by adaptive thresholding. Accurate extraction of the OOI associated with the low DOF image data is thereby provided for numerous applications.
In order to model the defocusing of a focused image, the blurring effect caused by defocusing is commonly described by the 2-D Gaussian function:
$$G_\sigma(x, y) = \frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right) \qquad (1)$$

where σ is the spread parameter, or filter scale, controlling the amount of defocus. A defocused image I_d(x, y) can therefore be modeled as the linear convolution of a focused image I_f(x, y) with the Gaussian function G_σ(x, y):

$$I_d(x, y) = G_\sigma(x, y) * I_f(x, y) \qquad (4)$$

As shown in equation (4), since the defocused image is low-pass filtered, the high frequency components in that image are removed or reduced. Assuming that a sharply focused region contains sufficient high frequency components, the focused regions can thus be distinguished from the rest of the low DOF image by comparing the amount of high frequency content.
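As an illustration of this defocus model, the following minimal Python sketch blurs a focused image with a Gaussian low-pass filter, per equations (1) and (4); the function name and the default value of sigma are assumptions of this sketch, not part of the described method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_defocus(focused: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Model a defocused image I_d = G_sigma * I_f (equations (1) and (4)).

    sigma is the spread parameter controlling the amount of defocus;
    a larger sigma removes more of the high frequency content.
    """
    return gaussian_filter(focused.astype(np.float64), sigma=sigma)
```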
Let R denote the set of pixels, R = {(k, l): 1 ≤ k ≤ K, 1 ≤ l ≤ L}, where the image size is K × L. The goal is to partition R into the sharply focused object of interest, denoted OOI, and the remaining region, denoted OOI^C.
Let P = {R_i, i ∈ {1, ..., N}} denote a partition of R. The OOI of the image is then defined as follows:

$$OOI = \bigcup_{i=1}^{N_{ooi}} R_i \qquad (5)$$

where R_i is the i-th connected region and N_ooi denotes the number of regions belonging to the OOI. In other words, the OOI represents the focused object of interest, composed of N_ooi regions of P. Equation (5) naturally allows multiple OOIs to be defined, i.e., the OOI may be composed of separate sub-OOIs.
The initial step of segmentation consists of transforming the input low DOF image I into an appropriate feature space. It should be appreciated that the choice of feature space may depend on the application at which the segmentation algorithm is aimed. For example, the feature space may be a set of wavelet coefficients or a local variance image field.
In one embodiment, higher order statistics (HOS) are used for the transformation into the feature space. More particularly, fourth-order moments are calculated for all pixels in the image. It should be appreciated that the fourth-order moment has the ability to suppress Gaussian noise, thereby improving the accuracy of the final OOI extraction. The fourth-order moment at pixel (x, y) is defined as follows:

$$\hat m^{(4)}(x, y) = \frac{1}{N_\eta}\sum_{(s, t)\in\eta(x, y)}\left(I(s, t) - \hat m(x, y)\right)^4 \qquad (6)$$
where η(x, y) is the set of pixels centered at (x, y), \hat m(x, y) is the sample mean of I(x, y), i.e., \hat m(x, y) = (1/N_η) Σ_{(s,t)∈η(x,y)} I(s, t), and N_η is the size of η. Since the dynamic range of the fourth-order moment values is very large, the value at each pixel is downscaled by 100 and limited to 255, so that each pixel takes a value in [0, 255]. The resulting image is called the HOS map and is defined as:

$$HOS(x, y) = \min\!\left(255,\; \hat m^{(4)}(x, y)/100\right) \qquad (7)$$

Applying equation (7) to all pixels yields the HOS map, O = {HOS(x, y): (x, y) ∈ R}.
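A minimal sketch of the HOS map of equations (6) and (7), assuming a grayscale input and a 3 × 3 neighborhood η; the fourth central moment is expanded into local raw moments so that the whole map can be computed with box filters. This is illustrative only and not the patented implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hos_map(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Higher order statistics (HOS) map of a grayscale image (equations (6), (7))."""
    I = image.astype(np.float64)
    # local raw moments over the size x size window eta centered at each pixel
    m1 = uniform_filter(I, size)
    m2 = uniform_filter(I ** 2, size)
    m3 = uniform_filter(I ** 3, size)
    m4 = uniform_filter(I ** 4, size)
    # fourth central moment E[(I - m1)^4] expanded in terms of the raw moments
    central4 = m4 - 4.0 * m1 * m3 + 6.0 * (m1 ** 2) * m2 - 3.0 * (m1 ** 4)
    central4 = np.maximum(central4, 0.0)  # guard against round-off
    # downscale by 100 and cap at 255, as in equation (7)
    return np.minimum(255.0, central4 / 100.0)
```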
Figures 3A-C show an original image and the associated image feature spaces in accordance with one embodiment of the invention, illustrating the effect of using higher order statistics. Figure 3C shows the HOS map generated, as described herein, from the low DOF image of Figure 3A. Comparing it with the local variance map shown in Figure 3B, it can be seen that the HOS map of Figure 3C yields denser and higher values in the focused areas while suppressing noise in the defocused regions. That is, OOI 110c shows a more clearly solid white region than image 110b.
It should be appreciated that the feature space transformation described above, i.e., applying the HOS calculation to define the HOS map, eventually allows a more suitable feature space to be defined for image segmentation. In one embodiment, the HOS map transformed from the low DOF image has gray levels from 0 to 255, where higher values within the 0-255 range correspond to a higher potential of belonging to a focused region. Since focused smooth regions cannot be detected by the HOS calculation, while some defocused regions may produce noise, a suitable tool for modifying the HOS map is needed in order to remove the dark patches in the focused regions and the bright specks in the defocused regions, respectively.
Mathematical morphology is well known as an approach for smoothing noisy gray-level images by a determined composition of openings and closings with a given structuring element. Several morphological tools rely on two basic sets of transformations known as erosion and dilation. Let B denote a window, or flat structuring element, and let B_xy be the translation of B so that its origin is located at (x, y). Then the erosion ε_B(O) of the HOS map O by the structuring element B, which is used in constructing the morphological filters for image simplification, is
$$\varepsilon_B(O)(x, y) = \min_{(k, l)\in B_{xy}} HOS(k, l) \qquad (8)$$

Similarly, the dilation is

$$\delta_B(O)(x, y) = \max_{(k, l)\in B_{xy}} HOS(k, l) \qquad (9)$$

The elementary erosion and dilation allow morphological filters such as the morphological opening and closing to be defined:
the morphological opening γ_B(O) and closing φ_B(O) are given by

$$\gamma_B(O) = \delta_B(\varepsilon_B(O)), \qquad \varphi_B(O) = \varepsilon_B(\delta_B(O)). \qquad (10)$$

The morphological opening operator γ_B(O) thus applies the erosion ε_B(·) followed by the dilation δ_B(·). Erosion makes an image darker, while dilation makes it brighter. The morphological opening (or closing) simplifies the original signal by removing the bright (or dark) components that do not fit within the structuring element B. Furthermore, the morphological operators can be applied directly to binary images without any change.
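The flat erosion, dilation, opening, and closing of equations (8)-(10) can be sketched with SciPy's grayscale morphology as follows; the square window stands in for the structuring element B, and the helper names and default size are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def opening(hos: np.ndarray, se: int = 31) -> np.ndarray:
    """Morphological opening gamma_B(O) = dilation(erosion(O)), equations (8)-(10)."""
    return grey_dilation(grey_erosion(hos, size=(se, se)), size=(se, se))

def closing(hos: np.ndarray, se: int = 31) -> np.ndarray:
    """Morphological closing phi_B(O) = erosion(dilation(O))."""
    return grey_erosion(grey_dilation(hos, size=(se, se)), size=(se, se))
```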
One characteristic of these morphological filters is that they do not allow a perfect preservation of the boundary information of objects, which may be a drawback in some cases. To overcome this drawback, filters by reconstruction can be adopted. Although similar in nature to the morphological opening and closing, the filters by reconstruction rely on different erosion and dilation operators, which makes their definitions slightly more involved. The geodesic erosion of size 1 of the original image O with respect to the reference image O^R, denoted ε^(1)(O, O^R), is defined as follows:

$$\varepsilon^{(1)}(O, O^R)(x, y) = \max\{\varepsilon_B(O)(x, y),\; O^R(x, y)\}, \qquad (11)$$

and the dual geodesic dilation δ^(1)(O, O^R) of O with respect to O^R is given by:

$$\delta^{(1)}(O, O^R)(x, y) = \min\{\delta_B(O)(x, y),\; O^R(x, y)\} \qquad (12)$$

Thus, the geodesic dilation δ^(1)(O, O^R) dilates the image O using the classical dilation operator δ_B(O). The dilated gray values are greater than or equal to the original values in O. However, as discussed below, the geodesic dilation limits these gray values to the corresponding gray values of the reference image O^R.
Geodesic erosions and dilations of arbitrary size are then obtained by iterating the elementary versions ε^(1)(O, O^R) and δ^(1)(O, O^R). For example, the geodesic erosion (dilation) of infinite size, the so-called reconstruction by erosion (by dilation), is given by the following:
Reconstruction by erosion:

$$\varphi^{(rec)}(O, O^R) = \varepsilon^{(\infty)}(O, O^R) = \varepsilon^{(1)}\circ\varepsilon^{(1)}\circ\cdots\circ\varepsilon^{(1)}(O, O^R) \qquad (13)$$

Reconstruction by dilation:

$$\gamma^{(rec)}(O, O^R) = \delta^{(\infty)}(O, O^R) = \delta^{(1)}\circ\delta^{(1)}\circ\cdots\circ\delta^{(1)}(O, O^R) \qquad (14)$$

It should be appreciated that φ^(rec)(O, O^R) and γ^(rec)(O, O^R) reach stability after a certain number of iterations. In one embodiment, the two simplification filters, the morphological opening by reconstruction γ^(rec)(γ_B(O), O) and the morphological closing by reconstruction φ^(rec)(φ_B(O), O), can be viewed as special cases of γ^(rec)(O, O^R) and φ^(rec)(O, O^R).
Similar to the morphological opening, the opening by reconstruction first applies the basic erosion operator ε_B(O) to eliminate the bright components that do not fit within the structuring element B. However, instead of merely applying a basic dilation afterwards, the contours of the components that have not been completely removed are restored by the reconstruction by dilation γ^(rec)(·). The reconstruction is achieved by choosing O as the reference image O^R, which guarantees that, for each pixel, the resulting gray level will not be higher than that of the original image O.
In the embodiments of the scheme described here, the closing-opening by reconstruction is applied to the HOS map as a simplification tool. It should be appreciated that one strength of the closing-opening by reconstruction filter is that it fills small dark holes and removes small isolated bright specks, while perfectly preserving the other components and their contours. Of course, the size of the components that are removed depends on the size of the structuring element.
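The simplification step can be sketched as follows, reusing the `opening`/`closing` helpers above: the geodesic operations of equations (11) and (12) are iterated to stability (equations (13) and (14)), and the closing-opening by reconstruction is then applied to the HOS map. The 3 × 3 window for the elementary geodesic step and the combination order are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def reconstruct_by_dilation(marker: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """gamma_rec(marker, reference): iterate geodesic dilation, equations (12), (14)."""
    cur = marker
    while True:
        nxt = np.minimum(grey_dilation(cur, size=(3, 3)), reference)
        if np.array_equal(nxt, cur):
            return cur
        cur = nxt

def reconstruct_by_erosion(marker: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """phi_rec(marker, reference): iterate geodesic erosion, equations (11), (13)."""
    cur = marker
    while True:
        nxt = np.maximum(grey_erosion(cur, size=(3, 3)), reference)
        if np.array_equal(nxt, cur):
            return cur
        cur = nxt

def simplify_hos(hos: np.ndarray, se: int = 31) -> np.ndarray:
    """Closing-opening by reconstruction: fill small dark holes, remove bright specks."""
    closed = reconstruct_by_erosion(closing(hos, se), hos)        # phi_rec(phi_B(O), O)
    return reconstruct_by_dilation(opening(closed, se), closed)   # gamma_rec(gamma_B(.), .)
```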
Figures 4A-4C show the HOS map of a low DOF image and a schematic illustration of applying the morphological filter by reconstruction to the HOS map in accordance with one embodiment of the invention. Figures 4D and 4E are discussed further below. Figure 4A is an exemplary low DOF image. Figure 4B is the resulting HOS map produced by computing the HOS for each pixel value of the image data of Figure 4A. As can be seen, Figure 4B contains dark patches inside the object of interest, which is defined as the two football players and the football 114a. There are also bright specks in the defocused region, for example in region 116b. Simplification of the HOS map, e.g., applying the morphological filter by reconstruction to the HOS map of Figure 4B, removes the small dark patches within the focused region. That is, Figure 4C shows a simplified HOS map, where the simplification is achieved by the morphological filter by reconstruction as described above. For example, football 114c no longer contains the dark patches of football 114b. Likewise, comparing Figure 4C with Figure 4B, the small bright specks in the defocused region have been removed. Thus, as shown in Figure 4C, the focused smooth regions are well covered, while the scattered small regions are removed by the filter.
For typical structural segmentation techniques, which partition a focused image or scene into regions that are homogeneous in intensity, marker extraction and a watershed algorithm may be performed after the simplification by morphological filters. The marker extraction step selects initial regions, for example by identifying large regions of constant gray level obtained in the simplification step, where the simplification step may be an application of the morphological filters discussed above. After marker extraction, a large number of pixels are not assigned to any region. These pixels correspond to uncertain areas, mainly concentrated around the region contours. Assigning these pixels to a given region can be viewed as a decision process that precisely defines the partition, or segmentation. One morphological decision tool is the watershed algorithm, which labels pixels in a manner similar to region growing techniques.
Unlike traditional intensity-based segmentation of fully focused images, the task in low DOF image segmentation is to extract the focused regions (i.e., the OOI) from the image. Similar focused regions can be merged by using a seed region that is very likely to be part of the OOI, as described below.
In one embodiment, each flat zone is initially treated as a region regardless of its size, which means that even a one-pixel zone may become a region. It is then assumed that the region associated with the highest value belongs to the initial OOI, and that regions with values from 0 to T_L belong to the initial OOI^C. Referring to Figure 4C, the simplified HOS map generally contains uncertain regions, for example region 112c, having values v with T_L < v < 255, which must be assigned to either the OOI or the OOI^C. Those of ordinary skill in the art will appreciate that OOI refers to the object of interest, while OOI is also the symbol used in the mathematical expressions. The OOI is updated through these assignments, which can be carried out by using the boundary information between the uncertain regions and the current OOI, OOI_n (i.e., the OOI at the n-th iteration). Accordingly, the algorithm discussed below performs this function by computing the normalized overlapping boundary (nob) in order to assign the i-th uncertain region R_{n,i} at the n-th iteration to OOI_n.
For a given partition P_n, the normalized overlapping boundary (nob) between the i-th uncertain region R_{n,i} ∈ P_n and OOI_n is given by the following formula:
$$nob_{n,i} = \frac{\mathrm{cardinal}(BR_{n,i}\cap OOI_n)}{\mathrm{cardinal}(BR_{n,i})}, \qquad (15)$$

where the set of boundary pixels of R_{n,i} is defined as BR_{n,i} = {x ∉ R_{n,i} : min_{r∈R_{n,i}} ||r − x|| ≤ T_b}.
It should be appreciated that equation (15) takes the value zero when the uncertain region R_{n,i} is not adjacent to OOI_n, and takes the value 1 when R_{n,i} is completely surrounded by OOI_n. Thus, in one embodiment of the invention, a value between 0 and 1 can be used to decide whether an uncertain region in P_n should be assigned to OOI_n or to OOI_n^C. In another embodiment of the invention, the threshold T_b used to define the boundary pixels of a region is simply set to 1. Obviously, an uncertain region R_{n,i} ∈ P_n belongs either to OOI_n or to some other region. In terms of a hypothesis test,

$$H_0: R_{n,i} \subseteq OOI_n; \qquad H_1: H_0^c. \qquad (16)$$
The normalized overlapping boundary can be modeled as a continuous random variable nob (the random variable being denoted in boldface), taking nob values in [0, 1]. If nob_{n,i} is greater than a threshold, the region R_{n,i} is merged into OOI_n. The partition P_n and OOI_n are then updated, yielding an increasing sequence of OOI_n that finally converges to the OOI. In one embodiment, a starting point for finding this threshold T_nob is computed by the following likelihood ratio test (it should be appreciated that the iteration index n is dropped for notational simplicity):
If P(ooi | nob_i) > P(ooi^c | nob_i), then R_i is assigned to ooi; otherwise it is assigned to ooi^c. Here ooi denotes the OOI class with prior probability P(ooi), and ooi^c denotes the non-OOI class with prior probability P(ooi^c) = 1 − P(ooi). P(ooi | nob_i) and P(ooi^c | nob_i) denote the a posteriori conditional probabilities corresponding to H_0 and H_1, respectively. Applying Bayes' theorem to both sides of the expression and rearranging the terms gives:

$$\frac{p(nob_i \mid ooi)}{p(nob_i \mid ooi^c)} \;\underset{H_1}{\overset{H_0}{\gtrless}}\; \frac{P(ooi^c)}{P(ooi)}, \qquad (17)$$

The ratio on the left-hand side is known as the likelihood ratio, and the entire equation is commonly called the likelihood ratio test. Since this test is based on selecting the region class with the maximum a posteriori probability, the decision criterion is called the maximum a posteriori (MAP) criterion. It is also called the minimum error criterion because, on average, it yields the minimum number of incorrect decisions. Furthermore, since objects of interest and background objects may be of arbitrary size and shape, equal priors may be adopted, i.e., P(ooi) = P(ooi^c), so that the expression simplifies to the maximum likelihood (ML) criterion:

$$\frac{p(nob_i \mid ooi)}{p(nob_i \mid ooi^c)} \;\underset{H_1}{\overset{H_0}{\gtrless}}\; 1. \qquad (18)$$

Modeling the class conditional probability density functions with exponential distributions yields:

$$p(nob_i \mid ooi^c) = \lambda_1 e^{-\lambda_1 nob_i}\, u(nob_i)$$
$$p(nob_i \mid ooi) = \lambda_2 e^{-\lambda_2 (1 - nob_i)}\, u(1 - nob_i) \qquad (19)$$

where u(x) denotes the step function. The above distributions approximately model the real data: p(nob_i | ooi) takes high values near nob_i = 1 and decays rapidly as nob_i → 0, while p(nob_i | ooi^c) takes high values near nob_i = 0 and decays rapidly as nob_i → 1. Finally, rearranging equations (18) and (19) as shown below yields the optimal threshold for nob_i:

$$nob_i \;\underset{H_1}{\overset{H_0}{\gtrless}}\; \frac{\lambda_2}{\lambda_1 + \lambda_2} + \frac{\ln(\lambda_1/\lambda_2)}{\lambda_1 + \lambda_2} = T_{nob}. \qquad (20)$$

The parameters λ_1 and λ_2 can be estimated from real data. However, if symmetry between the exponential distributions is assumed (λ_1 = λ_2), the expression for the optimal threshold can be approximated and reduced to:

$$T_{nob} = \frac{\lambda_2}{\lambda_1 + \lambda_2} + \frac{\ln(\lambda_1/\lambda_2)}{\lambda_1 + \lambda_2} \approx \frac{1}{2}. \qquad (21)$$

Thus, if nob_i is greater than T_nob, R_i is merged into the OOI and the OOI is updated. This process iterates until no further merging occurs. It should be appreciated that the value 1/2 is an exemplary value and that the invention is not limited to 1/2, since any appropriate value may be chosen for T_nob.
Figures 5A-5C illustrate the region merging technique in accordance with one embodiment of the invention. In Figure 5A, nob_i is greater than T_nob, so R_i 122 is merged into OOI_0, whereas R_k 126 is not merged into OOI_0 because nob_k is less than T_nob. In other words, the boundary shared between R_i 122 and OOI_0 120a is more than 1/2 of the entire boundary of R_i 122, so R_i 122 is merged into OOI_0 120a to define OOI_1 120b of Figure 5B. Since the boundary shared between R_k 126 and OOI_0 120a is less than 1/2, R_k 126 is not merged into OOI_0 120a. As mentioned above, T_nob may be any suitable value between 0 and 1, inclusive. In the next iteration, shown in Figure 5B, since nob_j > T_nob, R_j 124a is merged into OOI_1 120b, yielding OOI_2 120c of Figure 5C. To speed up this process, in one embodiment of the invention, very small regions may be merged in advance into the adjacent region with the closest value. For example, as an initial step, R_j 124a could be merged into region R_i 122. Figure 4D shows the result of applying region merging to the simplified HOS map of Figure 4C. For example, region 112c of Figure 4C has been merged into the OOI 118 of Figure 4D by applying the region merging technique described above.
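A sketch of the region merging loop based on the normalized overlapping boundary of equation (15), with T_b = 1 and T_nob = 1/2; it assumes that an integer label image of flat zones, a boolean seed OOI mask, and a list of uncertain region labels have been produced elsewhere.

```python
import numpy as np
from scipy.ndimage import binary_dilation

T_NOB = 0.5  # merging threshold from equation (21)

def merge_regions(labels: np.ndarray, ooi: np.ndarray, uncertain: list) -> np.ndarray:
    """Iteratively merge uncertain flat zones into the OOI (equations (15)-(21))."""
    ooi = ooi.copy()
    pending = set(uncertain)
    changed = True
    while changed and pending:
        changed = False
        for lab in sorted(pending):
            region = labels == lab
            # boundary pixels: pixels just outside the region, within distance T_b = 1
            boundary = binary_dilation(region, structure=np.ones((3, 3))) & ~region
            if not boundary.any():
                continue
            nob = (boundary & ooi).sum() / boundary.sum()
            if nob > T_NOB:          # H0 accepted: the region belongs to the OOI
                ooi |= region
                pending.discard(lab)
                changed = True
    return ooi
```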
A final size decision associated with the focused region (i.e., the OOI) is carried out by adaptive thresholding. The adaptive threshold decision can be based on the assumption that the OOI occupies a reasonable portion of the image. Starting with T_A = 255, the threshold is decreased until the size of the OOI becomes greater than about 20% of the image size. For example, referring to Figure 5C, R_k 126 might not be determined to be part of the OOI, because the size of OOI_2 120c is already greater than about 20% of the image size. If, however, the size of OOI_2 120c were smaller than about 20% of the image size, R_k 126 could be considered part of the OOI. It should be appreciated that the invention is not limited to the value of 20% of the image size for the adaptive threshold decision; any suitable value may be chosen here for the ratio of the OOI size to the image size. Referring to Figure 4E, the adaptive thresholding technique can be applied to Figure 4D in order to produce the image of Figure 4E.
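A sketch of the adaptive threshold decision: starting from T_A = 255, the threshold is lowered and the regions whose representative value exceeds it are added until the OOI covers about 20% of the image. The per-region value dictionary is an assumed input of this sketch.

```python
import numpy as np

def adaptive_threshold(labels: np.ndarray, values: dict, min_fraction: float = 0.20) -> np.ndarray:
    """Grow the OOI by lowering T_A from 255 until it exceeds min_fraction of the image.

    values maps each region label to its representative HOS value in [0, 255].
    """
    total = labels.size
    ooi = np.zeros(labels.shape, dtype=bool)
    for t_a in range(255, -1, -1):
        for lab, v in values.items():
            if v >= t_a:
                ooi |= labels == lab
        if ooi.sum() > min_fraction * total:
            break
    return ooi
```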
The embodiments discussed herein have been implemented and tested on low DOF images selected from the JPEG-compressed COREL™ CD-ROM image collection. The color images were first converted to grayscale images for the tests. Test images in which the defocused regions are similar to the focused regions were not used for the test. A 3 × 3 neighborhood was used for η in equation (6) defined above. In these tests, the threshold T_L was set to 20. Those of ordinary skill in the art will appreciate that one of the most important parameters is the size of the structuring element (SE) of the morphological filter. For all tests this size was set to 31 × 31, except for the image shown in Figure 4A. Since the football 114a shown in Figure 4A is too small, the filter would remove the ball if a 31 × 31 SE were used. For a better subjective result, a 21 × 21 SE was adopted for Figure 4A only.
Figures 6-1 to 6-4 provide experimental results for four series of images at each step associated with the segmentation technique of the embodiments described herein. The first image of each series is a low DOF image. The second image of each series is the HOS map generated from the corresponding low DOF image. The third image of each series is the simplified HOS map, in which the morphological filter by reconstruction has been applied to the corresponding HOS map. The fourth image of each series shows the result of applying region merging to the corresponding simplified HOS map. The fifth image of each series is obtained by applying adaptive thresholding to the corresponding fourth image of each series. The fifth image therefore shows the extracted OOI obtained by applying the embodiments described here.
Figures 7-1 to 7-4 show four series of images in which the results of existing segmentation techniques can be compared with the results produced by applying the embodiments described herein. The first image of each series is a low DOF image. The second image of each series shows the result obtained from a multiscale approach based on high frequency wavelet coefficients and their statistics. The third image of each series shows the result obtained from applying a local variance scheme with segmentation based on a Markov random field (MRF) model. The fourth image of each series shows the result obtained from applying the scheme described here. As this illustration demonstrates, because the initial classification is carried out on blocks, the results obtained in the second image of each series are blocky even though a refinement algorithm for high resolution classification is incorporated. Since a smoothness constraint is adopted in the MRF model, the algorithm used for the third image of each series tends to cause adjoining non-OOI regions to become connected. The proposed scheme, shown in the fourth image of each series, produces more accurate results on the various images having a low DOF. For comparison purposes, the fifth image of each series provides a reference generated by manual segmentation.
The segmentation performance of the proposed algorithm can also be assessed using an objective criterion. A pixel-based quality measure, which has been proposed for evaluating the performance of video object segmentation algorithms, can be used to provide the objective criterion. The spatial distortion of an estimated OOI with respect to the reference OOI is defined as:
$$d(O^{est}, O^{ref}) = \frac{\sum_{(x, y)} O^{est}(x, y) \oplus O^{ref}(x, y)}{\sum_{(x, y)} O^{ref}(x, y)}, \qquad (22)$$

where O^est and O^ref are the binary masks of the estimate and of the reference, respectively, and ⊕ is the binary "XOR" operation.
Table 1 below provides the spatial distortion measures obtained from the following three results: 1) the variance of the wavelet coefficients in the high frequency bands, shown in the second image of each series of Figures 7-1 to 7-4; 2) the local variance scheme, shown in the third image of each series of Figures 7-1 to 7-4; and 3) the proposed scheme, represented by the fourth image of each series of Figures 7-1 to 7-4. The reference maps were obtained by manual segmentation, as shown in the corresponding fifth images of Figures 7-1 to 7-4. For the binary "XOR" operation, pixels on the OOI are set to 1, and all others to 0. As shown in Table 1, the scheme representing the embodiments described here has lower distortion measures than the other methods, and these measures match the subjective evaluation well.
Table 1

| Image series | Second image of Figs. 7-1 to 7-4 | Third image of Figs. 7-1 to 7-4 | Fourth image of Figs. 7-1 to 7-4 |
| 7-1 | 0.1277 | 0.1629 | 0.0354 |
| 7-2 | 0.2872 | 0.4359 | 0.1105 |
| 7-3 | 0.2138 | 0.1236 | 0.1568 |
| 7-4 | 0.3134 | 0.2266 | 0.1709 |
Figure 8 is a flowchart of the method operations for extracting an object of interest from an image in accordance with one embodiment of the invention. The method begins with operation 140, in which an image feature space is defined. The image feature space is based on frequency information, obtained by applying the higher order statistics (HOS) described above to each pixel of the image associated with the image feature space. The method then advances to operation 142, in which the image is filtered. According to one embodiment of the invention, a morphological filter by reconstruction is used to filter the image space. As mentioned above, this morphological filter simplifies the image space. That is, the morphological filter described with reference to Figures 4A-4E removes the holes and isolated patches associated with the focused or defocused regions. In one embodiment, an initial OOI is produced by identifying the interiors of homogeneous regions. The region with the highest value in the simplified HOS map can play the role of the seed region for this initial OOI. In one embodiment, this value is based on the frequency level of the homogeneous region.
The method of Figure 8 then advances to operation 144, in which region merging is performed, i.e., the boundary of the object of interest is determined. First, each flat zone is treated as a region regardless of its size, which means that even a one-pixel zone may become a region. It is then assumed that the region associated with the highest value v_h belongs to an initial OOI, and that regions with values from 0 to T_L belong to the initial OOI^C. For example, in Figure 4C, the simplified HOS map includes uncertain regions having values in (T_L, v_h), where v_h equals 255. Those uncertain regions are assigned to either the OOI or the OOI^C. Such assignments are performed iteratively by considering the boundary relationship between the uncertain regions and the current OOI, OOI_n (i.e., the OOI at the n-th iteration). In one embodiment, region merging is applied through the computation of the normalized overlapping boundary (nob) discussed above with reference to Figures 5A-5C. The method then advances to operation 148, in which the final size of the object of interest is defined. Here, as discussed above with reference to Figures 4E and 5C, adaptive thresholding can be used to determine the final size of the object of interest. That is, if the size associated with the object of interest is smaller than a defined percentage of the entire image size, the object of interest is enlarged until its size reaches the defined percentage. In one embodiment, the defined percentage is about 20% of the entire picture size.
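To tie the operations of Figure 8 together, the following sketch assembles the helpers sketched earlier (hos_map, simplify_hos, merge_regions, adaptive_threshold). The flat-zone labeling and the seed/uncertain split with T_L = 20 and a highest value of 255 are assumptions consistent with the embodiments described here, not a verbatim reproduction of the patented steps.

```python
import numpy as np
from scipy.ndimage import label

def extract_ooi(image: np.ndarray, t_low: int = 20, se: int = 31) -> np.ndarray:
    """Extract the object of interest from a low DOF grayscale image (Figure 8)."""
    simplified = simplify_hos(hos_map(image), se)          # operations 140 and 142
    q = np.rint(simplified).astype(np.int32)

    # label the flat zones: connected components of equal value, regardless of size
    labels = np.zeros(q.shape, dtype=np.int32)
    values = {}
    next_lab = 1
    for v in np.unique(q):
        comp, n = label(q == v)
        for c in range(1, n + 1):
            labels[comp == c] = next_lab
            values[next_lab] = int(v)
            next_lab += 1

    # seed OOI: highest-valued zones; uncertain: values strictly between T_L and 255
    seed = np.zeros(q.shape, dtype=bool)
    uncertain = []
    for lab, v in values.items():
        if v >= 255:
            seed |= labels == lab
        elif v > t_low:
            uncertain.append(lab)

    ooi = merge_regions(labels, seed, uncertain)           # operation 144: region merging
    if ooi.sum() <= 0.20 * q.size:                         # operation 148: final size decision
        ooi |= adaptive_threshold(labels, values, 0.20)
    return ooi
```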
Figure 9 is a simplified schematic diagram of an image capture device having circuitry configured to extract an object of interest associated with a low depth-of-field image, in accordance with one embodiment of the invention. The image capture device 150 includes a lens 152 capable of focusing on an object of interest. Through conversion block 164, the object of interest and the associated background information are converted into a digital image. The digital data are then manipulated in order to extract the object of interest. Here, a microprocessor 153 (e.g., an application-specific integrated circuit) is configured to extract the object of interest as described herein.
The microprocessor 153 of Figure 9 includes image extraction circuitry 154. The image extraction circuitry 154 is composed of image feature transformation circuitry 156, which is configured to generate the HOS map described above. Filtering circuitry 158 is configured to determine the boundary associated with an object within the depth of field. Merging circuitry 160 is configured to analyze the frequency information associated with the HOS map in order to combine the related homogeneous regions of the HOS map. The merging circuitry 160 may also include circuitry capable of performing the adaptive thresholding function described above. A storage medium 162 is provided for storing the extracted object of interest. Of course, the code performing the feature extraction function and the clustering/metadata generation function may be hard-coded onto a semiconductor chip. Those of ordinary skill in the art will appreciate that the image extraction circuitry 154 can include logic gates configured to provide the functionality described above. For example, a hardware description language (HDL) can be employed to synthesize firmware and the layout of the logic gates in order to provide the necessary functionality described herein.
The image capture device 150 may be any image capture device, for example a microscope, a telescope, a camera, a video camera, and so on. It should be appreciated that the image extraction circuitry 154 may be integrated into the image capture device 150 or configured as an add-on board. Similarly, the memory 162 may be included in the image capture device 150 or may be separate from it. Thus, the image from any microscope, telescope, or other low DOF imaging device can be manipulated so that an object of interest can be extracted. It should also be appreciated that the image capture device may communicate with a general purpose computer capable of extracting an object of interest as described herein.
Figure 10 is a simplified schematic diagram of an image search system in accordance with one embodiment of the invention. The image capture device 150 is configured to capture digital image data in block 164 through the lens 152. The captured digital image data can be processed by the image extraction component 166, which is configured to extract the object of interest of a low depth-of-field image. It should be appreciated that, in one embodiment of the invention, the image extraction component 166 may be a general purpose computer. That is, the image extraction component 166 extracts the object of interest according to the extraction scheme discussed herein. The image extraction component 166 is in communication with a content retrieval system 168. The content retrieval system 168 is in communication with a network 170. Thus, an image search can be carried out over a distributed network according to the extracted object of interest.
In summary, the embodiments described herein provide a method and system that assign the pixels of a low DOF image to one of two regions according to the higher order statistics of the pixels of the low DOF image. The low DOF image is transformed into an appropriate feature space, referred to herein as the HOS map. The HOS map is simplified by applying the morphological filter by reconstruction. After the morphological filter has been applied, the region merging technique is applied. Adaptive thresholding is then applied for the final selection of the size associated with the object of interest.
It should be appreciated that, by adopting the powerful morphological tool for simplification, the proposed scheme performs well even for focused smooth regions, as long as the boundaries of those smooth regions contain high frequency components (i.e., edges). However, if a focused smooth region is too large, the embodiments described herein may not be as effective as described above. This obstacle can be resolved if the algorithm is configured to incorporate some semantic or human knowledge. It will be apparent to those skilled in the art that the suggested algorithm, combined with the low DOF photographic technique, can be extended to video object segmentation, since extracting video objects from arbitrary video sequences remains highly challenging. In addition, the embodiments described herein can be applied to any suitable low depth-of-field image from which extraction of an object of interest is desired, for example in microscopy, photography, and so on.
With the above embodiments in mind, it should be understood that the invention may employ various computer-implemented operations involving data stored in computer systems. These operations include those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Furthermore, the manipulations performed are often referred to in terms such as producing, identifying, determining, or comparing.
The invention described above can be practiced with other computer system configurations, including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
The invention can also be embodied as computer-readable code on a computer-readable medium. The computer-readable medium is any data storage device that can store data which can thereafter be read by a computer system. The computer-readable medium also includes an electromagnetic carrier wave in which the computer code is embodied. Examples of the computer-readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tape, and other optical and non-optical data storage devices. The computer-readable medium can also be distributed over a network-coupled computer system so that the computer-readable code is stored and executed in a distributed fashion.
Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope of the appended claims and their equivalents. In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.

Claims (22)

1. A method for segmenting image data, comprising the steps of:
defining an image feature space according to frequency information;
filtering image data of the image feature space using morphological tools;
assigning a region of the filtered image feature space as an initial object of interest;
identifying a boundary of the initial object of interest of the filtered image feature space; and
determining a ratio between a size of the initial object of interest and a size of the image data;
wherein the method operation of assigning a region of the filtered image feature space as an initial object of interest comprises the steps of:
identifying regions of the image feature space associated with substantially constant frequency levels; and
assigning a value to each of the identified regions according to the substantially constant frequency level, wherein the region of the filtered image space associated with the initial object of interest is assigned a maximum value;
wherein the method operation of identifying a boundary of the initial object of interest of the filtered image feature space comprises the steps of:
calculating a normalized overlapped boundary, the normalized overlapped boundary representing a value indicating boundary pixels shared between the initial object of interest and a region adjoining the boundary of the initial object of interest;
and, if the value is greater than a threshold value, the method comprises the step of:
merging the region adjoining the boundary of the initial object of interest into the initial object of interest.
2. The method according to claim 1, wherein the method operation of defining an image feature space according to frequency information comprises the step of:
calculating higher order statistics for each pixel value associated with the image feature space.
3. The method according to claim 2, wherein the higher order statistics represent a fourth-order moment associated with each pixel value.
4. The method according to claim 1, wherein the method operation of filtering image data of the image feature space using morphological tools comprises the step of:
applying the morphological tools to the image data of the image feature space in a manner that preserves the boundary of the object of interest.
5. The method according to claim 1, wherein the method operation of filtering image data of the image feature space using morphological tools comprises the steps of:
removing dark patches associated with focused regions of the image feature space; and
removing bright patches associated with defocused regions of the image feature space.
6. A method of image segmentation, comprising the steps of:
generating a higher order statistics map from image data;
modifying the higher order statistics map;
determining a boundary associated with a focused region of the modified higher order statistics map;
determining a final segmentation of the focused region according to a ratio between a size of values associated with the focused region and a size of the image data;
identifying interiors of homogeneous regions of the modified higher order statistics map; and
assigning labels to the homogeneous regions;
wherein the method operation of determining a boundary associated with a focused region of the modified higher order statistics map comprises the steps of:
determining a value indicating an amount of boundary shared between the focused region and a region adjoining the boundary;
and, if the value is greater than a threshold value, the method comprises the step of:
merging the region adjoining the boundary with the focused region.
7. The method according to claim 6, wherein the method operation of generating a higher order statistics map from image data comprises the steps of:
proportionally scaling a value associated with each pixel; and
limiting a maximum of the value associated with each pixel.
8. The method according to claim 6, wherein the method operation of determining a final segmentation of the focused region according to a ratio between a size of values associated with the focused region and a size of the image data comprises the step of:
decreasing a threshold value until the ratio between the size of the values associated with the focused region and the size of the image data becomes greater than 20%.
9. The method according to claim 6, wherein the focused region is an object of interest.
10. The method according to claim 6, wherein the method operation of modifying the higher order statistics map comprises the steps of:
applying reconstruction by erosion to pixel values associated with the higher order statistics map; and
applying reconstruction by dilation to the pixel values, wherein applying reconstruction by dilation comprises the step of:
recovering contours of components associated with the higher order statistics map.
11. The method according to claim 6, wherein the method operation of determining a final segmentation of the focused region according to a ratio between a size of values associated with the focused region and a size of the image data comprises the steps of:
defining a threshold value;
determining a size of values associated with the focused region that are above the threshold value; and
decreasing the threshold value until the ratio between the size of the values associated with the focused region and the size of the image data is greater than 20%.
12. An image capture device, comprising:
a lens configured to focus on an object within a depth of field;
an image recording assembly configured to generate, from image information received through the lens, a digital image that includes the object within the depth of field;
the image recording assembly being capable of generating a higher order statistics map of the digital image;
and of segmenting the object within the depth of field from the digital image according to the higher order statistics map, including:
determining a boundary associated with a focused region of the higher order statistics map;
determining a final segmentation of the focused region according to a ratio between a size of values associated with the focused region and a size of the image data;
wherein determining the boundary associated with the focused region of the higher order statistics map includes:
determining a value indicating an amount of boundary shared between the focused region and a region adjoining the boundary; and
if the value is greater than a threshold value, merging the region adjoining the boundary with the focused region.
13. The image capture device according to claim 12, wherein the image recording assembly includes filtering circuitry configured to determine a boundary associated with the object within the depth of field.
14. The image capture device according to claim 12, wherein the higher order statistics map defines a feature space according to frequency information associated with the digital image.
15. The image capture device according to claim 12, wherein the image recording assembly includes feature transformation circuitry and merging circuitry, the feature transformation circuitry configured to generate the higher order statistics map, and the merging circuitry configured to analyze the frequency information so as to combine related homogeneous regions of the higher order statistics map.
16. The image capture device according to claim 12, wherein the image capture device is selected from the group consisting of a microscope, a telescope, a camera, and a video camera.
17. An image search system comprising the image capture device according to claim 12, the image search system further comprising:
an image extraction component in communication with the image capture device, the image extraction component configured to extract the object within the depth of field; and
an image retrieval system in communication with the image extraction component, the image retrieval system configured to receive data corresponding to the object within the depth of field, the image retrieval system further configured to identify a match between the received data and collected image data.
18. The image search system according to claim 17, wherein the image capture device is selected from the group consisting of a microscope, a telescope, a camera, and a video camera.
19. The image search system according to claim 17, wherein the image extraction component is a general purpose computer.
20. The image search system according to claim 17, wherein the image extraction component is integrated into the image capture device.
21. The image search system according to claim 17, wherein the image retrieval system comprises:
a database configured to store the collected image data; and
a database query system configured to identify the match between the received data and the collected image data by comparing a signature index associated with the received data with signature indices associated with the collected image data.
22. The image search system according to claim 17, wherein the object is an object of interest.
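By way of illustration only, the following sketch shows one way the adaptive thresholding and region merging recited in claims 1, 6, 8 and 11 above could be realized in Python with SciPy, continuing the earlier sketches. The starting threshold, the step size and the merge threshold of 0.5 are assumptions made for the sketch; only the stopping condition of the ratio exceeding 20% is taken from the claims.

    import numpy as np
    from scipy.ndimage import label, binary_dilation

    def adaptive_threshold(hos_simplified, start=255, step=5, ratio=0.20):
        """Decrease the threshold until the area above it exceeds the given
        fraction of the image (the 20% condition of claims 8 and 11)."""
        t = start
        focused = hos_simplified > t
        while focused.sum() / float(hos_simplified.size) <= ratio and t > 0:
            t -= step
            focused = hos_simplified > t
        return focused

    def merge_adjoining_regions(focused, regions, merge_thresh=0.5):
        """Merge a labelled region into the initial object of interest when
        its normalized overlapped boundary (the fraction of its boundary
        pixels touching the object) exceeds a threshold (claims 1 and 6)."""
        merged = focused.copy()
        for r in range(1, regions.max() + 1):
            region = regions == r
            boundary = binary_dilation(region) & ~region
            shared = float((boundary & focused).sum())
            if shared / (boundary.sum() + 1e-9) > merge_thresh:
                merged |= region
        return merged

    # Usage sketch: label the non-focused homogeneous regions, then merge
    # those that mostly border the initial object of interest.
    # focused = adaptive_threshold(simplified)
    # regions, _ = label(~focused)
    # final_mask = merge_adjoining_regions(focused, regions)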
CNB2003101024202A 2002-10-17 2003-10-17 Method and device for segmentation low depth image Expired - Fee Related CN100423021C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US41930302P 2002-10-17 2002-10-17
US60/419303 2002-10-17
US60/451384 2003-02-28
US10/412128 2003-04-11

Publications (2)

Publication Number Publication Date
CN1497494A CN1497494A (en) 2004-05-19
CN100423021C true CN100423021C (en) 2008-10-01

Family

ID=34256676

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2003101024202A Expired - Fee Related CN100423021C (en) 2002-10-17 2003-10-17 Method and device for segmentation low depth image

Country Status (1)

Country Link
CN (1) CN100423021C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107004274A (en) * 2014-11-26 2017-08-01 汤姆逊许可公司 The method and apparatus that estimation does not focus on the depth of all-optical data

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI117265B (en) * 2004-12-29 2006-08-15 Nokia Corp An electronic device and a method for processing image data in an electronic device
CN100446544C (en) * 2005-08-26 2008-12-24 电子科技大学 Method for extraction method of video object external boundary
US9363499B2 (en) * 2013-11-15 2016-06-07 Htc Corporation Method, electronic device and medium for adjusting depth values
EP3114432B1 (en) * 2014-03-05 2017-11-22 Sick IVP AB Image sensing device and measuring system for providing image data and information on 3d-characteristics of an object
US20150348260A1 (en) * 2014-05-29 2015-12-03 Siemens Aktiengesellschaft System and Method for Mapping Patient Data from One Physiological State to Another Physiological State
CN106651870B (en) * 2016-11-17 2020-03-24 山东大学 Segmentation method of image out-of-focus fuzzy region in multi-view three-dimensional reconstruction
CN110192079A (en) * 2017-01-20 2019-08-30 英泰克普拉斯有限公司 3 d shape measuring apparatus and measurement method
CN108805886B (en) * 2018-05-30 2021-09-03 中北大学 Persistent clustering segmentation method for multi-fusion physical signatures
CN110009592A (en) * 2018-09-19 2019-07-12 永康市巴九灵科技有限公司 Fall grey degree real-time monitoring system
US10931853B2 (en) * 2018-10-18 2021-02-23 Sony Corporation Enhanced color reproduction for upscaling
CN111474893A (en) * 2019-11-23 2020-07-31 田华 Intelligent pixel array control system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001136369A (en) * 1999-09-29 2001-05-18 Seiko Epson Corp Method and device for dividing picture for preventing overlap transmission and recording medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001136369A (en) * 1999-09-29 2001-05-18 Seiko Epson Corp Method and device for dividing picture for preventing overlap transmission and recording medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107004274A (en) * 2014-11-26 2017-08-01 汤姆逊许可公司 The method and apparatus that estimation does not focus on the depth of all-optical data
CN107004274B (en) * 2014-11-26 2021-08-10 交互数字Ce专利控股公司 Method and apparatus for estimating depth of unfocused plenoptic data

Also Published As

Publication number Publication date
CN1497494A (en) 2004-05-19

Similar Documents

Publication Publication Date Title
US7302096B2 (en) Method and apparatus for low depth of field image segmentation
CN110956094B (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
US7974464B2 (en) Method of directed pattern enhancement for flexible recognition
US8045783B2 (en) Method for moving cell detection from temporal image sequence model estimation
JP4234378B2 (en) How to detect material areas in an image
EP1700269B1 (en) Detection of sky in digital color images
CN111738318B (en) Super-large image classification method based on graph neural network
CN100423021C (en) Method and device for segmentation low depth image
CN110569747A (en) method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
CN101971190A (en) Real-time body segmentation system
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
JP2008217706A (en) Labeling device, labeling method and program
CN115205692B (en) Typical feature intelligent identification and extraction method based on generation of countermeasure network
CN112287906B (en) Template matching tracking method and system based on depth feature fusion
JP6448212B2 (en) Recognition device and recognition method
CN113505670A (en) Remote sensing image weak supervision building extraction method based on multi-scale CAM and super-pixels
CN111428730B (en) Weak supervision fine-grained object classification method
CN114140445A (en) Breast cancer pathological image identification method based on key attention area extraction
CN106503728A (en) A kind of image-recognizing method and device
CN116071752A (en) Intelligent digital meter reading identification method and system
JP4285644B2 (en) Object identification method, apparatus and program
CN113052311A (en) Feature extraction network with layer jump structure and method for generating features and descriptors
JP2010140201A (en) Image processing apparatus, image processing method, image processing program
KR102131243B1 (en) Plant Area Extraction System and Method Based on Deep Running and Connectivity Graphs
Shyu et al. Automatic object extraction from full differential morphological profile in urban imagery for efficient object indexing and retrievals

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081001

Termination date: 20141017

EXPY Termination of patent right or utility model