WO2001043071A2 - Image processing - Google Patents

Image processing

Info

Publication number
WO2001043071A2
WO2001043071A2 (PCT/GB2000/004707)
Authority
WO
WIPO (PCT)
Prior art keywords
image
values
intensity
depth
value
Application number
PCT/GB2000/004707
Other languages
French (fr)
Other versions
WO2001043071A3 (en)
Inventor
Li-Qun Xu
Ebroul Izquierdo
Original Assignee
British Telecommunications Public Limited Company
Application filed by British Telecommunications Public Limited Company
Priority to US10/129,788 (US7050646B2)
Priority to GB0212972A (GB2372661B)
Priority to AU21921/01A (AU2192101A)
Priority to CA002394591A (CA2394591C)
Publication of WO2001043071A2
Publication of WO2001043071A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Nuclear Medicine (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

One aspect of the invention concerns a method and system for processing an image recorded by an imaging device. The method comprises the steps of: a) providing an image comprising an array of adjacent elements each corresponding to a respective part of the image and having a respective intensity value associated therewith (400); b) processing the intensity values to determine an intensity contrast value for each respective element according to differences in intensity values between respective adjacent elements (402); c) determining a depth value for each element corresponding to the distance between the imaging device and at least part of an image forming object represented in the image by the respective element (404); d) processing the depth values to determine a depth contrast value for each respective element according to differences in depth values between respective adjacent elements (406); and, e) processing said intensity contrast values and said depth contrast values to identify at least one area of the image corresponding to one or more respective objects in the image being processed (408). Step (408) can be a non-linear diffusion process whereby variations in grey level values are diffused in regions corresponding to objects or background and enhanced in regions corresponding to object boundaries.

Description

IMAGE PROCESSING
This invention relates to image processing and in particular to a system and method for processing digital images to detect and extract physical entities or objects represented in the image.
Image segmentation is of fundamental importance to many digital image-processing applications. The process of image segmentation refers to the grouping together of parts of an image that have similar image characteristics, and this is often the first process in image processing tasks. For instance, in the field of video coding it is often desirable to decompose an image into an assembly of its constituent object components prior to coding. This pre-processing step of image segmentation then allows individual objects to be coded separately. Hence, significant data compression can be achieved in video sequences since slow-moving background can be transmitted less frequently than faster-moving objects. Image segmentation is also important in the field of image enhancement, particularly in medical imaging such as radiography. Image segmentation can be used to enhance detail contained in an image in order to improve the usefulness of the image. For instance, filtering methods based on segmentation have been developed for removing noise and random variations in intensity and contrast from captured digital images to enhance image detail and assist human visualisation and perception of the image contents.
Other fields where image segmentation is important include multi-media applications such as video indexing and post production content-based image retrieval and interpretation, that is to say video sequence retrieval based on user supplied content parameters and machine recognition and interpretation of image contents based on such parameters.
Fundamental to image segmentation is the detection of homogeneous regions and/or the boundaries of such regions which represent objects in that image. Homogeneity may be detected in terms of intensity or texture, that is grey level values, motion (for video sequences), disparity (for stereoscopic images), colour, and/or focus for example. Many approaches to image segmentation have been attempted including texture-based, intensity-based, motion-based and focus-based segmentation. Known approaches require significant computational resources and often provide unsatisfactory results. One approach that uses intensity or grey level values for object segmentation is thresholding. The concept of image segmentation based on thresholding is described in the paper "An Amplitude Segmentation Method Based on the Distribution Function of an Image", Computer Vision, Graphics, and Image Processing, 29, 47-59, 1985. In the thresholding method intensity values are determined for each pixel or picture element in a digital image and on the basis of these values a threshold value is determined that distinguishes each pixel of an object in the image from pixels representing background detail. In practice, the threshold intensity value is determined dynamically for each image according to the statistical distribution of intensity values, that is to say, the value is based on a histogram analysis of all the intensity values for a particular image. Peaks in the histogram distribution generally represent intensity values predominantly associated with a particular object. If two objects are present in an image there will be two peaks. In these circumstances the intersection or overlap between the two peaks is taken as the threshold value. This approach to image segmentation is relatively straightforward but can be computationally intensive, particularly when complex images are presented, for example, images comprising a number of objects or complex backgrounds, or when the image is heavily "textured", that is to say, the image comprises a number of separate regions within an object that have different intensity values. When textured images are processed using threshold-based methods "over-segmentation" can occur, that is, regions within an object are themselves recognised as separate objects within the image being processed.
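The patent does not reproduce the cited paper's exact procedure; the following is a minimal sketch, in NumPy, of the valley-between-two-peaks rule just described, assuming 8-bit grey levels. The smoothing width and the fallback for unimodal histograms are illustrative assumptions, not taken from the source.

```python
import numpy as np

def bimodal_threshold(image, bins=256):
    """Pick the grey level at the valley between the two dominant
    histogram peaks (sketch of the thresholding idea described above)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # suppress noise
    # Local maxima of the smoothed histogram are candidate object/background peaks.
    peaks = [i for i in range(1, bins - 1)
             if smooth[i - 1] < smooth[i] >= smooth[i + 1]]
    if len(peaks) < 2:
        return int(image.mean())          # fallback for unimodal images
    p1, p2 = sorted(sorted(peaks, key=lambda i: smooth[i])[-2:])  # two tallest
    # The minimum between the two peaks is their intersection/overlap point.
    return p1 + int(np.argmin(smooth[p1:p2 + 1]))

# Usage: object_mask = grey_image > bimodal_threshold(grey_image)
```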
The problem of over-segmentation can be partially overcome if the image is simplified prior to thresholding. Image simplification involves the removal of low-order intensity value differences between adjacent pixels within an object boundary while the intensity value differences are maintained at the object boundaries. Image simplification is often achieved in digital image processing by using so-called non-linear diffusion methods. The concept of non-linear diffusion for image processing is described in the published paper "Scale Space and Edge Detection Using Anisotropic Diffusion", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 12, No. 7, pp. 629-639, July 1990. In this method pixel intensities are altered in a manner analogous to diffusion of physical matter to provide regions of homogeneous intensity within object boundaries while preventing diffusion at the object boundaries, thereby preserving intensity contrast at the boundaries. It has been found, however, that methods of image simplification based on known non-linear diffusion algorithms result in over-segmentation. According to a first aspect of the invention there is provided a method of processing an image recorded by an imaging device; said method comprising the steps of:- a) providing an image comprising an array of adjacent elements each corresponding to a respective part of the image and having a respective intensity value associated therewith; b) processing said intensity values to determine an intensity contrast value for each respective element according to differences in intensity values between respective adjacent elements; c) providing a depth value for each element corresponding to the distance between the imaging device and at least part of an image forming object represented in the image by the respective element; d) processing said depth values to determine a depth contrast value for each respective element according to differences in depth values between respective adjacent elements; and, e) processing said intensity contrast values and said depth contrast values to identify at least one area of the image corresponding to one or more respective objects in the image being processed.
Thus, by processing depth contrast values with the intensity contrast values, data relating to the relief of an image, that is the depth of image forming objects (or more precisely the distance travelled by the reflected incident radiation) in the image, can be used to improve object boundary detection and thereby improve segmentation of the image into its constituent object parts. By using two parameters instead of one the accuracy of determining object boundaries can be significantly improved. Preferably, said depth values are determined according to the spacing between corresponding points on a stereoscopic image pair. The spacing between corresponding points can be readily converted into depth values based on known imaging system geometry. Hence, additional image processing can be minimised.
In a preferred embodiment, the spacing between said points is determined by matching said corresponding points and estimating a vector value for the relative spacing and direction of said points. In this way, respective vector values can be used to represent the respective depth values associated with the respective elements.
Conveniently, said area is determined by identifying an outline of said respective object or objects in the image. This readily provides for object identification. Preferably, step e) comprises the step of altering the intensity values of each respective element in accordance with the respective intensity contrast value and the respective depth contrast value of the element. This can increase intensity differences between adjacent elements at the respective object boundaries. In preferred embodiments, the step of altering the intensity values comprises the step of modifying the respective intensity contrast values according to the respective depth contrast values, and altering the intensity values of the respective elements towards an average intensity value determined by the intensity values of surrounding elements if the respective modified intensity contrast value of the element is below a threshold value. This improves image simplification by reducing differences in intensity values between elements corresponding to positions within an object.
Conveniently, the intensity contrast values are modified such that elements having a higher than average depth contrast value have their respective intensity values altered less than elements having a lower than average depth contrast value. This increases the difference in the intensity values between adjacent elements corresponding to positions on opposing sides of object borders.
Preferably, step e) comprises a non-linear diffusion process for altering element intensity values in accordance with respective intensity contrast values modified in accordance with respective depth contrast values. In this way, it is possible to improve known non-linear diffusion methods of image simplification by modifying the diffusion process in accordance with further object identifying data, that is to say using the depth data associated with each element.
In a preferred embodiment, the method further comprises the step of delineating said object or objects from the image. This allows the objects to be stored, retrieved, coded, or processed separately, for example.
Preferably, the delineating step comprises the steps of:- determining a statistical distribution function of the altered intensity values of the respective elements; determining a threshold value or range of values to include all the intensity values of the respective elements of at least one identified area in the image; and, selecting elements having modified intensity values within the threshold range or above or below said threshold value; and delineating image data relating to said selected elements from image data relating to the remaining elements. In this way it is possible to implement relatively simple thresholding methods to extract the object or objects from the processed image.
According to a second aspect of the invention there is provided a method of processing an image in accordance with a non-linear diffusion process; said method comprising the steps of:- i) providing an image comprising an array of adjacent elements each corresponding to a respective part of the image and having a respective intensity value associated therewith; ii) processing said intensity values to determine an intensity contrast value for each respective element according to differences in intensity values between respective adjacent elements; iii) identifying from said array of elements object defining elements corresponding to points on respective objects in the image; iv) altering said element intensity values in accordance with said respective intensity contrast values, whereby said elements are altered to a lesser or greater extent in dependence on whether said respective element is an object element.
In one embodiment, step iii) comprises the step of determining a depth value associated with a disparity field for each element in the image and identifying said object elements from said depth values. In another embodiment, step iii) comprises the step of determining a motion value associated with a motion vector in a video sequence for each element in the image and identifying said object elements from said motion values. Accordingly, the non-linear diffusion process can be modified in accordance with object positions determined by motion recorded in a video sequence. According to a third aspect of the present invention there is provided an image processing system for processing an image recorded by an imaging device; said system comprising:- a) a data receiver for receiving data relating to an image comprising an array of adjacent elements each corresponding to a respective part of the image and having a respective intensity value associated therewith; an intensity value processor configured to determine an intensity contrast value for each respective element according to differences in intensity values between respective adjacent elements; a depth value processor configured to determine a depth value for each element corresponding to the distance between the imaging device and at least part of an image forming object represented in the image by the respective element; a depth contrast value processor configured to determine a depth contrast value for each respective element according to differences in depth values between respective adjacent elements; and, an object segment processor configured to process intensity contrast values and said depth contrast values to identify at least one area of the image corresponding to one or more respective objects in the image being processed. The invention will now be described, by way of example only, with reference to the accompanying drawings; in which:-
Figure 1 is a schematic block diagram of a system for processing digital images;
Figure 2a shows a pair of stereoscopic images of a scene viewed from two different perspectives with a stereoscopic imaging device;
Figure 2b shows the images of Figure 2a in side-by-side relation;
Figure 3 is a schematic block diagram of an image processor for processing digital images in the system of Figure 1;
Figure 4 is a flow chart of a method for processing digital images;
Figure 5a is a pre-processed image of a scene comprising an object to be segmented;
Figure 5b is a processed image of the image of Figure 5a processed in accordance with a known non-linear diffusion process;
Figure 5c is a processed image of the image of Figure 5a showing disparity or depth vectors for the image of Figure 5a obtained from a stereoscopic image pair;
Figure 5d is a processed image of the image of Figure 5a processed in accordance with a modified non-linear diffusion process utilising the disparity data represented in Figure 5c; and,
Figure 5e shows an object mask extracted from the processed image of Figure 5a.
With reference to Figure 1, in one arrangement of the present invention an image processor 102 is arranged to receive digital images from a memory 104 storing two dimensional images of three dimensional scenes recorded by means of an optical-electronic imaging device 106. The imaging device 106 receives electromagnetic radiation from all areas of the scene being recorded including one or more distinct image forming objects 108 within the imaging device's field of view 110. The imaging device can be any device capable of forming optical-electronic images, including for example an array of light sensitive photo-diodes or the like connected to respective charge-coupled devices for forming a digital image of picture elements or pixels capable of being stored in electronic digital memory 104. The pixels each have a grey level value associated with them representative of the brightness or intensity of the respective part of the scene they represent. Data relating to the colour associated with each pixel may also be stored in the memory 104.
In the present arrangement the imaging device comprises two separate optical-electronic imaging systems for recording stereoscopic image pairs. Figure 2a shows a pair of images, 200 to the left of the drawing and 202 to the right, that define a stereoscopic image pair corresponding to two different perspective projections in slightly different planes of the same scene. The image processor 102 is programmed in a known manner to process stereoscopic image pairs of the type shown to obtain data relating to the depth of the or each object and the background in a scene, or more precisely, the distance travelled by the incident electromagnetic radiation reflected by the or each object or background to the respective light sensitive pixels of the imaging device. The image processor is programmed to determine disparity vectors in much the same way that conventional image processors are programmed to determine motion vectors for object segmentation prior to video sequence coding. For instance, depth is estimated from the stereoscopic images by estimating a disparity vector for each pair of corresponding points in the image pair. In Figure 2a, a point 204 on an object in a scene has a position defined by the spatial co-ordinates $(x, y, z)$. This point is projected on the left image at a point 206 having the local spatial co-ordinates $(x, y)_l$ and likewise on the right image at a point 208 having the spatial co-ordinates $(x, y)_r$. The left and right images have the same co-ordinate reference frame and so the distance and direction between the two corresponding points 206 and 208, known as the disparity vector, can be readily determined.
Figure 2b shows the two images 200 and 202 in side-by-side relation. The disparity vector 210 for corresponding points 206 and 208 is shown on the right-hand image 202. The vector extends between the projected point 206 of image 200 and point 208 on image 202.
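The patent leaves the matching step to known techniques; the following is a minimal brute-force block-matching sketch in the spirit of conventional motion estimation. The block size, search range, vertical tolerance, and sum-of-squared-differences cost are illustrative assumptions rather than values given in the source.

```python
import numpy as np

def disparity_vectors(left, right, block=8, search=16):
    """For each block of the left image, find the best-matching block in
    the right image and record the (dx, dy) offset as its disparity vector."""
    h, w = left.shape
    field = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = left[by:by + block, bx:bx + block].astype(float)
            best_cost, best = np.inf, (0, 0)
            for dy in range(-2, 3):                    # near-horizontal epipolar search
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = right[y:y + block, x:x + block].astype(float)
                        cost = np.sum((ref - cand) ** 2)   # sum of squared differences
                        if cost < best_cost:
                            best_cost, best = cost, (dx, dy)
            field[by // block, bx // block] = best
    return field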
It is possible to determine the distance of a point in an image from the disparity vector for that point based on knowledge of the imaging system geometry. The estimation of depth in an image using stereoscopic imaging is described in detail in the paper "Depth Based Segmentation", IEEE Transactions on Circuits and Systems for Video Technology, 7(1), February 1997, pp. 237-239.
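For the common special case of a rectified, parallel-axis stereo rig — an assumption, since the patent only appeals to knowledge of the imaging system geometry — the conversion from disparity to depth reduces to the standard relation:

$$Z = \frac{f\,B}{d}$$

where $f$ is the focal length (in pixels), $B$ the baseline between the two cameras, and $d$ the horizontal disparity of the matched points; nearer objects therefore produce larger disparities.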
In the arrangement of Figure 3, the image processor 102 comprises a data receiving interface 302 for receiving data defining stereoscopic image pairs of a scene or sequence of scenes from the memory 104. The data-receiving interface is connected to a first processor 304 which is programmed to determine an intensity contrast value for each of the pixels in one or both stereoscopic images. The intensity contrast value is the intensity or grey level gradient at the respective pixel determined by the local variation in intensity in the adjacent pixels. The receiving interface is also connected to a second processor 306 which includes a first module 308 programmed to determine the disparity vector associated with each pixel and a second module 310 programmed to determine a disparity or depth contrast value for each pixel. The disparity or depth contrast value is the disparity or depth value gradient at the respective pixel determined by the local variation in depth values associated with the adjacent pixels. The first 304 and second 306 processors are connected to a third processor 312 which is programmed to process the image in accordance with a non-linear diffusion process based on the intensity contrast and depth contrast values determined by the respective first and second processors. A fourth processor 314 is connected to the third processor 312 for processing the image data simplified by the processor 312 to delineate and extract groups of neighbouring pixels representing physically meaningful entities or objects contained within the image being processed.
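Both contrast values are local gradient magnitudes, one taken over grey levels and one over disparity or depth values. A minimal sketch using central differences follows; the discrete gradient operator is an assumption, as the patent does not fix one.

```python
import numpy as np

def contrast_map(values):
    """Local gradient magnitude of a 2-D array; serves as the intensity
    contrast (grey-level gradient) or the depth contrast (disparity-value
    gradient) described above."""
    gy, gx = np.gradient(values.astype(float))  # central differences
    return np.hypot(gx, gy)

# intensity_contrast = contrast_map(grey_image)
# depth_contrast     = contrast_map(disparity_magnitude)
```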
The image processor of Figure 3 is programmed to segment an image by first simplifying the image and then extracting objects from the image by histogram based threshold analysis and extraction. An example of an image segmentation method will now be described with reference to the flowchart of Figure 4.
Data defining a pair of stereoscopic images of a scene or a sequence of image pairs constituting a video sequence are read from memory 104 by the interface 302 of the image processor 102 in step 400. The image data is stored in the memory 104 as a set of grey level values, one for each pixel. In step 402 the grey level values are processed by the processor 304 to determine the local variation in intensity in the region of each respective pixel to determine a respective contrast value for each of the pixels. Subsequently or simultaneously, image data defining an image pair is processed by the processor 306, first in step 404 by processor 308 to determine respective disparity vectors 210, and second in step 406 to determine respective depth contrast values based on the local variation in disparity vector values in the region of each respective pixel. Step 404 can be based on the method disclosed in the paper "Depth Based Segmentation", IEEE Transactions on Circuits and Systems for Video Technology, 7(1), February 1997, pp. 237-239, the contents of which are incorporated herein by reference. The image is simplified in step 408 by processor 312 according to a data-dependent non-linear diffusion process. Step 408 involves altering the respective pixel intensity values by modifying the respective intensity contrast values according to the corresponding depth contrast values determined in steps 402 and 406 respectively. The intensity values are altered towards an average intensity value determined by the intensity values of the respective surrounding pixels if the modified respective contrast value for the pixel is below a certain value. In this regard, the intensity contrast values are modified such that pixels having a higher than average depth contrast value have their respective intensity values altered less than pixels having a lower than average depth contrast value. Since step 408 is analogous to a physical diffusion process the step is iterative and repeats until a pre-determined equilibrium is achieved. The process of step 408 ultimately provides an image where the intensity values tend to an equilibrium value within the region corresponding to an object within the image, that is to say the or each object is represented by a separate homogeneous region of intensity. The diffusion process is considerably reduced in regions corresponding to object boundaries so that there is significant contrast in intensity between objects, and between objects and background, within an image of a scene. An example of the process of step 408 is described in greater detail in the example below.
In step 410 the processed data of the simplified image is processed by the processor 314 to determine a grey level threshold value for image segmentation. In step 412 one or more objects are extracted from the image according to the modified intensity values of the respective pixels. Steps 410 and 412 may be implemented in accordance with the histogram-based segmentation method described in the paper "An Amplitude Segmentation Method Based on the Distribution Function of an Image", Computer Vision, Graphics, and Image Processing, 29, 47-59, 1985 mentioned above.
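Taken together, steps 400 to 412 amount to the short pipeline sketched below. This is a hypothetical sketch only: the helper names are illustrative stand-ins for the code sketches given alongside the individual steps in this description, and `load_stereo_pair` and `estimate_dense_disparity` are assumed placeholders for reading the stereo data from memory 104 and producing a per-pixel disparity field.

```python
import numpy as np

# Hypothetical end-to-end sketch of the Figure 4 flowchart; helper names
# are illustrative, not from the patent.
left, right = load_stereo_pair()                      # step 400: read image pair
intensity_contrast = contrast_map(left)               # step 402
disp = estimate_dense_disparity(left, right)          # step 404: (h, w, 2) field
depth_contrast = contrast_map(np.hypot(disp[..., 0], disp[..., 1]))  # step 406
simplified = disparity_driven_diffusion(left, disp)   # step 408: modified diffusion
T = bimodal_threshold(simplified)                     # step 410: histogram threshold
object_mask = simplified > T                          # step 412: extract object
```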
In the method described with reference to Figure 4, an image containing one or more structurally meaningful entities or objects is first simplified, that is to say the image is processed to remove inconsequential detail from the image, and then segmented into regions containing respective entities or objects. In the example described, image simplification is based on a modified non-linear diffusion process involving grey level intensity values of respective picture elements or pixels comprising the image. The process of step 408 will now be described with reference to the following mathematical example.
Example
Mathematically the process of diffusion can be described by the following partial differential equation, known as the diffusion equation:

$$I_t = \operatorname{div}(\tau \cdot \nabla I) \qquad (1)$$

Equation (1) embodies two important properties: first, the equilibration property stated by Fick's law, $\phi = -\tau \cdot \nabla I$, where $\nabla I$ is the concentration gradient, $\phi$ is the flux, and $\tau$ is the diffusion tensor; and second, the continuity property given by $I_t = -\operatorname{div}(\phi)$. Thus the rate of change of the concentration $I$ is equal to the negative divergence of the flux. In the context of the present invention the concentration $I$, or $I(x, y, t)$, is identified as the intensity (grey level value) at any spatial sampling position $(x, y)$ of the evolved image at a time $t$.

If the diffusion tensor $\tau$ is constant over the whole image, then Equation (1) describes a linear diffusion model,

$$I_t = c \nabla^2 I \qquad (2)$$

where $c$ is the diffusion constant and $\nabla^2 I$ the Laplacian of the image intensity.

If the diffusion tensor $\tau$ in Equation (1) is defined as a function of the local energy variation, that is the local image intensity (or grey level value) gradient at an image position $(x, y)$, $\tau = f(x, y, t)$, a diffusivity function, then Equation (1) leads to

$$I_t = \nabla \cdot [f(x, y, t)\,\nabla I] = \operatorname{div}(f(x, y, t)\,\nabla I) \qquad (3)$$
Equation (3) defines a non-linear diffusion process in which local averaging of grey level values is inhibited in regions of object boundaries and diffusion velocity is controlled by the local intensity (or grey level value) gradient. Local averaging is the process of averaging the grey level values of adjacent pixels and replacing the current grey level value of a pixel with this average value. If the diffusivity function $f(\cdot)$ is chosen as a continuously decreasing function of the image gradient, the diffusion process approximates to a constant solution, or equilibrium, representing a simplified image with sharp boundaries. The amount of diffusion in each pixel or image point is modulated by a function of the image gradient at that point. Accordingly, image regions of high intensity contrast undergo less diffusion, whereas uniform regions are diffused considerably.
Equation (3) may be combined with a rapidly decreasing diffusivity function:

$$f(\|\nabla I\|) = e^{-\|\nabla I\|^{2}/(2K^{2})} \qquad (4)$$

and this diffusivity function leads to a flux function $\phi$ of the form:

$$\phi = \nabla I \, e^{-\|\nabla I\|^{2}/(2K^{2})} \qquad (5)$$

where $K$ is a threshold value. Thus the derivative of Equation (5) is positive for $\|\nabla I\| < K$ and negative for $\|\nabla I\| > K$. Consequently the diffusion process behaves in a forward parabolic manner for $\|\nabla I\| < K$, while it behaves in a backward parabolic manner for $\|\nabla I\| > K$. That is,
Equation (5) presents a contrasting behaviour according to the magnitude of the image intensity gradient. It will sharpen edges with a local gradient greater than $K$, while smoothing edges with gradient magnitude lower than $K$. The value of $K$ can be determined experimentally. Figures 5a and 5b show respective pre- and post-processed images where the image has been processed using the above-defined non-linear diffusion mathematical model.
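A minimal explicit time-stepping sketch of the diffusion of Equations (3)-(5), as reconstructed above, follows. The values of $K$, the step size, and the iteration count are illustrative, and np.roll's wrap-around border handling is a simplification of proper boundary treatment.

```python
import numpy as np

def perona_malik(image, K=10.0, dt=0.2, steps=50):
    """Iterate Equation (3) with the diffusivity of Equation (4) until
    an (approximate) equilibrium is reached."""
    I = image.astype(float)
    f = lambda d: np.exp(-(d / K) ** 2 / 2.0)   # diffusivity, Equation (4)
    for _ in range(steps):
        # Grey-level differences to the four neighbours.
        n = np.roll(I, -1, axis=0) - I
        s = np.roll(I, 1, axis=0) - I
        e = np.roll(I, -1, axis=1) - I
        w = np.roll(I, 1, axis=1) - I
        # Gradients below K are averaged away; those above K are preserved.
        I = I + dt * (f(np.abs(n)) * n + f(np.abs(s)) * s
                      + f(np.abs(e)) * e + f(np.abs(w)) * w)
    return I
```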
The above model is improved by using the disparity values associated with the respective pixels since these values vary considerably at object borders. In addition, the accuracy of disparity or depth estimation can be substantially increased at the object borders given the known object outline from the intensity contrast values.
In one example of the present invention the disparity values are used to control the diffusion when non-linear diffusion is applied. Figure 5c shows the distribution of disparity values for a stereoscopic image pair corresponding to the image of Figure 5a. In this representation only the horizontal component of the respective disparity vectors is shown. The magnitude of the vector is represented by grey values. As shown, the approximate positions of the object boundaries coincide with the image regions where the disparity variation is high. Thus, by analysing the local variation of the disparity vectors it is possible to detect the position of the respective object borders.
The degree of smoothness $\varsigma(z)$ of the disparity vectors at any sampling position $z = (x, y)$, 500 in Figure 5c, is obtained by measuring the statistical variance of the disparity vectors inside a small observation window 502 centred at position 500. The size of the window 502 is for example 8×8 pixels. The smoothness can be expressed as:

$$\varsigma(z) = \sigma_x + \sigma_y \qquad (6)$$

where $\sigma_x$ and $\sigma_y$ are, respectively, the variances of the horizontal and vertical components of the disparity vectors inside the window 502. The diffusivity $f(\cdot)$ in Equation (4) is now defined as a function of a $\varsigma$-weighted image gradient $\|\nabla I\|_\varsigma$. That is, at each sampling position $z$ the magnitude of the image gradient is weighted by its local disparity variance $\varsigma(z)$. So if $\varsigma_{\max}$ is the maximum variance of the considered disparity field and $g : [0, \varsigma_{\max}] \to [0, 1]$ is any increasing control function satisfying the two conditions $g(0) = 0$ and $g(\varsigma_{\max}) = 1$, then:

$$\|\nabla I(z)\|_\varsigma = g(\varsigma(z))\,\|\nabla I(z)\| \qquad (7)$$
There are several choices for the control function $g$. For example, a suitable family of functions is given by:

$$g_C(\varsigma) = \min\!\left(1,\; \frac{\varsigma}{C\,\varsigma_{\max}}\right) \qquad (8)$$

where $C \in (0, 1)$ is a threshold modulating the influence of $\varsigma$ in the diffusion process. Applying the parabolic diffusion Equation (3) with diffusivity function $f(\|\nabla I(x, y, t)\|_\varsigma)$, an iterative disparity-driven or depth-driven diffusion process model is defined.
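Combining Equations (6)-(8) with the diffusion of Equation (3) gives the sketch below. It assumes a dense per-pixel disparity field `disp` of shape (h, w, 2) — a coarse block-matching field such as the earlier sketch produces would first need upsampling — and all constants are illustrative, not values from the patent.

```python
import numpy as np

def disparity_smoothness(disp, win=8):
    """Equation (6): variance of the horizontal plus vertical disparity
    components in a win x win window around each pixel (plain-loop sketch)."""
    h, w = disp.shape[:2]
    sm = np.zeros((h, w))
    r = win // 2
    for y in range(h):
        for x in range(w):
            patch = disp[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            sm[y, x] = patch[..., 0].var() + patch[..., 1].var()
    return sm

def disparity_driven_diffusion(image, disp, K=10.0, C=0.5, dt=0.2, steps=50):
    """Diffusion of Equation (3) driven by the weighted gradient of
    Equation (7); g is the ramp of Equation (8)."""
    I = image.astype(float)
    sm = disparity_smoothness(disp)
    g = np.clip(sm / (C * max(sm.max(), 1e-12)), 0.0, 1.0)   # Equation (8)
    f = lambda d: np.exp(-((g * np.abs(d)) / K) ** 2 / 2.0)  # Equations (7) + (4)
    for _ in range(steps):
        n = np.roll(I, -1, axis=0) - I
        s = np.roll(I, 1, axis=0) - I
        e = np.roll(I, -1, axis=1) - I
        w = np.roll(I, 1, axis=1) - I
        # Diffusion is suppressed where high disparity variance (object
        # borders) makes the weighted gradient large, and strong elsewhere.
        I = I + dt * (f(n) * n + f(s) * s + f(e) * e + f(w) * w)
    return I
```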
Figure 5d shows a simplified version of the image of Figure 5a, obtained when disparity-controlled diffusion is applied according to the above mathematical model.
It can be seen from Figure 5d that the above-described disparity-driven nonlinear diffusion model is particularly appropriate for both object segmentation and pattern recognition image processing. Masks of complete physical objects can easily be extracted from the processed images using known histogram-based thresholding methods. An example of an extracted mask of the image of Figure 5a is shown in Figure 5e.
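The description leaves the mask-extraction step to known histogram-based thresholding methods without naming one; Otsu's method is a common choice and is sketched below purely as an example. The choice of Otsu, and all names and parameters here, are assumptions rather than the patent's prescription.

```python
import numpy as np

def otsu_mask(simplified, bins=256):
    """Binarise a diffusion-simplified image by maximising between-class variance."""
    hist, edges = np.histogram(simplified, bins=bins)
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_threshold, best_between = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()       # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0   # class means
        m1 = (p[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (m0 - m1) ** 2      # between-class variance
        if between > best_between:
            best_between, best_threshold = between, centers[i]
    return simplified >= best_threshold
```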
Although the present invention has been described with reference to stereoscopic disparity-driven non-linear diffusion, it will be understood that other embodiments of the present invention could readily be implemented by the person skilled in the art without further inventive contribution. For example, the depth values could instead be obtained using an active imaging device comprising a low-power laser range finder, which simultaneously obtains depth information for the respective pixels of an image or image sequence. In addition, the data-driven aspects of the above-described non-linear diffusion process could readily be applied to video sequences by using motion values instead of disparity values; such motion values are determined in a similar way to the disparity values, but from sequential frames of a monoscopic video sequence rather than from stereoscopic image pairs.
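As a purely illustrative sketch of this motion-driven variant, the depth-driven machinery above could be reused with per-pixel motion components in place of the disparity components; estimate_motion below is a hypothetical stand-in for any block-matching or optical-flow estimator and is not a function defined by the patent.

```python
# Motion-driven variant (sketch): substitute motion components estimated between
# two consecutive frames of a monoscopic sequence for the disparity components.
u, v = estimate_motion(frame_t, frame_t_next)   # hypothetical motion estimator
simplified = depth_driven_diffusion(frame_t, u, v)
object_mask = otsu_mask(simplified)
```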

Claims

1. A method of processing an image recorded by an imaging device; said method comprising the steps of:-
a) providing an image comprising an array of adjacent elements, each corresponding to a respective part of the image and having a respective intensity value associated therewith;
b) processing said intensity values to determine an intensity contrast value for each respective element according to differences in intensity values between respective adjacent elements;
c) providing a depth value for each element corresponding to the distance between the imaging device and at least part of an image-forming object represented in the image by the respective element;
d) processing said depth values to determine a depth contrast value for each respective element according to differences in depth values between respective adjacent elements; and,
e) processing said intensity contrast values and said depth contrast values to identify at least one area of the image corresponding to one or more respective objects in the image being processed.
2. A method according to claim 1 wherein said depth values are determined according to the spacing between corresponding points on a stereoscopic image pair.
3. A method according to claim 2 wherein the spacing between said points is determined by matching said corresponding points and estimating a vector value for the relative spacing and direction of said points.
4. A method according to any preceding claim wherein said area is determined by identifying an outline of said respective object or objects in the image.
5. A method according to any preceding claim wherein step e) comprises the step of altering the intensity values of each respective element in accordance with the respective intensity contrast value and the respective depth contrast value of the element.
6. A method according to claim 5 wherein the step of altering the intensity values comprises the step of modifying the respective intensity contrast values according to the respective depth contrast values, and altering the intensity values of the respective elements towards an average intensity value determined by intensity values of surrounding elements if the respective modified intensity contrast value of the element is below a threshold value.
7. A method according to claim 6 wherein the intensity contrast values are modified such that elements having a higher than average depth contrast value have their respective intensity values altered less than elements having a lower than average depth contrast value.
8. A method according to any preceding claim wherein step e) comprises a nonlinear diffusion process for altering element intensity values in accordance with respective intensity contrast values modified in accordance with respective depth contrast values.
9. A method according to any one of claims 5 to 8 further comprising the step of delineating said object or objects from the image.
10. A method according to claim 9 wherein the delineating step comprises the steps of:-
determining a distribution of the altered intensity values of the respective elements;
determining a threshold value or range of values to include all the intensity values of the respective elements of at least one identified area in the image;
selecting elements having modified intensity values within the threshold range or above or below said threshold value; and
delineating image data relating to said selected elements from image data relating to the remaining elements.
11. A method of processing an image in accordance with a non-linear diffusion process; said method comprising the steps of:-
a) providing an image comprising an array of adjacent elements, each corresponding to a respective part of the image and having a respective intensity value associated therewith;
b) processing said intensity values to determine an intensity contrast value for each respective element according to differences in intensity values between respective adjacent elements;
c) identifying from said array of elements object-defining elements corresponding to points on respective objects in the image;
d) altering said element intensity values in accordance with said respective intensity contrast values, whereby said elements are altered to a lesser or greater extent in dependence on whether said respective element is an object element.
12. A method according to claim 11 wherein step c) comprises the step of determining a depth value associated with a disparity field for each element in the image and identifying said object elements from said depth values.
13. A method according to claim 11 wherein step c) comprises the step of determining a motion value associated with a motion vector in a video sequence for each element in the image and identifying said object elements from said motion values.
14. A system configured to implement a method according to any preceding claim.
15. An image processing system for processing an image recorded by an imaging device; said system comprising:-
a data receiver for receiving data relating to an image comprising an array of adjacent elements, each corresponding to a respective part of the image and having a respective intensity value associated therewith;
an intensity value processor configured to determine an intensity contrast value for each respective element according to differences in intensity values between respective adjacent elements;
a depth value processor configured to determine a depth value for each element corresponding to the distance between the imaging device and at least part of an image-forming object represented in the image by the respective element;
a depth contrast value processor configured to determine a depth contrast value for each respective element according to differences in depth values between respective adjacent elements; and,
an object segment processor configured to process said intensity contrast values and said depth contrast values to identify at least one area of the image corresponding to one or more respective objects in the image being processed.
PCT/GB2000/004707 1999-12-10 2000-12-08 Image processing WO2001043071A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US10/129,788 US7050646B2 (en) 1999-12-10 2000-12-08 Image processing system and method for image segmentation using intensity contrast and depth contrast values
GB0212972A GB2372661B (en) 1999-12-10 2000-12-08 Image processing
AU21921/01A AU2192101A (en) 1999-12-10 2000-12-08 Image processing
CA002394591A CA2394591C (en) 1999-12-10 2000-12-08 Image processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP99309943 1999-12-10
EP99309943.1 1999-12-10

Publications (2)

Publication Number Publication Date
WO2001043071A2 true WO2001043071A2 (en) 2001-06-14
WO2001043071A3 WO2001043071A3 (en) 2002-03-21

Family

ID=8241795

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2000/004707 WO2001043071A2 (en) 1999-12-10 2000-12-08 Image processing

Country Status (4)

Country Link
AU (1) AU2192101A (en)
CA (1) CA2394591C (en)
GB (1) GB2372661B (en)
WO (1) WO2001043071A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003044740A1 (en) * 2001-11-20 2003-05-30 Anoto Ab Method and a hand-held device for identifying objects in a sequence of digital images by creating binarized images based on a adaptive threshold value
US7283676B2 (en) 2001-11-20 2007-10-16 Anoto Ab Method and device for identifying objects in digital images
CN102147922A (en) * 2011-05-05 2011-08-10 河南工业大学 Two-dimensional Otsu broken line threshold segmentation method for gray image
US8774512B2 (en) 2009-02-11 2014-07-08 Thomson Licensing Filling holes in depth maps

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994018797A1 (en) * 1993-02-09 1994-08-18 Siemens Aktiengesellschaft Object-oriented segmentation process of stereoscopic images or image sequences

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IMORI T ET AL: "A SEGMENTATION-BASED MULTIPLE-BASELINE STEREO (SMBS) SCHEME FOR ACQUISITION OF DEPTH IN 3-D SCENES" IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS,JP,INSTITUTE OF ELECTRONICS INFORMATION AND COMM. ENG. TOKYO, vol. E81-D, no. 2, 1 February 1998 (1998-02-01), pages 215-223, XP000736981 ISSN: 0916-8532 *
LIANG-HUA CHEN ET AL: "STEREO CORRESPONDENCE BY SURFACE SEGMENTATION" PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS,US,NEW YORK, IEEE, vol. -, 1992, pages 593-598, XP000366545 ISBN: 0-7803-0720-8 *

Also Published As

Publication number Publication date
WO2001043071A3 (en) 2002-03-21
GB2372661A (en) 2002-08-28
CA2394591C (en) 2008-02-05
GB2372661B (en) 2004-04-21
AU2192101A (en) 2001-06-18
CA2394591A1 (en) 2001-06-14
GB0212972D0 (en) 2002-07-17

Similar Documents

Publication Publication Date Title
US7050646B2 (en) Image processing system and method for image segmentation using intensity contrast and depth contrast values
Pan et al. Phase-only image based kernel estimation for single image blind deblurring
Rahtu et al. Segmenting salient objects from images and videos
KR102138950B1 (en) Depth map generation from a monoscopic image based on combined depth cues
KR100829581B1 (en) Image processing method, medium and apparatus
Gao et al. A fast image dehazing algorithm based on negative correction
CN111104943B (en) Color image region-of-interest extraction method based on decision-level fusion
Saha et al. Mutual spectral residual approach for multifocus image fusion
CN108377374B (en) Method and system for generating depth information related to an image
JP2012038318A (en) Target detection method and device
Pan et al. Single-image dehazing via dark channel prior and adaptive threshold
KR101921608B1 (en) Apparatus and method for generating depth information
Ma et al. Defocus blur detection via edge pixel DCT feature of local patches
CA2394591C (en) Image processing
KR101825218B1 (en) Apparatus and method for generaing depth information
EP2930687B1 (en) Image segmentation using blur and color
CN116958863A (en) Feature extraction method, device and storage medium based on thermal imaging video
Ahn et al. Segmenting a noisy low-depth-of-field image using adaptive second-order statistics
Marques et al. Enhancement of low-lighting underwater images using dark channel prior and fast guided filters
Kim et al. Single image dehazing of road scenes using spatially adaptive atmospheric point spread function
Yeung et al. Extracting smooth and transparent layers from a single image
Skosana et al. Edge-preserving smoothing filters for improving object classification
CN113436263B (en) Feature point extraction method and system based on image processing
Naidu et al. Efficient case study for image edge gradient based detectors-sobel, robert cross, prewitt and canny
Mun et al. Occlusion Aware Reduced Angular Candidates based Light Field Depth Estimation from an Epipolar Plane Image

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 10129788

Country of ref document: US

ENP Entry into the national phase in:

Ref country code: GB

Ref document number: 200212972

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 2394591

Country of ref document: CA

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP