US20080056609A1 - Fine Stereoscopic Image Matching And Dedicated Instrument Having A Low Stereoscopic Coefficient - Google Patents
Publication number: US20080056609A1
Authority: US (United States)
Legal status: Abandoned (status is an assumption by Google, not a legal conclusion)
Classifications
- G06T 7/97 — Determining parameters from multiple pictures (image analysis)
- G06T 7/593 — Depth or shape recovery from stereo images
- G06V 10/24 — Aligning, centring, orientation detection or correction of the image
- G06T 2207/10012 — Stereo images (indexing scheme for image analysis; still/photographic image acquisition)
Definitions
- the invention relates to a method and to a system for acquisition and for bringing into correspondence the points in a first image with the points in a second image forming a stereopair.
- Stereoscopy is a method giving the impression of relief from a pair of 2D (two-dimensional) images representing a scene that has been acquired from different viewpoints.
- two images forming a stereopair are acquired using two CCD (charge coupled device) sensors 15 , 16 (in matrix or array form) that lie in the same focal plane 19 ′ and are symmetrical with respect to the straight line A-A′ passing through the center of the observed scene and perpendicular to the focal plane 19 ′.
- CCD sensors allow two images of the observed scene located at a certain distance 18 from the CCD sensors to be acquired.
- the main difficulty encountered when employing such a method is how to properly bring the points in the images of the stereopair into one-to-one correspondence. This is because, since the two images of the stereopair are not taken at the same angle of incidence, a given point in the scene, the position of which in the first image is given by the coordinates (X1, Y1), will have coordinates (X2, Y2) in the second image, where X1 ≠ X2 and Y1 ≠ Y2.
- the term "position difference" (or "difference in position") will be preferred to the term "disparity", although both have the same meaning.
- the principle of correlation is based on measuring a local resemblance between two images. This local resemblance is measured by introducing weighted windows (or a matrix of coefficients) centered on neighborhoods that are homologous as regards geometrical positioning in the image. These weighted windows are called correlation windows.
- the method consists in applying a correlation window 3 centered on the point 40 under investigation in the first image 1 and in seeking its radiometrically homologous point 41 in the second image. This operation is carried out by displacement (in the second image) of the correlation window within a larger window, called the search area 4 .
- the search area 4 is centered on the estimated geometrically homologous point 42 of the current point 40 in the first image.
- the table of correlation values obtained constitutes the correlation sheet. The position difference for which the correlation value is a maximum is then adopted.
- This operation is carried out in succession on a subset of the points in the first image or on all said points. That point in the first image under investigation at a given instant will hereafter be called the current point.
- Each point in the first image may be considered as a signal which it is endeavored to find in the second image by correlation.
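The search procedure described above can be sketched as follows. This is a minimal illustration, assuming integer-pixel shifts and a zero-mean normalized cross-correlation as the resemblance measure; the function names are illustrative and not from the patent.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_point(img1, img2, row, col, win=5, search=10):
    """Integer-pixel search for the point of img2 homologous to img1[row, col].

    A (2*win+1)^2 correlation window centred on the current point is compared
    with every position of a (2*search+1)^2 search area centred on the
    estimated geometrically homologous point; the shift maximizing the
    correlation is adopted, as in the correlation principle described above.
    """
    w = img1[row - win:row + win + 1, col - win:col + win + 1]
    best, best_shift = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = row + dr, col + dc
            cand = img2[r - win:r + win + 1, c - win:c + win + 1]
            if cand.shape != w.shape:
                continue  # window falls outside the second image
            score = ncc(w, cand)
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift, best
```

The table of scores visited by the double loop is exactly the correlation sheet; here only its maximum is kept.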
- a point 40 in the first image and a point 41 in the second image are radiometrically homologous points if they correspond to the same point in the scene represented in the images of the stereopair.
- the precision on determining the third coordinate of a point in the scene is given approximately by:
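The formula is not reproduced in this extract; as an assumption consistent with the later discussion of the stereoscopic coefficient, the standard photogrammetric relation linking altimetric precision to disparity precision is:

$$\sigma_z \approx \frac{\sigma_d}{b/h}$$

where $\sigma_d$ is the precision of the position-difference measurement in the image plane and $b/h$ is the stereoscopic coefficient. For a fixed matching precision, a smaller $b/h$ degrades the precision on the third coordinate, which is why precise subpixel matching is essential at low stereoscopic coefficients.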
- One of the objects of the present invention is to provide a method of matching a stereopair that does not have the drawbacks of the method described above, and thus implement a method allowing precise matching of the points of a stereopair for small stereoscopic coefficients.
- the matching of the points of a stereopair is performed by generating disparity maps.
- the position of each point in the disparity map corresponds to the position of a point in the first image
- the value of each point in the disparity map represents the position difference between the point in the first image and its radiometrically homologous point in the second image.
- the value of the point with coordinates (A,B) of the disparity map is representative of the position difference between the point with coordinates (A,B) in the first image and its radiometrically homologous point in the second image.
- stereopair matching methods based on the correlation principle provide precise matching of the points only for a large stereoscopic coefficient.
- the stereoscopic coefficient (b/h) is the ratio of the separation 19 between the two CCD sensors 15 , 16 lying in the same focal plane 19 ′ (cf. FIG. 3 ) and the distance 18 between the observed scene and the CCD sensors.
- Stereopair acquisition and matching systems comprise an acquisition system and a processing system. These two systems are in general far apart and communicate via wire or wireless communication means.
- the processing systems allow a stereopair to be matched. These systems employ stereopair matching methods. In the case of a processing system employing a method based on the correlation principle, it is therefore necessary to have a large stereoscopic coefficient for precise matching of the images of a stereopair.
- Stereopair acquisition systems providing the processing systems with the images to be processed must therefore be designed in such a way that they meet this condition (large stereoscopic coefficient of the stereopair).
- the distance 18 between the acquisition system and the observed scene is very large.
- stereopair acquisition systems in space comprise two optical instruments 15 ′, 16 ′ (satellites) each having a CCD sensor 15 , 16 .
- Another object of the present invention is to provide a stereopair acquisition and matching unit comprising a simplified acquisition system, for the acquisition of a stereopair with a small stereoscopic coefficient, and a processing system employing the stereopair matching method according to the present invention.
- the invention relates to a processing system for an assembly for the acquisition and matching of a stereopair of images which comprises an acquisition system for acquiring a stereopair of images with a stereoscopic coefficient of a few hundredths and the processing system for processing the stereopair acquired, the processing system comprising:
- the invention also relates to a method for matching a stereopair with a stereoscopic coefficient of a few hundredths, the method comprising the following steps:
- the invention also relates to an assembly for the acquisition and matching of a stereopair of images, comprising a system for the acquisition of a stereopair of images and a system for processing the stereopair acquired, in which the system for acquisition of the stereopair comprises a single acquisition instrument comprising two CCD sensors in the optical focal plane, each CCD sensor allowing the acquisition of one image, the acquisition system being designed to operate with stereoscopic coefficients of a few hundredths and the processing system comprises:
- the invention also relates to an acquisition system for an assembly for the acquisition and matching of a stereopair of images, comprising the system for acquisition of a stereopair of images and a system for processing the stereopair acquired, the stereopair acquisition system comprising a single acquisition instrument comprising two CCD sensors in the optical focal plane, each CCD sensor allowing acquisition of one image of a stereopair of images, the acquisition system being designed to operate with stereoscopic coefficients of a few hundredths.
- FIG. 1 illustrates a scene shown as a stereopair
- FIG. 2 illustrates a view of a scene of the column component of an injected sinusoidal offset, and the result of the measurement of this offset by correlation
- FIG. 3 illustrates a perspective view of a stereoscopic system
- FIG. 4 illustrates a graph showing the degree of correlation along one direction of a SPOT 5-type image as a function of the position difference between the current point in the first image and the current point in the second image, the current point corresponding in each image to the correlated point or to the point at the center of the window (cf. page 23);
- FIG. 5 illustrates a graph showing the degree of correlation of a window of the 2×2 square hypomode type as a function of the position difference between the current point in the first image and the current point in the second image, the current point corresponding in each image to the correlated point or to the point at the center of the window;
- FIG. 6 illustrates a graph showing the degree of correlation of a prolate-type window as a function of the position difference between the current point in the first image and the current point in the second image, the current point corresponding in each image to the correlated point or to the point at the center of the window;
- FIG. 7 illustrates a view of a scene, points in the scene that are preserved after applying a row criterion, points in the scene that are preserved after applying a column criterion, and points in the scene that are preserved after applying a row and a column criterion;
- FIG. 9 illustrates a view of an instrument dedicated to stereoscopy
- FIG. 11 illustrates a view of a pair of images each having three impulse responses or modulation transfer functions (MTFs).
- FIG. 12 illustrates correlation and barycentric correction steps of the image matching method.
- the method presented here makes it possible to compute precise maps of the disparities between stereopairs having small stereoscopic coefficients with the same altimetric precision as for large stereoscopic coefficients. This method operates down to very low stereoscopic coefficients (0.01) without degrading the altimetric precision.
- Determination of the continuous model of the unidirectional correlation will result in the generation of an equation for linking the measurement of the position difference performed by the correlation operation along a processing direction as a function of the actual value of the position difference.
- This equation will demonstrate the abovementioned drawback of the “adhesion” correlation.
- An example of processing that allows this adhesion effect to be limited will then be presented.
- a second relationship, namely a local directional morphological condition, on which the precision of the measured position difference depends, will also be obtained from the continuous modeling of the unidirectional correlation.
- the set of epipolar planes is defined as being the set of planes passing through the two optical centers C 1 and C 2 .
- the points P, P 1 and P 2 form part of the scene 38 , the first image 36 and the second image 37 of the stereopair, respectively. It may be seen that, for all the points in the scene 38 , the point P and its image points P 1 and P 2 lie in one and the same epipolar plane 35 .
- the lines corresponding to the intersection of the image planes 36 and 37 with the epipolar plane 35 are the epipolar lines. These are known since the positions of the CCD sensors C 1 and C 2 are known.
- the continuous formulation of the nonlinear correlation coefficient is performed in the case of a stereopair of images with a low level of noise.
- the correlation coefficient along the direction of the unit vector v, in a neighborhood centered on the point t, is:
- the aim is to find the vector position difference Δt that maximizes this correlation coefficient. This is because the position difference for which the correlation is a maximum corresponds to the position difference between the coordinates of a point in the first image and the coordinates of its radiometrically homologous point in the second image.
- let u(t) be the vector function which associates the position difference Δt with any point t in the image.
- $$\rho_0(u_0) = \frac{\int I(x)\,\tilde{I}(x+u_0)\,dx}{\sqrt{\int I^2(x)\,dx \,\int \tilde{I}^2(x+u_0)\,dx}}$$
- the objective is to find u 0 such that this correlation coefficient is a maximum.
- a pixel is the smallest element of an image which can be individually assigned a gray level representative of an intensity.
- ⁇ v ⁇ ( u O ) 1 - ⁇ 0 * [ I ′ ⁇ ( x ) ⁇ ( ⁇ v ⁇ ( x ) + u 0 ) ] 2 2 ⁇ ⁇ I ⁇ ⁇ 0 2 + ⁇ 0 * [ I ⁇ ( x ) ⁇ I ′ ⁇ ( x ) ⁇ ( ⁇ ⁇ ( x ) + u 0 ) ] 2 2 ⁇ ⁇ I ⁇ ⁇ 0 4
- I′ is the derivative of I along the unit vector v.
- a ⁇ ( t ) [ ⁇ * II ′ ] ⁇ I ⁇ ⁇ t 2 .
- This fundamental correlation equation makes it possible to link the position difference measured by the correlation operation to the actual value of the position difference, and to do so without passing via the explicit computation of the correlation—this computation is always very costly in terms of time.
- the barycentric correction method is based on the following observation.
- the fundamental equation at a coordinate point, for example 0, becomes, if the image has a quasi-density concentrated at x0′:
- the image adhesion effect assigns to the current point the true position difference resulting predominantly from another more singular point.
- the barycentric correction method will therefore consist in seeking the "most singular" position in the correlation window used at the current point and in assigning to this position the value of the position difference measured at the current point.
- the correlation window φ is positive and positioned at 0.
- the position is thus:
- $$\overline{OG} = \frac{\int \overline{OP}(x)\,\varphi(x)\,\gamma(x)\,dx}{\int \varphi(x)\,\gamma(x)\,dx}$$
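In discrete form, the barycenter above is a weighted average of the positions inside the window. A minimal 1-D sketch follows, where the window coefficients stand for φ and a per-position singularity measure (for example a squared image gradient) stands for γ; both arguments are hypothetical stand-ins for the patent's quantities.

```python
import numpy as np

def barycenter(weights, gamma):
    """Discrete barycenter of the positions inside a 1-D correlation window.

    `weights` plays the role of the window coefficients phi(x) and `gamma`
    the role of the singularity measure gamma(x); positions are indexed
    relative to the window centre.
    """
    n = len(weights)
    x = np.arange(n) - n // 2          # positions relative to the centre
    w = np.asarray(weights, dtype=float) * np.asarray(gamma, dtype=float)
    return float((x * w).sum() / w.sum())
```

With a flat window and a singularity measure concentrated one pixel to the right of the centre, the barycenter lands on that pixel, which is where the measured position difference would be reassigned.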
- the correlation curvature signal-to-noise ratio is then defined by:
- the correlation condition is: SNRc greater than a threshold, preferably of the order of 10 (one order of magnitude), which makes it possible to choose the points adopted. If this condition is met, the noise can be neglected when computing the fundamental and morphological equations, and the model established earlier is applicable.
- $$\rho_0(u_0) = \frac{\left((\varphi_0 * I)\,\tilde{I}\right)(u_0)}{\|I\|_{\varphi_0}\,\sqrt{(\varphi * \tilde{I}^2)(u_0)}}$$
- a preferred solution for computing the correlation coefficient is to compute the numerator and denominator separately, and then to compute the quotient therefrom. This is because the direct computation involves squared derivatives, for example in the expression involving I′(x)² (cf. the fundamental equation).
- a two-times zoom on the images of the stereopair is performed.
- This zoom consists, for each of the images of the stereopair, in doubling the number of row points by inserting a point of unknown value between two points of known value in the image, in doubling the number of column points, by inserting a point of unknown value between two points of known value in the image, and in determining the unknown values by interpolation using the known values.
- the interpolation on the two-times zoom is an estimation of intermediate values in a series of known values. The interpolation on the two-times zoom of the images must be precise.
- the method uses long separable interpolation filters of the sinc type for the various interpolations.
- the image-interpolation sinc filter used for performing a zoom will preferably be of size 35.
- a sinc filter will also be used for performing the interpolations on the correlation sheet. It will be preferable to use a size 11 sinc filter for the interpolations on the correlation sheet during the subpixel search for the correlation maximum.
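A size-35 separable interpolation filter of the sinc type can be sketched as follows. The Hamming apodization and the cutoff value are assumptions (the text only specifies a long separable sinc-type filter); the cutoff of 0.25 corresponds to low-pass filtering after zero insertion for a two-times zoom.

```python
import numpy as np

def sinc_filter(size=35, cutoff=0.25):
    """Windowed-sinc low-pass interpolation filter.

    `size` is the number of taps (35, as suggested in the text) and
    `cutoff` the normalized cutoff frequency; the Hamming window is an
    illustrative choice of apodization. The filter is normalized to unit
    DC gain.
    """
    n = np.arange(size) - (size - 1) / 2      # tap positions, centred
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(size)
    return h / h.sum()
```

Applied separably along rows and columns after inserting the unknown points, such a filter estimates the intermediate values from the known ones; a shorter filter of the same kind (size 11, per the text) would serve for the subpixel interpolation of the correlation sheet.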
- the correlation coefficient between the digitized images of the stereopair is computed.
- a correlation window 3 is needed to compute the correlation coefficient between the two images of the pair.
- the correlation windows may be of various types. However, certain window types minimize the impact of the error on the position difference measurement. The analysis that has led to the determination of a preferred type of correlation window will now be presented.
- in the vicinity of the maximum, the correlation sheet must be convex so as to ensure convergence of the algorithm searching for the principal maximum; if the correlation sheet is not convex, several points in the second image of the stereopair may potentially correspond to the current point in the first image.
- the finest element that can be found in an image is equal to the impulse response. Consequently, it is sufficient to numerically study the shape of the correlation sheet for a pair of identical images reduced to three impulse responses separated by a distance δ1 belonging to the [0; 7] pixel interval, and to seek the maximum distance between the impulse responses while maintaining a convex correlation sheet. This value then corresponds to the maximum possible exploration and gives meaning to the concept of "small" position difference used in the introduction of the image model.
- the 1D analysis is sufficient in the case of separable impulse responses.
- two identical images 1, 2 having impulse responses 301, 302, 303 separated by distances δ1 304, as illustrated in FIG. 11, are taken.
- the fact of taking two identical images in order to form the stereopair means that the points in the second image that are geometrically and radiometrically homologous to a point in the first image are coincident.
- the correlation window 3 is then slid along the analysis direction, along the second image 2 .
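The experiment can be reproduced numerically. The sketch below uses Gaussian blobs as a stand-in impulse response and a flat correlation window; both choices are assumptions made for illustration, not the patent's SPOT 5, hypomode or prolate windows.

```python
import numpy as np

def gaussian_impulse(n, center, sigma=2.0):
    """1-D Gaussian blob used as a stand-in impulse response."""
    x = np.arange(n)
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def correlation_sheet(img, window, max_shift=8):
    """Correlation of an image with a shifted copy of itself, inside a
    centred correlation window, for shifts in [-max_shift, max_shift]."""
    centre = len(img) // 2
    half = len(window) // 2
    ref = img[centre - half:centre + half + 1] * window
    sheet = []
    for d in range(-max_shift, max_shift + 1):
        cand = img[centre - half + d:centre + half + 1 + d] * window
        denom = np.sqrt((ref ** 2).sum() * (cand ** 2).sum())
        sheet.append((ref * cand).sum() / denom)
    return np.array(sheet)

# two identical "images" reduced to three impulse responses delta pixels apart
delta = 6
n = 128
img = (gaussian_impulse(n, 64 - delta) + gaussian_impulse(n, 64)
       + gaussian_impulse(n, 64 + delta))
sheet = correlation_sheet(img, np.ones(9))
```

Plotting `sheet` against the shift reproduces the kind of graph shown in FIGS. 4 to 6: the principal maximum sits at zero shift, and when the impulse separation exceeds the window's exploration capability, secondary maxima appear and the sheet loses convexity.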
- the results of the correlation are given in FIGS. 4, 5 and 6 for the following cases: SPOT 5 THR (FIG. 4); standard hypomode (clustering by 2×2 packets of pixels while maintaining the same sampling) (FIG. 5); and prolate (FIG. 6). These figures are graphs; the functions plotted in them represent the correlation sheets.
- these maxima correspond to the case 300 in which the correlation window 3 is placed over the true point radiometrically homologous to the current point in the first image, and to other cases in which the correlation window is shifted relative to the true radiometrically homologous point.
- the correlation sheet has four maxima.
- the correlation sheet therefore gives us five points homologous to the current point in the first image. Now, only one of these measured homologous points is the true point homologous to the current point in the first image. This is the reason why it is necessary to ensure that the correlation sheet is convex. With a window of the SPOT 5 type, the maximum position difference for maintaining a convex correlation sheet is therefore 1 pixel.
- the standard hypomode allows exploration over 2 pixels. This means that the maximum position difference that can be observed between two homologous points is two pixels.
- the prolate function allows exploration over 4 pixels.
- the prolate solution ensures the best enlarged convexity of the correlation sheet and strong continuity properties over the position difference measurement.
- the prolate function possesses the property of being the positive function whose supports are the most concentrated simultaneously in space and in frequency. A preferred method will therefore use a prolate function as correlation function.
- This measurement precision search means preferably opting for a correlation window equal to the prolate function (the reader will have understood that it is possible to opt for another type of correlation window, such as a correlation window of the hypomode or SPOT 5 type, or any other type of correlation window known to those skilled in the art).
- the adhesion effect is smaller for windows of smaller spatial support (i.e. of reduced size).
- a prolate function will be used that has the smallest possible size compatible with the conditions for applying the fine correlation model described above and recalled below.
- the search for the optimum window then amounts to finding, at any point, the prolate correlation window of maximum circular spectral support that belongs to the series {Pn} and meets the morphological condition. This is achieved simply by successive trials with prolate functions of decreasing spectral support.
- a window of maximum spectral support is a window of minimum spatial support. Consequently, the aim is to find the prolate correlation window of smallest size (spatial support) that meets the morphological condition described above.
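A prolate-type window can be approximated numerically by the classic alternating projections between a compact spatial support and a disc-shaped spectral support; the iterate converges to the dominant prolate function. This is a sketch under assumed grid and bandwidth parameters, not the patent's construction of the series {Pn}.

```python
import numpy as np

def prolate_window(size, radius, grid=64, iters=200):
    """Approximate circular-spectral-support prolate window of side `size`.

    Alternately limits the spectrum to a disc of `radius` DFT bins (on a
    `grid` x `grid` array) and the support to a central `size` x `size`
    square, renormalizing each pass; this power iteration converges to the
    dominant prolate function of the two supports.
    """
    fy = np.fft.fftfreq(grid)[:, None] * grid
    fx = np.fft.fftfreq(grid)[None, :] * grid
    disc = (fx ** 2 + fy ** 2) <= radius ** 2     # spectral support
    lo = (grid - size) // 2
    mask = np.zeros((grid, grid), bool)
    mask[lo:lo + size, lo:lo + size] = True       # spatial support
    w = mask.astype(float)
    for _ in range(iters):
        w = np.fft.ifft2(np.fft.fft2(w) * disc).real   # band-limit
        w *= mask                                      # space-limit
        w /= np.linalg.norm(w)                         # renormalize
    win = np.clip(w[lo:lo + size, lo:lo + size], 0, None)
    return win / win.sum()
```

The successive trials described above would then call such a constructor with decreasing spectral radius, keeping the first window that meets the morphological condition.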
- the exploration boundary imposed by the type of correlation window on the one hand, and by the search at any point for the smallest window size on the other, does not allow the radiometrically homologous points of a stereopair to be determined when the position difference between the points is large.
- if an image is displayed on a screen measuring 21 cm along a row by 18 cm along a column, and if this image has 210 pixels along a row and 180 pixels along a column, then this image has a level of resolution of 10 pixels per centimeter.
- a reduction in level of resolution (by passing from a fine level of resolution to a coarser level of resolution) will correspond to a reduction in the number of pixels along a row and along a column for displaying the image.
- with a reduction factor of 10 in the level of resolution, there will now be only 21 pixels along a row and 18 along a column, i.e. a level of resolution of one pixel per centimeter.
- a dyadic multiscale (multiple levels of resolution) approach is necessary in order to meet the correlation sheet convexity condition irrespective of the amplitude of the position differences.
- the pair of images is degraded in terms of resolution (i.e. the level of resolution is reduced) through convolution by a prolate of spectral support radius r_c/s, where r_c is the radius corresponding to the image zoomed by a factor of 2 and s is the current level of resolution.
- Lowering the level of resolution requires the holes to be plugged, that is to say requires the value of the points in the disparity map that are of unknown value to be estimated.
- a point of unknown value is a point in the disparity map whose value is unknown.
- the holes are plugged iteratively by convolution with a circular prolate function, for example of radius 7 .
- the missing points are in turn assigned values.
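The iterative hole-plugging can be sketched as follows. A flat square kernel stands in for the circular prolate of radius 7 mentioned in the text, and the helper names are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _conv2(a, kernel):
    """Same-size 2-D correlation with zero padding."""
    r = kernel.shape[0] // 2
    p = np.pad(a, r)
    v = sliding_window_view(p, kernel.shape)
    return (v * kernel).sum(axis=(-2, -1))

def plug_holes(disparity, known, kernel_radius=7, max_iter=50):
    """Iteratively fill unknown disparity-map points by convolution.

    `known` is a boolean mask of valid points. Each pass assigns to every
    still-unknown point the kernel-weighted mean of its known neighbours;
    newly filled points become known, so the values propagate inwards
    until no holes remain.
    """
    d = np.where(known, disparity, 0.0).astype(float)
    k = known.copy()
    size = 2 * kernel_radius + 1
    kernel = np.ones((size, size))      # flat stand-in for the prolate
    for _ in range(max_iter):
        if k.all():
            break
        num = _conv2(d * k, kernel)
        den = _conv2(k.astype(float), kernel)
        new = (~k) & (den > 0)
        d[new] = num[new] / den[new]
        k |= new
    return d
```

With a prolate kernel instead of the flat one, nearby known points would simply receive larger weights, which smooths the filled values.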
- an exhaustive map of the disparities between the points in the first image and their radiometrically homologous points in the second image is computed.
- the iteration at each level of resolution consists, by successive interpolation, in geometrically correcting, using this map along one direction, for example the epipolar direction, the second image in order to make it more and more geometrically similar to the first image.
- the operations of interpolating the second image, according to the computed successive unidirectional disparity maps require the application of a formula for the composition of the disparity maps.
- the composition is carried out between the map of the disparities between the interpolated image and the first image and the disparity map that has allowed the interpolated image to be generated (i.e. the disparity map for the preceding iteration).
- the interpolation errors therefore do not build up through the iterations, as we are always in the case of composition of at most two disparities.
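The composition of two unidirectional disparity maps can be sketched as follows. If the second image was resampled with the previous map and a new map is then measured against the resampled image, the total disparity chains the two shifts; the sign convention used here is an assumption.

```python
import numpy as np

def compose_disparities(d_prev, d_new):
    """Compose two 1-D unidirectional disparity maps.

    The total disparity is d(x) = d_new(x) + d_prev(x + d_new(x)): the new
    shift, plus the previous map evaluated at the shifted (generally
    subpixel) position, here by linear interpolation.
    """
    x = np.arange(len(d_prev), dtype=float)
    return d_new + np.interp(x + d_new, x, d_prev)
```

Because each iteration composes at most the previous total map with one new map, interpolation errors do not accumulate through the iterations, as the text notes.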
- a point of unknown value is a point in the first image whose matching with a point in the second image has been rejected (i.e. a point whose matching is assumed to be erroneous).
- the criterion for rejecting falsely correlated points consists in comparing, point by point, the correlation curvatures between the first image and the disparity-corrected second image (interpolated within the reference group).
- a point is rejected at each level of resolution when the difference between the curvature values of the two images exceeds a certain value. This value is equal at most to the minimum difference at the four points associated with the current point in the first image (the associated points are the neighbors located above, below, to the right and to the left of the current point).
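The rejection criterion can be sketched point by point as follows; border handling is simplified (border points are rejected outright), and the reading of the threshold as the minimum neighbour difference is an interpretation of the text.

```python
import numpy as np

def reject_points(curv1, curv2):
    """Keep points whose correlation-curvature difference between the first
    image and the disparity-corrected second image does not exceed the
    minimum difference observed at the four axis-aligned neighbours."""
    diff = np.abs(curv1 - curv2)
    keep = np.zeros_like(diff, dtype=bool)
    h, w = diff.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh_min = min(diff[i - 1, j], diff[i + 1, j],
                            diff[i, j - 1], diff[i, j + 1])
            keep[i, j] = diff[i, j] <= neigh_min
    return keep
```

Points failing the test become points of unknown value in the disparity map and are later re-estimated by the hole-plugging step.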
- the unidirectional fine correlation method was explained above so as to make it easier to understand the model.
- the bidirectional fine correlation method will now be presented.
- the images of the stereopair are resampled in epipolar geometry along the rows or columns by interpolation.
- the search for 2D disparity tables is made alternately along the rows and the columns.
- the major difficulty in reconstructing the disparity table is generally that along the epipolar direction.
- the relief-induced position differences are in general very high-frequency differences, while those along the orthogonal direction are often induced by vibrations of the carrier, and are low-frequency differences. This allows larger processing windows to be used along the direction perpendicular to the epipolar lines. This difference will of course be managed by the choice of a larger signal-to-noise ratio along the direction orthogonal to the epipolar lines than along the epipolar direction.
- FIG. 7 shows the mask 23 for the points retained after applying the geometrical curvature criterion to the countryside-type image 20 with a 9×9 prolate window.
- This mask 23 for the retained points is obtained by making the mask 21 for the retained points after application of the row criterion intersect with the mask 22 for the retained points after application of the column criterion.
- the black areas in the image correspond to the points that have been rejected after applying the morphological condition.
- the white points represent the retained points.
- the images in FIG. 7 are shown in black and white for displaying the retained points better.
- the image thus interpolated is then analyzed along its orthogonal direction. This method is iterated.
- the computation method based on the fine correlation method described above, will now be explained in detail with reference to FIG. 11 .
- the correlation method is automatically tailored to the image as soon as the correlation curvature signal-to-noise ratio SNR c has been set.
- This computation is a dyadic multiscale (multiple levels of resolution) method. This means that the processing takes place for various levels of resolution.
- the number of levels of resolution is known as soon as the maximum position difference along a row and along a column is known.
- the computation method based on the fine correlation method is multidirectional.
- the sizes of the prolate correlation windows are automatically computed at any point and for all the levels of resolution with correlation windows that may be very different in the epipolar direction from the orthogonal direction. They are set by the correlation curvature signal-to-noise ratios that are different along the epipolar direction and the orthogonal direction.
- the method will be to increase the levels of resolution, that is to say going from the coarsest resolution to the finest resolution.
- the input images are to be filtered, during the first pass around the main processing loop of the correlation method. This makes it possible to reduce input image noise. Therefore the following are carried out:
- the data preparation step is carried out by performing the following steps:
- the data processing step is carried out. This step comprises the following substeps, some of which are illustrated in FIG. 12 . These substeps are repeated for each point in the image and along each processing direction.
- This computation is carried out for all the points in the first image 2000 and makes it possible to obtain an intermediate disparity map 2005 (i.e. a map in which in particular the barycentric correction technique has not been applied). The following are then carried out:
- the value of the position difference computed for the current point is shifted, in the disparity map, to the position of the barycenter of the points in the first image that are contained within the correlation window (used for computing the position difference between the current point and its assumed radiometrically homologous point).
- the processing is carried out for all the levels of resolution, eliminating points computed with a prolate correlation window containing a smaller prolate correlation window, and doing so along both processing directions. What is therefore obtained as output of the method is the disparity map for the finest level of resolution.
- the output data of the method is obtained.
- This consists of tables of sizes equal to the size of the first and second image. These tables are:
- This acquisition and matching unit comprises an acquisition system and a processing system.
- This acquisition and processing unit allows acquisition and processing of a stereopair having a low stereoscopic coefficient.
- the acquisition and processing unit has the advantage of limiting the hidden parts, that is to say parts appearing in only one of the two images; it allows, for example, the streets to be seen in an urban environment comprising skyscrapers.
- the acquisition system may be integrated into or far apart from the processing system. When the acquisition and processing systems are far apart, they communicate via wire or wireless communication means.
- the acquisition system allows the acquisition of the input data of the method (step A of the method).
- This acquisition system is for example a satellite and includes communicating means (for communicating with the processing system), processing means (of the processor type), memory means (for storing the acquired images) and an optoelectronic detector (optical system+CCD sensors).
- the processing system is programmed to carry out the steps of the matching method described above. This processing system itself allows the matching of the stereopair to be carried out.
- the processing system is for example a workstation that includes memory (RAM, ROM) means that are connected to processing means, such as a processor, display means, such as a display screen, and inputting means, such as a keyboard and a mouse.
- processing means such as a processor, display means, such as a display screen, and inputting means, such as a keyboard and a mouse.
- the processing system is connected to communication means so as to receive the images to be matched that are provided by the acquisition system.
- the results obtained by four stereopair matching methods are presented in FIG. 8 (precision as a function of b/h), these methods being: the standard correlation method using a prolate correlation window (26); the standard correlation method using a constant correlation window (25); the fine correlation method using a prolate correlation window (27); and the fine correlation method using a prolate correlation window with the points resulting from windows including smaller windows being removed (28).
- the images of the stereopair are images of Marseilles with a sampling pitch of one meter and a transfer function complying approximately with the Shannon principle (value close to 0 at the cutoff frequency).
- the signal-to-noise ratio of the images is equal to 100.
- the acquisition of the images of the stereopair is a matrix acquisition. These images are computed from an orthophotograph sampled at one meter and from a numerical model of the terrain with submetric precision covering the same area.
- Stereopairs are generated for several b/h values, this coefficient taking values between 0.005 and 0.25. The coefficient is obtained by a single aim-off in pitch mode.
- the precision with the fine correlation method is twice as good as that measured with the conventional method.
- the fine correlation has a constant precision for b/h values between 0.01 and 0.15.
- the altimetric precision, the standard deviation of which is plotted on the y-axis in pixel units, is better than one pixel.
- This method achieves subpixel precision with small stereoscopic coefficients (b/h).
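Why subpixel disparity precision matters at low b/h can be seen from the classical stereo relation dz ≈ dx / (b/h), where dx is the planimetric disparity error. The sketch below is illustrative only; the function name and the numeric values are hypothetical, not figures stated in the patent.

```python
def altimetric_std(disparity_std_px, pixel_size_m, b_over_h):
    """1-sigma altimetric error in metres from the disparity measurement
    error, via the classical stereo relation dz = dx / (b/h)."""
    return disparity_std_px * pixel_size_m / b_over_h

# With a 1 m pixel and b/h = 0.02, a (hypothetical) disparity error of
# 0.02 pixel maps to a 1 m altimetric error: the small stereoscopic
# coefficient amplifies the disparity error by a factor 1 / (b/h) = 50.
sigma_z = altimetric_std(0.02, 1.0, 0.02)
```

This amplification is exactly why a matching method whose disparity error stays subpixel (and roughly constant) allows the stereoscopic coefficient to be lowered without degrading altimetric precision.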
- the recommended method, namely fine correlation without inclusion of windows, makes it possible to reject shadow points, which are often aberrant.
- the degree of correlation is close to 1 if the shadow effect is neglected.
- the stereopair matching method described above therefore allows stereoscopic images to be processed for a small stereoscopic coefficient (low b/h) with the same altimetric precision as for large stereoscopic coefficients.
- This method operates down to very small stereoscopic coefficients (0.01) without the altimetric precision being degraded.
- This method permits a novel design of space acquisition/photographing systems for acquiring stereopairs.
- a stereoscopic coefficient (b/h) close to 0.02 limits the homologous lines of sight of the CCD sensors to a value of less than ±1°.
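The ±1° figure can be checked with elementary geometry: for two symmetric lines of sight whose convergence angle has tangent b/h, each line deviates from the bisecting direction by half of arctan(b/h). This is a simplified small-angle check, not the patent's own derivation.

```python
import math

# Total convergence angle between the two homologous lines of sight
# for b/h = 0.02 (idealized geometry, attitude perturbations ignored).
total_deg = math.degrees(math.atan(0.02))  # about 1.15 degrees
half_deg = total_deg / 2                   # each line of sight: < 1 degree
```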
- the acquisition system comprises a single acquisition instrument (not shown) having a single optical system (not shown) and two symmetrical CCD sensors 31 , 32 in the optical focal plane.
- Each CCD sensor shown in FIG. 9 is a linear array consisting of detectors 33 a , 33 b , 33 c .
- These detectors 33 a , 33 b , 33 c are, for example, light-sensitive CCD photodiodes that convert the light signal into an electric current proportional to the light intensity.
- These detectors 33 a , 33 b , 33 c are placed side by side along a line and form the linear array 33 .
- Each detector 33 a , 33 b , 33 c is responsible for the observation of one pixel: it captures the light coming from one pixel of the terrain.
- Each linear array 33 , 34 allows the acquisition of one row of the image.
- the linear array 33 allows the acquisition of one row of the first image and the linear array 34 the acquisition of one row of the second image.
- the first and second images of the stereopair are therefore acquired row by row by the linear CCD arrays 33 , 34 as the satellite moves around its orbit (between time t and t+ ⁇ t).
- the rows of images acquired by the first and second linear arrays 33 , 34 are stored in memory means.
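The row-by-row build-up of the stereopair by the two linear arrays can be sketched as follows. This simulation is a deliberate simplification (pure along-track motion, attitude perturbations ignored, hypothetical function and variable names), not the patent's acquisition model: the two arrays in the same focal plane see the same ground row a fixed number of time steps apart.

```python
import numpy as np

def acquire_pushbroom(scene, n_rows, row_offset):
    """Simulate two linear arrays in one focal plane acquiring a stereopair
    row by row as the platform moves: at each time step t the first array
    sees ground row t and the second array sees ground row t + row_offset."""
    img1 = np.stack([scene[t] for t in range(n_rows)])
    img2 = np.stack([scene[t + row_offset] for t in range(n_rows)])
    return img1, img2

# A synthetic "terrain" whose rows equal their index, making the
# along-track offset between the two acquired images directly visible.
scene = np.tile(np.arange(30, dtype=float)[:, None], (1, 4))
img1, img2 = acquire_pushbroom(scene, n_rows=10, row_offset=3)
```

In a real acquisition the offset would correspond to the time Δt between the two viewing directions, and the disparity of interest is the small residual shift caused by terrain relief on top of this constant offset.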
- the acquisition system sends (via wireless communication means) the stereopair of images 1 , 2 to the processing system.
- This processing system is preferably based on the ground and allows the points in the stereopair to be matched.
- the stereopair acquisition system will include an optoelectronic sensor comprising a single optical system and two symmetrical CCD sensors in the focal plane.
- the fine stereopair matching method employed in the processing system remains valid for acquisition systems comprising a single space instrument consisting of two matrices or two linear arrays whenever the attitude perturbations are corrected or negligible.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Measurement Of Optical Distance (AREA)
- Stereoscopic And Panoramic Photography (AREA)
- Image Analysis (AREA)
- Ultra Sonic Daignosis Equipment (AREA)
- Eye Examination Apparatus (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0403143 | 2004-03-26 | ||
FR0403143A FR2868168B1 (fr) | 2004-03-26 | 2004-03-26 | Appariement fin d'images stereoscopiques et instrument dedie avec un faible coefficient stereoscopique |
PCT/FR2005/000752 WO2005093655A1 (fr) | 2004-03-26 | 2005-03-29 | Appariement fin d'images stereoscopiques et instrument dedie avec un faible coefficient stereoscopique |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FR2005/000752 A-371-Of-International WO2005093655A1 (fr) | 2004-03-26 | 2005-03-29 | Appariement fin d'images stereoscopiques et instrument dedie avec un faible coefficient stereoscopique |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/717,045 Continuation US8064687B2 (en) | 2004-03-26 | 2010-03-03 | Fine stereoscopic image matching and dedicated instrument having a low stereoscopic coefficient |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080056609A1 true US20080056609A1 (en) | 2008-03-06 |
Family
ID=34944459
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/594,257 Abandoned US20080056609A1 (en) | 2004-03-26 | 2005-03-29 | Fine Stereoscopic Image Matching And Dedicated Instrument Having A Low Stereoscopic Coefficient |
US12/717,045 Expired - Fee Related US8064687B2 (en) | 2004-03-26 | 2010-03-03 | Fine stereoscopic image matching and dedicated instrument having a low stereoscopic coefficient |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/717,045 Expired - Fee Related US8064687B2 (en) | 2004-03-26 | 2010-03-03 | Fine stereoscopic image matching and dedicated instrument having a low stereoscopic coefficient |
Country Status (7)
Country | Link |
---|---|
US (2) | US20080056609A1 (de) |
EP (1) | EP1756771B1 (de) |
AT (1) | ATE495511T1 (de) |
DE (1) | DE602005025872D1 (de) |
FR (1) | FR2868168B1 (de) |
IL (1) | IL178299A (de) |
WO (1) | WO2005093655A1 (de) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101203881B (zh) * | 2005-06-23 | 2015-04-22 | 皇家飞利浦电子股份有限公司 | 图像和相关数据的组合交换 |
IL191615A (en) * | 2007-10-23 | 2015-05-31 | Israel Aerospace Ind Ltd | A method and system for producing tie points for use in stereo adjustment of stereoscopic images and a method for identifying differences in the landscape taken between two time points |
ES2361758B1 (es) | 2009-12-10 | 2012-04-27 | Universitat De Les Illes Balears | Procedimiento de establecimiento de correspondencia entre una primera imagen digital y una segunda imagen digital de una misma escena para la obtención de disparidades. |
FR2977469B1 (fr) | 2011-07-08 | 2013-08-02 | Francois Duret | Dispositif de mesure tridimensionnelle utilise dans le domaine dentaire |
FR3032282B1 (fr) | 2015-02-03 | 2018-09-14 | Francois Duret | Dispositif de visualisation de l'interieur d'une bouche |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5550937A (en) * | 1992-11-23 | 1996-08-27 | Harris Corporation | Mechanism for registering digital images obtained from multiple sensors having diverse image collection geometries |
US5963664A (en) * | 1995-06-22 | 1999-10-05 | Sarnoff Corporation | Method and system for image combination using a parallax-based technique |
US5995681A (en) * | 1997-06-03 | 1999-11-30 | Harris Corporation | Adjustment of sensor geometry model parameters using digital imagery co-registration process to reduce errors in digital imagery geolocation data |
US20020113864A1 (en) * | 1996-08-16 | 2002-08-22 | Anko Borner | A sterocamera for digital photogrammetry |
US20020135468A1 (en) * | 1997-09-22 | 2002-09-26 | Donnelly Corporation, A Corporation Of The State Of Michigan | Vehicle imaging system with accessory control |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3012601C2 (de) * | 1980-04-01 | 1983-06-09 | Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt e.V., 5000 Köln | Verfahren und Einrichtung zur zeilenweisen Aufnahme von Gegenständen |
DE69635101T2 (de) * | 1995-11-01 | 2006-06-01 | Canon K.K. | Verfahren zur Extraktion von Gegenständen und dieses Verfahren verwendendes Bildaufnahmegerät |
KR100601958B1 (ko) * | 2004-07-15 | 2006-07-14 | 삼성전자주식회사 | 3차원 객체 인식을 위한 양안차 추정 방법 |
FR2879791B1 (fr) * | 2004-12-16 | 2007-03-16 | Cnes Epic | Procede de traitement d'images mettant en oeuvre le georeferencement automatique d'images issues d'un couple d'images pris dans le meme plan focal |
US8077964B2 (en) * | 2007-03-19 | 2011-12-13 | Sony Corporation | Two dimensional/three dimensional digital information acquisition and display device |
- 2004-03-26 FR FR0403143A patent/FR2868168B1/fr not_active Expired - Fee Related
- 2005-03-29 EP EP05744514A patent/EP1756771B1/de not_active Not-in-force
- 2005-03-29 WO PCT/FR2005/000752 patent/WO2005093655A1/fr active Application Filing
- 2005-03-29 DE DE602005025872T patent/DE602005025872D1/de active Active
- 2005-03-29 AT AT05744514T patent/ATE495511T1/de not_active IP Right Cessation
- 2005-03-29 US US10/594,257 patent/US20080056609A1/en not_active Abandoned
- 2006-09-26 IL IL178299A patent/IL178299A/en active IP Right Grant
- 2010-03-03 US US12/717,045 patent/US8064687B2/en not_active Expired - Fee Related
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8717414B2 (en) * | 2008-02-13 | 2014-05-06 | Samsung Electronics Co., Ltd. | Method and apparatus for matching color image and depth image |
US20090201384A1 (en) * | 2008-02-13 | 2009-08-13 | Samsung Electronics Co., Ltd. | Method and apparatus for matching color image and depth image |
US20110025829A1 (en) * | 2009-07-31 | 2011-02-03 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for selecting image capture positions to generate three-dimensional (3d) images |
US20110025825A1 (en) * | 2009-07-31 | 2011-02-03 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for creating three-dimensional (3d) images of a scene |
US12034906B2 (en) | 2009-07-31 | 2024-07-09 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene |
US11044458B2 (en) | 2009-07-31 | 2021-06-22 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene |
US8436893B2 (en) | 2009-07-31 | 2013-05-07 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for selecting image capture positions to generate three-dimensional (3D) images |
US9380292B2 (en) | 2009-07-31 | 2016-06-28 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene |
US8508580B2 (en) | 2009-07-31 | 2013-08-13 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for creating three-dimensional (3D) images of a scene |
US8810635B2 (en) | 2009-07-31 | 2014-08-19 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for selecting image capture positions to generate three-dimensional images |
US9344701B2 (en) | 2010-07-23 | 2016-05-17 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for identifying a rough depth map in a scene and for determining a stereo-base distance for three-dimensional (3D) content creation |
US9185388B2 (en) | 2010-11-03 | 2015-11-10 | 3Dmedia Corporation | Methods, systems, and computer program products for creating three-dimensional video sequences |
US8441520B2 (en) | 2010-12-27 | 2013-05-14 | 3Dmedia Corporation | Primary and auxiliary image capture devcies for image processing and related methods |
US10200671B2 (en) | 2010-12-27 | 2019-02-05 | 3Dmedia Corporation | Primary and auxiliary image capture devices for image processing and related methods |
US10911737B2 (en) | 2010-12-27 | 2021-02-02 | 3Dmedia Corporation | Primary and auxiliary image capture devices for image processing and related methods |
US11388385B2 (en) | 2010-12-27 | 2022-07-12 | 3Dmedia Corporation | Primary and auxiliary image capture devices for image processing and related methods |
US8274552B2 (en) | 2010-12-27 | 2012-09-25 | 3Dmedia Corporation | Primary and auxiliary image capture devices for image processing and related methods |
US20130094753A1 (en) * | 2011-10-18 | 2013-04-18 | Shane D. Voss | Filtering image data |
US9292927B2 (en) * | 2012-12-27 | 2016-03-22 | Intel Corporation | Adaptive support windows for stereoscopic image correlation |
CN111260623A (zh) * | 2020-01-14 | 2020-06-09 | 广东小天才科技有限公司 | 图片评价方法、装置、设备及存储介质 |
CN118567098A (zh) * | 2024-08-05 | 2024-08-30 | 浙江荷湖科技有限公司 | 一种自适应的亚像素精度光场数据校准方法 |
Also Published As
Publication number | Publication date |
---|---|
EP1756771A1 (de) | 2007-02-28 |
EP1756771B1 (de) | 2011-01-12 |
FR2868168B1 (fr) | 2006-09-15 |
IL178299A0 (en) | 2007-02-11 |
WO2005093655A1 (fr) | 2005-10-06 |
FR2868168A1 (fr) | 2005-09-30 |
DE602005025872D1 (de) | 2011-02-24 |
US8064687B2 (en) | 2011-11-22 |
IL178299A (en) | 2011-06-30 |
ATE495511T1 (de) | 2011-01-15 |
US20100239158A1 (en) | 2010-09-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8064687B2 (en) | Fine stereoscopic image matching and dedicated instrument having a low stereoscopic coefficient | |
US8471897B2 (en) | Method and camera for the real-time acquisition of visual information from three-dimensional scenes | |
US5613013A (en) | Glass patterns in image alignment and analysis | |
Campos et al. | A surface reconstruction method for in-detail underwater 3D optical mapping | |
Jordt-Sedlazeck et al. | Refractive structure-from-motion on underwater images | |
US8213707B2 (en) | System and method for 3D measurement and surface reconstruction | |
Gennery | Modelling the environment of an exploring vehicle by means of stereo vision | |
US20150304558A1 (en) | Method of 3d reconstruction and 3d panoramic mosaicing of a scene | |
Alexandrov et al. | Multiview shape‐from‐shading for planetary images | |
US6175648B1 (en) | Process for producing cartographic data by stereo vision | |
CN111429527A (zh) | 一种车载相机的外参自动标定方法及系统 | |
CN115909025A (zh) | 一种小天体表面采样点地形视觉自主检测识别方法 | |
Krsek et al. | Differential invariants as the base of triangulated surface registration | |
CN105352482A (zh) | 基于仿生复眼微透镜技术的3-3-2维目标检测方法及系统 | |
CN111222544B (zh) | 一种卫星颤振对相机成像影响的地面模拟测试系统 | |
Frobin et al. | Calibration and model reconstruction in analytical close-range stereophotogrammetry | |
CN109741389A (zh) | 一种基于区域基匹配的局部立体匹配方法 | |
CN114494039A (zh) | 一种水下高光谱推扫图像几何校正的方法 | |
Paparoditis et al. | 3D data acquisition from visible images | |
Menard et al. | Adaptive stereo matching in correlation scale-space | |
Klette et al. | On design and applications of cylindrical panoramas | |
Wu et al. | Photogrammetric processing of LROC NAC images for precision lunar topographic mapping | |
Manolache et al. | A mathematical model of a 3D-lenticular integral recording system | |
Srivastava et al. | Digital Elevation Model | |
Paar et al. | Cavity surface measuring system using stereo reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CENTRE NATIONAL D'ETUDES SPATIALES, FRANCE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROUGE, BERNARD;REEL/FRAME:019631/0699 Effective date: 20061025 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |