USRE43206E1 - Apparatus and method for providing panoramic images

Apparatus and method for providing panoramic images

Publication number
USRE43206E1
Authority
US
United States
Prior art keywords
images
feature points
edge
calculating
panoramic image
Prior art date
Legal status
Expired - Lifetime
Application number
US11/541,517
Inventor
Jun-Wei Hsieh
Cheng-Chin Chiang
Der-Lor Way
Current Assignee
Transpacific IP Ltd
Original Assignee
Transpacific IP Ltd
Priority date
Filing date
Publication date
Application filed by Transpacific IP Ltd filed Critical Transpacific IP Ltd
Priority to US11/541,517
Assigned to TRANSPACIFIC IP LTD. (Assignor: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE)
Application granted
Publication of USRE43206E1
Anticipated expiration
Status: Expired - Lifetime

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/403Edge-driven scaling; Edge-based scaling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • R n reveals a peak whenever a true edge exists and is suppressed by the multiplication process if a point at location (x,y) is not a true edge.
  • the number of scales for multiplication is chosen to be 2.
  • R_n(j, x, y) should be normalized as follows: R̄_n(j, x, y) = R_n(j, x, y)·√(MP(j)/RP_n(j)), where RP_n(j) = Σ_{(x,y)} |R_n(j, x, y)|².
  • an edge point is recognized as a candidate if its corresponding normalized edge correlation R̄_n(j, x, y) is larger than its corresponding modulus value M_{2^j}I(x, y).
  • the above mentioned process is equivalent to detecting an edge point with the strongest edge response in a local area.
  • M_{2^j}I(x, y) = max_{(x,y)∈N_p} M_{2^j}I(x, y), where N_p is the neighborhood of P(x, y) (step 34).
  • a set of feature points is extracted using wavelet transform (step 31 ).
  • N_{I_a} and N_{I_b} represent the number of elements in the sets of points FP_{I_a} and FP_{I_b}, respectively.
  • A(u) be the angle of an edge point u.
  • a standard method for estimating the orientation of an edge-based feature point u at scale 2^j can be expressed as follows: A(u) = Arg(W^1_{2^j}I(x, y) + i·W^2_{2^j}I(x, y)).
  • the above representation can be very sensitive to noise. Therefore, an edge tracking technique plus a line-fitting model is used to solve the noise problem (step 41 ).
  • N e be its neighborhood. Since P is an edge point, there should exist an edge line passing through it.
  • an edge line passing through P can be determined by searching in all the directions from P. All the edge points on the edge are then used as candidates for determining the orientation of the edge line.
  • the edge connection constraint and the direction consistency constraint are used to restrict the searching domain.
  • the edge connection constraint means that if N e contains another edge l but l does not pass P, all edge points in l will not be included in estimating the orientation of P. In certain cases, however, there will exist more than one edge line passing through P. In these cases, the first line detected is adopted to estimate the orientation. Let l 1 denote this line.
  • the direction consistency constraint means all the edge points along other edge lines whose orientations are inconsistent with l 1 are not included to estimate the orientation of P. In this way, a set of edge points can be selected and then used to estimate the orientation of P using a line-fitting model. In other embodiments, other edge tracking technique can also be applied to provide a better estimation.
  • step 14 of FIG. 1 is performed to eliminate false matches in advance, avoiding many unnecessary correlation calculations.
  • for a feature point p_i in FP_{I_a} and a feature point q_j in FP_{I_b}, if they form a good match (step 44), the following condition will be satisfied:
  • Condition 1. Adding this criterion significantly speeds up the search.
  • if p_i and q_j form a good match, the similarity degree between p_i and q_j should be large.
  • a cross-correlation which can be used to measure the similarity degree between p i and q j (step 46 ) and is defined as follows (step 32 ):
  • ū_i and ū_j are the local means of p_i and q_j, respectively; σ_i and σ_j are the local variances of p_i and q_j, respectively; and (2M+1)² represents the area of the matching window.
  • a pair ⁇ p i q j ⁇ is qualified as a possible matching pair if the following conditions are satisfied (step 33 ):
  • Condition 2 means that given a feature point p_i, it is desired to find a point q_j ∈ FP_{I_b} such that the value of C_{I_a I_b}(p_i, q_j) is maximized over all points q_k ∈ FP_{I_b}.
  • Condition 3 means that given a feature point q_j, it is desired to find a point p_i ∈ FP_{I_a} such that the value of C_{I_a I_b}(p_i, q_j) is maximized. If only Condition 2 is used, it is possible that several points p_i match a single point q_j.
  • Condition 4 forces the value of C_{I_a I_b} of a matching pair to be larger than a threshold.
  • the orientation constraint is checked first. If it is not satisfied, it is not necessary to check Conditions 2, 3, and 4. In this way, the cross-correlation measure C_{I_a I_b}, which is the time bottleneck of the whole process, has to be calculated for only a few pairs.
  • a set of reliable matching pairs is obtained through relaxation.
  • Ne I a (p k ) and Ne I b (q k ) denote the neighbors of p k and q k within an area of radius R, respectively.
  • the proposed method is based on a concept that if ⁇ p i q i ⁇ and ⁇ p j q j ⁇ provide a pair of good matches, the distance between p i and p j should be similar to the one between q i and q j .
  • the method includes an “angle consistency” constraint (step 53 ) within the first iteration of the relaxation process to further eliminate impossible matching pairs. That is, if ⁇ p i q i ⁇ and ⁇ p j q j ⁇ are well matched, the angle between ⁇ right arrow over (p i p j ) ⁇ and ⁇ right arrow over (q i q j ) ⁇ must be close to zero.
  • a counter CA i is used to record the number of matches ⁇ p k q k ⁇ in MP I a ,I b where the angle between ⁇ right arrow over (p i p k ) ⁇ and ⁇ right arrow over (q i q k ) ⁇ is less than a predefined threshold ⁇ .
  • according to CA_i, the elements of MP_{I_a,I_b} are sorted in increasing order. The first Q% of matches in MP_{I_a,I_b} are considered to be impossible matches.
  • a set of reliable matches is obtained.
  • the method uses a “voting” concept to derive a desired offset from the set of reliable matching pairs.
  • the component T_y of T is generally affected more than T_x. Therefore, the following method is proposed to refine and correct the solution of T_y for the y-component.
  • ⁇ u v ⁇ be the matching pair of CP I a ,I b having the highest quality value.
  • the refined offset can be found by searching the local neighborhood of the point (u + T̄) in the other image I_b using the correlation technique. However, if there is little texture information within the local neighborhood of the point u, the above approach will not necessarily provide a satisfactory solution.
  • step 17 of FIG. 1 is used to eliminate such intensity discontinuities.
  • the scheme used in step 17 can be divided into two stages. The first stage adjusts the intensities of two adjacent images such that their intensities are similar. The second stage blends their image intensities according to a distance measure such that the final composite image appears smooth. Assume I_a and I_b are two adjacent images with widths w_a and w_b, respectively. Let ΔI be the average intensity difference over the overlapping area:
  • ΔI = (1/|A|) Σ_{i∈A} (I_b(q(i)) − I_a(p(i))),  (7)
  • A is the overlapping area of I a and I b
  • |A| is the number of pixels in A
  • p(i) is a pixel in I a
  • q(i) is its corresponding pixel in I b .
  • the gap of average intensity between I a and I b is about ⁇ I.
  • the first stage is used to adjust the intensities of I a and I b as follows:
  • I_a(p(x, y)) = I_a(p(x, y)) + x·ΔI/(2w_a),  (8) and
  • I_b(q(x, y)) = I_b(q(x, y)) + (x − w_b)·ΔI/(2w_b).  (9)
  • the intensities of I a and I b in FIG. 6(a) will be gradually changed to approach the intensity line EF shown in FIG. 6(b) , thereby bringing the intensities between I a and I b closer.
  • a blending technique is then applied.
  • the second stage uses a ray-casting method to blend different pixel intensities together.
  • p i is a pixel in I a
  • q_i is its corresponding pixel in I_b
  • l_a and l_b are two boundary lines in I_a and I_b, respectively.
  • I(r_i) = (d_b^t·I_a(p_i) + d_a^t·I_b(q_i)) / (d_a^t + d_b^t)  (10)
  • d_a is the distance between p_i and l_a
  • d_b is the distance between q_i and l_b
  • t is an adjustable parameter.
  • the blending width can be chosen small such that the so-called ghostlike effect is significantly reduced. In one preferred embodiment, the blending width is chosen as one-third of the original width of the overlapping area.
  • one preferred architecture for implementing the real-time stitcher apparatus includes input devices 60 (e.g., digital cameras or scanners) to acquire a series of panoramic images. The panoramic images are then stored in external storage 62, such as hard disks, for further processing, or provided directly to one or more microprocessors 64 for stitching.
  • the microprocessors 64 perform the stitching including warping, feature extraction, edge orientation estimation, correlation calculation, relaxation, rendering and blending the final panoramic images, etc. All the temporary data are stored in the system RAM memory to speed up stitching. Finally, the stitching result is sent to the display engine for displaying. In many applications this architecture can be implemented using a general personal computer.
  • in FIG. 10, the stitched panoramic image obtained by the proposed method is shown.
  • the proposed method can significantly minimize ghost-like effects.
  • FIG. 11 shows two adjacent images with moving objects.
  • the posture of the man in (a) is clearly different from the one in (b).
  • large intensity differences exist between them.
  • the blending width should be chosen to be large.
  • the ghost-like effect will appear in the final composite image (see FIG. 12(a) ).
  • the blending width can be chosen smaller, and the ghost-like effect is therefore significantly lessened; the quality of FIG. 12(b) is clearly improved over that shown in FIG. 12(a).
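The two-stage intensity adjustment and blending of Eqs. (7)-(10) can be sketched as follows. This is an illustrative sketch only: for simplicity it applies a uniform half-shift of ΔI to each image rather than the per-column ramps of Eqs. (8) and (9), and the function names are assumptions, not part of the patent.

```python
import numpy as np

def blend_overlap(col_a, col_b, x, width, t=1.0):
    """Blend one overlapping column at position x in [0, width), using the
    distance weights of Eq. (10): d_a, d_b are distances to the two
    boundary lines of the overlap."""
    d_a = x + 1            # distance from the I_a-side boundary line
    d_b = width - x        # distance from the I_b-side boundary line
    return (d_b**t * col_a + d_a**t * col_b) / (d_a**t + d_b**t)

def merge_pair(img_a, img_b, overlap):
    """Stage 1: shift intensities by the mean difference over the overlap
    (Eq. (7)), here as a simplified global half-shift instead of the
    linear ramps of Eqs. (8)-(9). Stage 2: distance-weighted blending."""
    a = img_a.astype(float)
    b = img_b.astype(float)
    ov_a = a[:, -overlap:]
    ov_b = b[:, :overlap]
    delta = (ov_b - ov_a).mean()   # Eq. (7): average intensity difference
    a += delta / 2.0               # raise I_a toward I_b
    b -= delta / 2.0               # lower I_b toward I_a
    blended = np.stack(
        [blend_overlap(a[:, a.shape[1] - overlap + x], b[:, x], x, overlap)
         for x in range(overlap)],
        axis=1)
    return np.concatenate([a[:, :-overlap], blended, b[:, overlap:]], axis=1)
```

With two constant images of intensity 10 and 20, the adjustment meets in the middle and the blend is seamless at 15 everywhere.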

Abstract

A method and system for merging a pair of images to form a seamless panoramic image include the following steps. A set of feature points is extracted along the edges of the images, each feature point defining an edge orientation. A set of registration parameters is obtained by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images. A seamless panoramic image is rendered using the first and second images with the set of registration parameters.

Description

FIELD OF THE INVENTION
The present invention relates to an apparatus, algorithm, and method for stitching different pieces of images of a scene into a panoramic environment map.
BACKGROUND
The most common way to electronically represent the real world is with image data. Unlike traditional graph-based systems, there are systems which use panoramic images to construct a virtual world. The major advantage of a system which uses panoramic images is that very vivid and photo-realistic rendering results can be obtained even when using PCs. In addition, the cost of constructing the virtual world is independent of scene complexity. In such systems, panoramic images are stitched together into a panoramic map from several individual images which are acquired by rotating a camera horizontally or vertically. This panoramic map can be used in different applications such as movie special effects, the creation of virtual reality, or games. A typical problem is how to stitch the different pieces of a scene into a larger picture or map. One approach to address this problem is to manually establish correspondences between images to solve unknown parameters of their relative transformation. Because manual methods are tedious for large applications, automatic schemes are preferably used for generating a seamless panoramic image from different pieces of images.
One proposed approach uses a nonlinear minimization algorithm for automatically stitching panoramic images by minimizing the discrepancy in intensities between images. This approach has the advantage of not requiring easily identifiable features. However, this technique does not guarantee finding the global minimum if the selection of starting points is not proper. Further, because the optimization process is time-consuming, the approach is inefficient. In this invention, the domain of images under consideration is panoramic images.
SUMMARY OF THE INVENTION
The invention allows users to generate panoramic images from a sequence of images acquired by a camera rotated about its optical center. In general, the invention combines feature extraction, correlation, and relaxation techniques to get a number of reliable and robust matching pairs used to derive registration parameters. Based on the obtained registration parameters, different pieces of consecutive images can be stitched together to obtain a seamless panoramic image.
In a first aspect, a method of merging a pair of images to form a seamless panoramic image includes the following steps. A set of feature points along the edges of the images is extracted, each feature point defining an edge orientation. A set of registration parameters is obtained by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images. A seamless panoramic image is rendered using the first and second images with the set of registration parameters.
The invention provides a feature-based approach for automatically stitching panoramic images acquired by a rotated camera and obtaining a set of matching pairs from a set of feature points for registration. Since the feature points are extracted along the edges, each feature point specifies an edge orientation. Because the orientation difference between two panoramic images is relatively small, the difference between edge orientations of two feature points is also small if they are good matches. Based on this assumption, edge information of feature points can be used to eliminate in advance many false matches by checking their orientation difference. Moreover, many unnecessary calculations involving cross-correlation can be screened in advance, thereby significantly reducing the search time needed for obtaining correct matching pairs. After checking, by calculating the value of correlation of the remaining matching pairs, a set of possible matches can be selected with a predefined threshold. The set of possible matches is further verified through a relaxation scheme by calculating the quality of their matches. Once all of the correct matching pairs are found, they are then used to derive registration parameters. In this invention, an iterative scheme is applied to increase the reliability of the matching results. Since only three iterations or fewer are needed and only a few feature points are involved in the matching pairs, the whole procedure can be accomplished very efficiently. Also, as discussed above, because the orientation difference of two feature points is checked in advance (before matching), many calculations involving cross-correlation are not required and the efficiency of stitching is significantly improved. Compared with conventional algorithms, the proposed scheme offers improved efficiency and reliability for stitching images.
Embodiments of this aspect of the invention may include one or more of the following features. In one embodiment, a set of feature points are first extracted through wavelet formations. Among other advantages, the invention uses wavelet transforms to obtain a number of feature points with edge orientations. Such edge information can speed up the entire registration process by eliminating many impossible matches in advance and avoiding many unnecessary calculations of correlation. The method determines a number of reliable and robust matching pairs through relaxation. The method also measures the quality of a matching pair, imposes angle consistency constraint for improving the robustness of registration, and uses a voting concept to get the desired solution from the set of final matching results.
In other embodiments, the method forms the final panoramic image with the help of the registration results. In particular, the method adjusts the intensity differences between consecutive input images and blends the intensities of adjacent images to obtain a seamless panoramic image. The final panoramic images can then be used to build a virtual world.
Still other embodiments may include one or more of the following features:
For example, the method selects a number of feature points through wavelet transforms. Each feature point is associated with an edge orientation so that the speed of the registration process is increased.
The method uses an angle constraint to construct a set of matching pairs, which are used to obtain reliable matching results through a relaxation and a voting technique. The set of matching results are then used to form the final seamless panoramic image.
Constructing an initial set of matching pairs for registration includes comparing the edge orientation differences of feature points in one image and its corresponding feature points in another, calculating the values of correlation of each possible matching pair, and thresholding them with a predefined threshold.
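The construction of the initial matching set can be sketched in Python. This is a minimal illustrative sketch, not the patent's exact procedure: the function names, window size M, and both thresholds are assumptions, and feature points are assumed to lie at least M pixels from the image border. The edge-orientation difference is checked first, so most pairs never reach the correlation step.

```python
import numpy as np

def ncc(img_a, img_b, p, q, M=4):
    """Normalized cross-correlation between (2M+1)^2 windows centered on
    feature point p in img_a and feature point q in img_b."""
    ax, ay = p
    bx, by = q
    wa = img_a[ay - M:ay + M + 1, ax - M:ax + M + 1].astype(float)
    wb = img_b[by - M:by + M + 1, bx - M:bx + M + 1].astype(float)
    wa -= wa.mean()
    wb -= wb.mean()
    denom = np.sqrt((wa**2).sum() * (wb**2).sum())
    return 0.0 if denom == 0 else float((wa * wb).sum() / denom)

def initial_matches(img_a, img_b, feats_a, feats_b,
                    angle_thresh=0.3, corr_thresh=0.8):
    """feats_*: list of ((x, y), orientation) pairs. The orientation gate
    is applied before correlation, eliminating impossible pairs cheaply;
    only the best correlation above the threshold is kept per point."""
    matches = []
    for p, ang_p in feats_a:
        best, best_c = None, corr_thresh
        for q, ang_q in feats_b:
            if abs(ang_p - ang_q) > angle_thresh:
                continue  # orientation pre-check: skip impossible pairs
            c = ncc(img_a, img_b, p, q)
            if c > best_c:
                best, best_c = q, c
        if best is not None:
            matches.append((p, best, best_c))
    return matches
```

Matching an image against itself recovers each feature point as its own best match, since identical windows correlate perfectly.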
Getting reliable matching results through relaxation and a voting technique includes calculating the quality of a matching pair, imposing angle consistency constraint to filter out impossible matching pairs, updating matching results through relaxation, and using the voting technique to obtain the reliable registration parameters. In addition, it refines the final registration results by using the correlation technique with a proper starting point.
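The voting step can be illustrated with a short sketch. This is an assumption-laden simplification: the relaxation and angle-consistency stages are omitted, and `vote_offset` is a hypothetical helper. Each candidate pair votes for a translation; the most supported (quantized) offset wins and is refined by averaging its supporters, which makes the result robust to outlier pairs.

```python
from collections import Counter

def vote_offset(matches, bin_size=2):
    """Each matching pair (p, q) votes for the translation q - p; the
    most supported quantized offset is taken as the registration, then
    refined by averaging the exact offsets of the winning bin."""
    votes = Counter()
    for (px, py), (qx, qy) in matches:
        key = (round((qx - px) / bin_size), round((qy - py) / bin_size))
        votes[key] += 1
    (bx, by), _ = votes.most_common(1)[0]
    xs, ys = [], []
    for (px, py), (qx, qy) in matches:
        if (round((qx - px) / bin_size), round((qy - py) / bin_size)) == (bx, by):
            xs.append(qx - px)
            ys.append(qy - py)
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

Three consistent pairs outvote one outlier, so the returned offset is the true translation.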
Forming the final panoramic images includes dynamically adjusting and properly blending the intensity differences between adjacent images.
In another aspect, the invention features a system for merging pairs of images to form a panoramic image. The system includes an imaging device which, in operation, acquires a series of images, a storage for storing a series of images, a memory which stores computer code, and at least one processor which executes computer code to extract a set of feature points along the edges of the images, each feature point defining an edge orientation and to obtain a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images, and to render a seamless panoramic image using the first and second images with the set of registration parameters.
Other advantages and features of the invention will become apparent from the following description, including the claims and the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an approach for generating a virtual panoramic world.
FIG. 2 shows the geometry for cylindrical mapping.
FIG. 3 is a flowchart illustrating the steps used to extract feature points from images using wavelet transform.
FIG. 4 is a flowchart illustrating the steps used to obtain an initial set of matches using edge orientations and correlation values.
FIG. 5 is a flowchart illustrating the relaxation procedure used to obtain a set of reliable matching pairs.
FIG. 6 illustrates adjusting intensities of Ia and Ib such that their intensity differences are small.
FIG. 7 illustrates an example for a blending technique.
FIG. 8 shows an architecture for implementing the present invention.
FIG. 9 shows a series of input images for stitching.
FIG. 10 shows the stitching result obtained by the proposed stitcher.
FIG. 11 shows images used to demonstrate the ghost-like effect.
FIG. 12 shows how the ghostlike effect is removed: (a) with a wider blending width and (b) with a narrower blending width.
DESCRIPTION
Referring first to the flow diagram of FIG. 1, a method of the general steps of generating a seamless panoramic image from different pieces of an image is shown.
Input Image Warping
In general, it is difficult to seamlessly stitch two adjacent images together to form a panoramic image due to perspective distortion introduced by a camera. To remove the effects of this distortion, these images are preferably reprojected onto a simple geometry, e.g., a cube, a cylinder, or a sphere. In many applications, a cylindrical geometry is preferable since its associated geometrical transformation is simple. In this example, the cylindrical geometry is used. FIG. 2 illustrates the relationship between the cylindrical surface ▭pqrs and film plane ▭PQRS of the camera. Note that the plane ▭PQRS is tangent to the cylindrical surface ▭pqrs. Let O denote the optical center, C the center of image plane, f the focal length and r the radius of the cylinder. Further, assume that P(x, y) is a pixel in the image plane and p(u, v) is its corresponding pixel in the cylindrical map. Using FIG. 2, the coordinates of p (u, v) can be obtained as follows:
u = r·∠COB = r·tan^-1(CB/OC) = r·tan^-1(x/f),   (1)
and
v = r·(PB/OB) = r·y/√(x^2 + f^2).   (2)
Moreover, since the radius r is equal to f, Equations (1) and (2) can be rewritten as follows:
u = f·tan^-1(x/f),   (3)
and
v = f·y/√(x^2 + f^2).   (4)
Based on Eqs. (3) and (4), input images are provided and then (step 10) warped into a cylindrical map for further registration to construct a complete panoramic scene (step 11).
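As a concrete illustration of Eqs. (3) and (4), the per-pixel warp can be sketched as follows. This is a minimal sketch, not the patent's implementation; the function name and the use of the vertical image coordinate y in Eq. (4) are assumptions.

```python
import math

def to_cylindrical(x, y, f):
    """Map an image-plane pixel (x, y), measured from the image center C,
    onto the cylindrical surface with radius r = f (Eqs. (3) and (4))."""
    u = f * math.atan(x / f)              # u = f * tan^-1(x / f)
    v = f * y / math.sqrt(x * x + f * f)  # v = f * y / sqrt(x^2 + f^2)
    return u, v

# The image center maps to the cylinder origin; points farther from the
# center are compressed toward the cylinder.
print(to_cylindrical(0.0, 0.0, 500.0))  # (0.0, 0.0)
```

In practice this mapping (or its inverse) is applied to every pixel of each input image before registration.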
Feature Extraction Using Wavelet Transform
Referring again to FIG. 1 and FIG. 3, once the input images have been warped, useful features required for registration are extracted (step 12). This process advantageously uses a wavelet transform for feature extraction (step 31). In particular, all of the feature points are detected along the edges. First, let S(x, y) be a 2-D smoothing function. Two wavelets, ψ1(x, y) and ψ2(x, y) are the partial derivatives of the smoothing function S(x,y) in the x and y directions, respectively, where
ψ^1(x, y) = ∂S(x, y)/∂x and ψ^2(x, y) = ∂S(x, y)/∂y.
Let ψ^1_{2^j}(x, y) = (1/4^j)·ψ^1(x/2^j, y/2^j) and ψ^2_{2^j}(x, y) = (1/4^j)·ψ^2(x/2^j, y/2^j).
At each scale 2j, the 2-D wavelet transform of a function I(x,y) in L2(R2) can be decomposed into two independent directions as follows:
W^1_{2^j}I(x, y) = I∗ψ^1_{2^j}(x, y) and W^2_{2^j}I(x, y) = I∗ψ^2_{2^j}(x, y).
Basically, these two components are equivalent to the gradients of I(x, y) smoothed by S(x, y) at scale 2^j in the x and y directions. At a specific scale s = 2^j, the modulus of the gradient vector of I(x, y) can be calculated as follows:
M_{2^j}I(x, y) = √(|W^1_{2^j}I(x, y)|^2 + |W^2_{2^j}I(x, y)|^2).
If the local maxima of M_{2^j}I(x, y) are located and thresholded with a preset value, the edge points of I(x, y) at scale 2^j can be detected. Since we are interested in specific feature points for scene registration, additional constraints have to be introduced. In general, noise is the main cause of false detection of edge points. In order to suppress the effect of noise, a criterion called edge correlation is introduced (step 32):
R_n(j, x, y) = ∏_{i=0}^{n-1} M_{2^{j+i}}I(x, y),
where n is a positive integer indicating the number of scales involved in the multiplication, and j represents the initial scale for edge correlation. R_n reveals a peak whenever a true edge exists and is suppressed by the multiplication if the point at location (x, y) is not a true edge. Thus, using R_n(j, x, y), the noise in an image can be suppressed while the true edges are retained. In one embodiment of the invention, the number of scales used for multiplication is chosen to be 2. In order to conserve the energy level, R_n(j, x, y) should be normalized as follows:
R̄_n(j, x, y) = R_n(j, x, y)·√(MP(j)/RP_n(j)),
where MP(j) = Σ_{x,y} M_{2^j}I(x, y)^2 and RP_n(j) = Σ_{x,y} R_n(j, x, y)^2.
During the feature point selection process, an edge point is recognized as a candidate if its corresponding normalized edge correlation R̄_2(j, x, y) is larger than its corresponding modulus value. Basically, the above process is equivalent to detecting the edge point with the strongest edge response in a local area. The three conditions used to judge whether a point P(x, y) is a feature point are as follows:
    • Condition 1: P(x, y) must be an edge point of the image I(x, y); that is, P(x, y) is a local maximum of M_{2^j}I(x, y), and M_{2^j}I(x, y) > a threshold (step 34);
    • Condition 2: R̄_2(1, x, y) > M_{2^j}I(x, y) (step 36);
    • Condition 3: M_{2^j}I(x, y) = max_{(x', y') ∈ N_P} {M_{2^j}I(x', y')}, where N_P is the neighborhood of P(x, y) (step 34).
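The three selection conditions can be sketched as follows. This is an illustrative stand-in rather than the patent's wavelet code: box smoothing replaces the smoothing function S(x, y), central differences replace the wavelet convolutions, the product of the moduli at two scales stands in for the (unnormalized) edge correlation R_2, and the threshold value is arbitrary.

```python
import math

def gradient_modulus(img, scale):
    """Gradient magnitude of a smoothed image; a crude stand-in for the
    wavelet modulus M_{2^j}I, with box smoothing whose radius grows with
    the scale (not the patent's smoothing function S)."""
    h, w = len(img), len(img[0])
    r = scale
    def smooth(x, y):
        vals = [img[max(0, min(h - 1, y + dy))][max(0, min(w - 1, x + dx))]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
        return sum(vals) / len(vals)
    M = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (smooth(x + 1, y) - smooth(x - 1, y)) / 2.0
            gy = (smooth(x, y + 1) - smooth(x, y - 1)) / 2.0
            M[y][x] = math.hypot(gx, gy)
    return M

def feature_points(img, threshold=1.0):
    """Keep points that pass all three conditions: strong edge response,
    local maximum of the modulus, and a two-scale product (standing in
    for the edge correlation R_2) that dominates the modulus."""
    M1 = gradient_modulus(img, 1)
    M2 = gradient_modulus(img, 2)
    h, w = len(img), len(img[0])
    pts = []
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            corr = M1[y][x] * M2[y][x]  # unnormalized two-scale product
            local = [M1[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            if M1[y][x] > threshold and M1[y][x] == max(local) and corr > M1[y][x]:
                pts.append((x, y))
    return pts

# A vertical step edge yields candidates only along the step columns.
img = [[0.0] * 5 + [10.0] * 5 for _ in range(10)]
pts = feature_points(img)
```

On this synthetic step edge, every surviving candidate lies on one of the two columns straddling the step, illustrating how the modulus maximum and correlation conditions localize features along edges.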
Obtaining Initial Matches Using Edge Orientations and Cross Correlation
Referring to FIGS. 3 and 4, as part of step 12 of FIG. 1, a set of feature points is extracted using the wavelet transform (step 31). Let FP_Ia = {pi = (px^i, py^i)} and FP_Ib = {qi = (qx^i, qy^i)} represent two sets of feature points extracted from two partially overlapping images Ia and Ib, respectively. Assume that N_Ia and N_Ib represent the number of elements in FP_Ia and FP_Ib, respectively. Let A(u) be the angle of an edge point u. A standard method for estimating the orientation of an edge-based feature point u at scale 2^j can be expressed as follows:
Arg(W^1_{2^j}I(x, y) + i·W^2_{2^j}I(x, y)).
However, the above representation can be very sensitive to noise. Therefore, an edge tracking technique plus a line-fitting model is used to solve the noise problem (step 41).
Let P be a feature point and Ne be its neighborhood. Since P is an edge point, there should exist an edge line passing through it. By considering P as a bridge point, an edge line passing through P can be determined by searching in all directions from P. All the edge points on this edge are then used as candidates for determining the orientation of the edge line. During the searching process, the edge connection constraint and the direction consistency constraint are used to restrict the search domain. The edge connection constraint means that if Ne contains another edge line l that does not pass through P, none of the edge points on l are included in estimating the orientation of P. In certain cases, however, more than one edge line will pass through P. In these cases, the first line detected, denoted l1, is adopted to estimate the orientation. The direction consistency constraint means that edge points along other edge lines whose orientations are inconsistent with l1 are not included in estimating the orientation of P. In this way, a set of edge points can be selected and then used to estimate the orientation of P with a line-fitting model. In other embodiments, other edge tracking techniques can also be applied to provide a better estimation.
After the orientation is estimated, each feature point u is associated with an edge orientation A(u). For a feature point pi in the set FP_Ia and qj in the set FP_Ib, the orientation difference between them is calculated as follows (step 42):
θij=A(qj)−A(pi).
In fact, if pi and qj form a good match, the value of θij will be small since the orientation of image Ia is similar to that of Ib. Assuming this is the case, step 14 (FIG. 1) is performed to eliminate clearly false matches in advance, avoiding many unnecessary correlation calculations.
For a feature point pi in FPI a and a feature point qj in FPI b , if they form a good match (step 44), the following condition will be satisfied:
Condition 1: |A(pi) − A(qj)| < 10°.
Adding this criterion significantly speeds up the search. On the other hand, if pi and qj form a good match, the similarity degree between pi and qj should be large. A cross-correlation, which can be used to measure the similarity degree between pi and qj (step 46), is defined as follows (step 32):
C(pi; qj) = (1/(σi·σj·(2M+1)^2)) Σ_{x,y=-M}^{M} [Ia(x + px^i, y + py^i) − ui]·[Ib(x + qx^j, y + qy^j) − uj],   (5)
where ui and uj are the local means of pi and qj, respectively; σi and σj are the local variances of pi and qj, respectively; and (2M+1)^2 represents the area of the matching window. Based on this correlation measure, a pair {pi ↔ qj} is qualified as a possible matching pair if the following conditions are satisfied (step 33):
Condition 2: C_IaIb(pi; qj) = max_{qk ∈ FP_Ib} C(pi; qk),
Condition 3: C_IaIb(pi; qj) = max_{pk ∈ FP_Ia} C(pk; qj),
and
Condition 4: C_IaIb(pi; qj) > Tc, where Tc = 0.65 (step 35).
Condition 2 means that, given a feature point pi, it is desired to find a point qj ∈ FP_Ib such that the value of C_IaIb(pi; qj) is maximized over all points qk ∈ FP_Ib. Condition 3 means that, given a feature point qj, it is desired to find a point pi ∈ FP_Ia such that the value of C_IaIb(pi; qj) is maximized. If only Condition 2 is used, it is possible that several points pi match a single point qj. Conversely, if only Condition 3 is used, several points qj may match a single point pi. As for Condition 4, it forces the value of C_IaIb of a matching pair to be larger than a threshold. In a preferred implementation, the orientation constraint is checked first. If that constraint is not satisfied, it is not necessary to check Conditions 2, 3, and 4. In this way, only a few pairs require the cross-correlation measure C_IaIb, which is the time bottleneck of the whole process.
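Conditions 1-4 can be sketched as follows, assuming the edge orientations are already available for each feature point; the window size M = 1, the data layout, and the helper names are illustrative assumptions rather than the patent's implementation.

```python
import math

def ncc(Ia, Ib, p, q, M=1):
    """Normalized cross-correlation of Eq. (5) over a (2M+1)^2 window
    around p = (x, y) in Ia and q = (x, y) in Ib."""
    wa = [Ia[p[1] + dy][p[0] + dx] for dy in range(-M, M + 1) for dx in range(-M, M + 1)]
    wb = [Ib[q[1] + dy][q[0] + dx] for dy in range(-M, M + 1) for dx in range(-M, M + 1)]
    n = len(wa)
    ua, ub = sum(wa) / n, sum(wb) / n
    sa = math.sqrt(sum((v - ua) ** 2 for v in wa) / n)
    sb = math.sqrt(sum((v - ub) ** 2 for v in wb) / n)
    if sa == 0 or sb == 0:
        return 0.0  # textureless window: no reliable correlation
    return sum((a - ua) * (b - ub) for a, b in zip(wa, wb)) / (n * sa * sb)

def initial_matches(Ia, Ib, fa, fb, Tc=0.65):
    """fa, fb: lists of (point, edge_orientation_in_degrees).
    Conditions 1-4: orientation gap < 10 deg, mutual best NCC, NCC > Tc."""
    # Condition 1 is checked first so that the NCC (the time bottleneck)
    # is computed for only a few surviving pairs.
    cand = {}
    for i, (p, ap) in enumerate(fa):
        for j, (q, aq) in enumerate(fb):
            if abs(ap - aq) < 10.0:
                cand[(i, j)] = ncc(Ia, Ib, p, q)
    matches = []
    for (i, j), c in cand.items():
        best_for_p = max(cc for (ii, _), cc in cand.items() if ii == i)
        best_for_q = max(cc for (_, jj), cc in cand.items() if jj == j)
        if c >= best_for_p and c >= best_for_q and c > Tc:  # Conditions 2-4
            matches.append((fa[i][0], fb[j][0], c))
    return matches
```

A pair survives only when it is the mutual best correlation for both endpoints, which prevents several points in one image from claiming the same point in the other.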
Eliminating False Matches through Relaxation
Referring to FIGS. 1 and 5, a set of reliable matching pairs is obtained through relaxation. Let MP_IaIb = {pi ↔ qi}, i = 1, 2, ..., be the set of matching pairs which satisfy Conditions 1, 2, 3, and 4 above, where pi is a point in image Ia(x, y) and qi is another point in image Ib(x, y). Let Ne_Ia(pk) and Ne_Ib(qk) denote the neighbors of pk and qk within an area of radius R, respectively. Assume that NP_piqi = {nk^1 ↔ nk^2}, k = 1, 2, ..., is the set of matching pairs where nk^1 ∈ Ne_Ia(pi), nk^2 ∈ Ne_Ib(qi), and all elements of NP_piqi belong to MP_IaIb. The proposed method is based on the concept that if {pi ↔ qi} and {pj ↔ qj} are both good matches, the distance between pi and pj should be similar to the distance between qi and qj. Based on this assumption, the quality of a matching pair {pi ↔ qi} can be measured according to how many matches {nj^1 ↔ nj^2} in NP_piqi have a distance d(pi, nj^1) similar to the distance d(qi, nj^2), where d(ui, uj) = ||ui − uj||, the Euclidean distance between two points ui and uj. With this concept, the measure of the quality of a match {pi ↔ qi} is defined as follows:

G_IaIb(i) = Σ_{nk^1 ↔ nk^2 ∈ NP_piqi} r(i, k)/(1 + dist(i, k)),   (6)
where dist(i, k) = [d(pi, nk^1) + d(qi, nk^2)]/2,

r(i, k) = e^{−u(i,k)/T1} if u(i, k) < T2, and r(i, k) = 0 otherwise,

with two predefined thresholds T1 and T2, and u(i, k) = |d(pi, nk^1) − d(qi, nk^2)|/dist(i, k).
The contribution of a pair {nk^1 ↔ nk^2} in NP_piqi decreases monotonically with the value of dist(i, k). Further, if the value of u(i, k) is larger than the threshold T2, the contribution of {nk^1 ↔ nk^2} is set to zero.
Referring to FIG. 5, after the quality of match of each pair {pi ↔ qi} in MP_IaIb is computed (step 51), the relative quality value G_IaIb(i) of each pair is obtained to further eliminate false matches (step 51). Now, based on the quality value of each candidate match, a relaxation technique is used to eliminate false candidates for further registration. If we define the energy function as follows:

F = Σ_{pi ↔ qi ∈ MP_IaIb} G_IaIb(i),

then the relaxation procedure can be formulated as follows:
 Iterate {
  - Compute the quality for each candidate match.
  - Choose the best possible candidates for minimizing F
    according to the quality value G_IaIb(i).
 } until F converges.

There are several strategies for updating the matching candidates. In one application, an update strategy referred to here as "some-looser-take-nothing" is used to update the matching candidates (step 52). First, according to the quality value G_IaIb(i), the elements of MP_IaIb are sorted in increasing order (step 52). Then, a predetermined percentage Q% of the matches are eliminated as impossible matches, and the remaining (100 − Q)% of the matches are selected as potential matches for further relaxation (step 54). In our implementation, Q is set to 25. Three iterations are generally sufficient for achieving relaxation (step 53).
On the other hand, in order to make the matching results more reliable, the method includes an "angle consistency" constraint (step 53) within the first iteration of the relaxation process to further eliminate impossible matching pairs. That is, if {pi ↔ qi} and {pj ↔ qj} are well matched, the angle between the vectors pipj and qiqj must be close to zero. Accordingly, during the first iteration, for each element {pi ↔ qi} in MP_IaIb, a counter CAi is used to record the number of matches {pk ↔ qk} in MP_IaIb for which the angle between pipk and qiqk is less than a predefined threshold θ. According to the value of CAi, the elements of MP_IaIb are sorted in increasing order, and the first Q% of matches in MP_IaIb are considered to be impossible matches.
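One reading of the quality measure of Eq. (6) and the "some-looser-take-nothing" update can be sketched as follows; the neighborhood radius R, the thresholds T1 and T2, and the exponential form of r(i, k) are illustrative assumptions.

```python
import math

def match_quality(matches, i, R=50.0, T1=0.3, T2=0.6):
    """G(i) of Eq. (6): support that neighboring matches lend to match i
    when their in-image distances agree. matches: list of (p, q) pairs."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    pi, qi = matches[i]
    G = 0.0
    for k, (nk1, nk2) in enumerate(matches):
        if k == i or d(pi, nk1) > R or d(qi, nk2) > R:
            continue  # only neighbors within radius R contribute
        dist = (d(pi, nk1) + d(qi, nk2)) / 2.0
        if dist == 0:
            continue
        u = abs(d(pi, nk1) - d(qi, nk2)) / dist
        r = math.exp(-u / T1) if u < T2 else 0.0  # inconsistent pairs give 0
        G += r / (1.0 + dist)
    return G

def some_looser_take_nothing(matches, Q=25):
    """One relaxation pass: sort by quality and drop the worst Q%."""
    scored = sorted(range(len(matches)), key=lambda i: match_quality(matches, i))
    keep = scored[len(matches) * Q // 100:]
    return [matches[i] for i in sorted(keep)]
```

A match supported by many distance-consistent neighbors survives, while an isolated or geometrically inconsistent match receives a low G(i) and is discarded on the next pass.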
Obtaining the Desired Offset from Matches
After applying the relaxation process, a set of reliable matches is obtained. Referring to FIG. 1, the method uses a "voting" concept to derive a desired offset from the set of reliable matching pairs. For example, assume that this set is CP_IaIb = {ui ↔ vi}, i = 1, 2, ..., Ne, where Ne is the total number of elements in CP_IaIb. In general, the 2-D point sets {ui} and {vi} satisfy the following relation:

vi = ui + T, for i = 1, 2, 3, ..., Ne,

where T is the desired solution. However, in real cases, different pairs {ui ↔ vi} will lead to different offsets Ti. Therefore, a voting technique is used to measure the quality of the different solutions Ti. Let S(i) denote a counter which records the number of solutions Tk consistent with Ti. Two solutions Ti and Tk are said to be consistent if the distance between Ti and Tk is less than a predefined threshold. Since there are Ne elements in CP_IaIb, the total number of consistency tests is Ne(Ne−1)/2. After applying the consistency tests, the offset Ti associated with the maximum value of S(i) is chosen as the desired solution.
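The voting step can be sketched as follows; the consistency tolerance value is an assumed parameter.

```python
import math

def vote_offset(pairs, tol=1.5):
    """Pick the offset T = v - u supported by the most other pairs.
    Two candidate offsets are consistent when they lie within `tol`."""
    offsets = [(v[0] - u[0], v[1] - u[1]) for u, v in pairs]
    S = [0] * len(offsets)
    for i in range(len(offsets)):
        for k in range(i + 1, len(offsets)):  # Ne(Ne-1)/2 consistency tests
            if math.dist(offsets[i], offsets[k]) < tol:
                S[i] += 1
                S[k] += 1
    # The offset with the highest consistency count wins the vote.
    return offsets[max(range(len(offsets)), key=S.__getitem__)]
```

Because an outlier pair agrees with almost no other pair, its counter stays near zero and it cannot win the vote.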
Note that due to noise and image quality, the positions of feature points will not be precisely located, and the accuracy of T is affected. In practical implementations, the y-component Ty of T is generally affected more than Tx. Therefore, the following method is proposed to refine and correct the solution Ty for the y-component. Let {u ↔ v} be the matching pair of CP_IaIb having the highest quality value. Given the point u and the offset T, the refined offset can be found by searching the local neighborhood of the point (u + T) in the other image Ib using the correlation technique. However, if there is little texture information within the local neighborhood of the point u, this approach will not necessarily provide a satisfactory solution. Let
g_x(k) = Σ_{i=1}^{5} |Ia(ux + i, k) − Ia(ux − i, k)|
be the horizontal gradient at the point (ux, k) in Ia, where ux is the x-coordinate of u. Instead of using the starting point u directly, we refine the desired offset T from another starting point ū, found by searching for the point whose horizontal gradient g_x is largest along the column of pixels with the same x-coordinate ux. Based on the starting point ū, the final offset can then be accurately obtained with the correlation technique.
Rendering the Final Panoramic Image
In general, when stitching two adjacent images, discontinuities of intensity exist between their common areas. Therefore, step 17 of FIG. 1 is used to eliminate such intensity discontinuities. The scheme used in step 17 can be divided into two stages. The first stage adjusts the intensities of two adjacent images so that their intensities are similar. The second stage blends their image intensities according to a distance measure so that the final composite image appears smooth. Assume Ia and Ib are two adjacent images with widths wa and wb, respectively. Let ΔI be the average intensity difference over the overlapping area of Ia and Ib, that is,

ΔI = (1/|A|) Σ_{i ∈ A} (Ib(q(i)) − Ia(p(i))),   (7)
where A is the overlapping area of Ia and Ib, |A| is the number of pixels in A, p(i) is a pixel in Ia, and q(i) is its corresponding pixel in Ib.
In particular, referring to FIGS. 6A and 6B, the gap of average intensity between Ia and Ib is about ΔI. According to ΔI, wa, and wb, the first stage is used to adjust the intensities of Ia and Ib as follows:
Ia′(p(x, y)) = Ia(p(x, y)) + x·ΔI/(2wa),   (8)

and

Ib′(q(x, y)) = Ib(q(x, y)) + (x − wb)·ΔI/(2wb).   (9)
After this adjusting step, the intensities of Ia and Ib in FIG. 6(a) will be gradually changed to approach the intensity line EF shown in FIG. 6(b), thereby bringing the intensities between Ia and Ib closer. As will be described below, in order to further smooth the intensity discontinuity between Ia and Ib, a blending technique is then applied.
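A sketch of the first-stage adjustment of Eqs. (8) and (9); treating the ramp denominators as the image widths wa and wb is an assumption made here for illustration.

```python
def adjust_intensities(Ia, Ib, dI):
    """Pre-blend intensity adjustment of Eqs. (8) and (9): Ia is ramped
    up toward +dI/2 at its right edge, and Ib is ramped from -dI/2 at its
    left edge back to 0 at its right edge, closing the average gap dI at
    the seam. Images are row-major lists of intensity rows."""
    wa, wb = len(Ia[0]), len(Ib[0])
    Ia2 = [[Ia[y][x] + x * dI / (2.0 * wa) for x in range(wa)]
           for y in range(len(Ia))]
    Ib2 = [[Ib[y][x] + (x - wb) * dI / (2.0 * wb) for x in range(wb)]
           for y in range(len(Ib))]
    return Ia2, Ib2
```

With a flat 100-valued Ia, a flat 110-valued Ib, and dI = 10, the adjusted images meet near 105 at the seam, so only a narrow blend is needed afterward.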
The second stage uses a ray-casting method to blend different pixel intensities together. Referring to FIG. 7, pi is a pixel in Ia, qi is its corresponding pixel in Ib, and la and lb are two boundary lines in Ia and Ib, respectively. With pi and qi, the intensity of the corresponding pixel ri in the composite image I can be obtained as follows:

I(ri) = (db^t·Ia(pi) + da^t·Ib(qi))/(da^t + db^t),   (10)

where da is the distance between pi and la, db is the distance between qi and lb, and t is an adjustable parameter. Using Equation (10), the intensities in Ia are gradually changed to approach the intensities of pixels in Ib so that the final composite image I looks smooth. In fact, if the blending area is chosen too large, a "ghostlike" effect will occur, particularly when moving objects exist in the common overlapping area between Ia and Ib. However, since the intensities of Ia and Ib have already been adjusted, the blending width can be chosen small, so that the so-called ghostlike effect is significantly reduced. In one preferred embodiment, the blending width is chosen as one-third of the original width of the overlapping area.
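Per pixel, the second-stage blend of Eq. (10) reduces to a distance-weighted average; the default exponent t = 2 used here is an arbitrary illustrative choice.

```python
def blend_pixel(Ia_p, Ib_q, da, db, t=2.0):
    """Distance-weighted blend of Eq. (10): the contribution of Ia is
    scaled by db^t and that of Ib by da^t, with the adjustable exponent
    t sharpening or softening the transition."""
    return (db ** t * Ia_p + da ** t * Ib_q) / (da ** t + db ** t)
```

At equal distances the blend is the midpoint of the two intensities, and as one distance goes to zero the result snaps to the corresponding image, giving a smooth transition across the overlap.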
Architecture for Implementation
Referring to FIG. 8, one preferred architecture for implementing the real-time stitcher apparatus described above includes input devices 60 (e.g., digital cameras or scanners) to acquire a series of panoramic images. The panoramic images are then stored in external storage 62, such as hard disks, for further processing, or are provided directly to one or more microprocessors 64 for stitching. The microprocessors 64 perform the stitching, including warping, feature extraction, edge orientation estimation, correlation calculation, relaxation, and rendering and blending of the final panoramic image. All temporary data are stored in the system RAM to speed up stitching. Finally, the stitching result is sent to the display engine for display. In many applications this architecture can be implemented on a general personal computer.
Performance of the Invention
Referring to FIG. 9, to analyze the performance of the real-time stitcher apparatus, described above, a series of original panoramic images 9a-9e captured by a rotated camera were provided.
Referring to FIG. 10, the stitched panoramic image obtained by our proposed method is shown. In addition, if moving objects exist in two adjacent images, the proposed method can significantly minimize ghost-like effects. FIG. 11 shows two adjacent images with moving objects. The posture of the man in (a) is clearly different from the one in (b). In addition, large intensity differences exist between the images. In order to smooth such large intensity differences, the blending width would normally have to be chosen large. However, with a large blending area, the ghost-like effect appears in the final composite image (see FIG. 12(a)). In contrast, with our proposed blending technique, since the intensities of the input images have been adjusted before blending, the blending width can be chosen smaller. Therefore, the ghostlike effect is significantly lessened, and the quality of FIG. 12(b) is clearly improved over that shown in FIG. 12(a).

Claims (20)

What is claimed is:
1. A method of merging a different pair of images to form a seamless panoramic image comprising:
extracting a set of feature points along the edges of the images, each feature point defining an edge orientation, wherein extracting the set of feature points along the edges of the images includes applying wavelet transforms to the images;
obtaining a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
rendering a seamless panoramic image using the first and second images with the set of registration parameters.
2. The method of claim 1, wherein determining the initial set of feature points includes a relaxation technique, including the steps of:
calculating the edge orientation of each feature point;
comparing an orientation difference between the matching pair;
calculating a value of correlation of the matching pair; and
comparing the value of correlation with a predefined threshold.
3. The method of claim 1, wherein extracting the set of feature points, includes:
calculating an edge correlation for each image;
locating a feature point whose edge response is a maxima within a window;
comparing the maxima with a predefined threshold.
4. The method of claim 1, wherein obtaining the set of registration parameters comprises:
determining an initial set of matching pairs for registration;
calculating a quality value for the initial set of matching pairs;
updating the matching result according to the quality value of the match;
imposing an angle consistency constraint to filter out impossible matches; and
using a voting technique to obtain the registration parameters.
5. The method of claim 1, wherein rendering the seamless panoramic image comprises:
dynamically adjusting the intensity differences between adjacent images; and
properly blending an intensity difference between consecutive images.
6. A system for merging a pair of images to form a panoramic image comprising:
an image device which, in operation, acquires a series of the images;
a storage for storing the series of images;
a memory which stores computer code; and
at least one processor which executes the computer code to:
extract different sets of feature points along the edges of each input image, each feature point defining an edge orientation;
extract a set of feature points along the edges of the images by applying wavelet transforms to the images, each feature point defining an edge orientation;
obtain a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
render a seamless panoramic image using the first and second images with the set of registration parameters.
7. The system of claim 6, wherein the processor extracts the set of feature points, including:
calculating an edge correlation for each image;
locating a feature point whose edge response is a maxima within a window;
comparing the maxima with a predefined threshold.
8. An apparatus for merging a different pair of images to form a seamless panoramic image comprising:
means for extracting a set of feature points along the edges of the images, one or more feature points defining an edge orientation, wherein said means for extracting the set of feature points along the edges of the images includes means for applying wavelet transforms to the images;
means for obtaining a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
means for rendering a seamless panoramic image using the first and second images with the set of registration parameters.
9. An apparatus as claimed in claim 8, wherein said means for obtaining a set of registration parameters by determining an initial set of feature points comprises:
means for calculating the edge orientation of one or more feature points;
means for comparing the orientation difference between the matching pair;
means for calculating the value of correlation of the matching pair; and
means for comparing the value of correlation with a predefined threshold.
10. An apparatus as claimed in claim 8, wherein said obtaining means comprises:
means for calculating an edge correlation for one or more images;
means for locating the feature point whose edge response is a maxima within a window; and
means for comparing the maxima with a predefined threshold.
11. An apparatus as claimed in claim 8, wherein said obtaining means comprises:
means for determining an initial set of matching pairs for registration;
means for calculating a quality value for the initial set of matching pairs;
means for updating the matching result according to the quality value of the match;
means for imposing an angle consistency constraint to filter out impossible matches; and
means for using a voting technique to obtain the registration parameters.
12. An apparatus as claimed in claim 8, wherein said rendering means comprises:
means for dynamically adjusting the intensity differences between adjacent images; and
means for properly blending the intensity difference between consecutive images.
13. An article of manufacture comprising a storage medium having instructions stored thereon that, if executed, result in merging of a different pair of images to form a seamless panoramic image by:
extracting a set of feature points along the edges of the images, each feature point defining an edge orientation, wherein extracting the set of feature points along the edges of the images includes applying wavelet transforms to the images;
obtaining a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
rendering a seamless panoramic image using the first and second images with the set of registration parameters.
14. An article of manufacture as claimed in claim 13, wherein the instructions, if executed, further result in merging of a different pair of images to form a seamless panoramic image by:
calculating the edge orientation of one or more feature points;
comparing the orientation difference between the matching pair;
calculating the value of correlation of the matching pair; and
comparing the value of correlation with a predefined threshold.
15. An article of manufacture as claimed in claim 13, wherein the instructions, if executed, further result in merging of a different pair of images to form a seamless panoramic image by:
calculating an edge correlation for each image;
locating the feature point whose edge response is a maxima within a window; and
comparing the maxima with a predefined threshold.
16. An article of manufacture as claimed in claim 13, wherein the instructions, if executed, further result in merging of a different pair of images to form a seamless panoramic image by:
determining an initial set of matching pairs for registration;
calculating a quality value for the initial set of matching pairs;
updating the matching result according to the quality value of the match;
imposing an angle consistency constraint to filter out impossible matches; and
using a voting technique to obtain the registration parameters.
17. An article of manufacture as claimed in claim 13, wherein the instructions, if executed, further result in merging of a different pair of images to form a seamless panoramic image by:
dynamically adjusting the intensity differences between adjacent images; and
properly blending the intensity difference between consecutive images.
18. A system for merging a pair of images to form a panoramic image comprising:
a memory capable of storing one or more images acquired from an image device; and
at least one processor coupled to said memory, said processor being capable of:
extracting a set of feature points along the edges of the images by applying wavelet transforms to the images, one or more feature points defining an edge orientation;
obtaining a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
rendering a seamless panoramic image using the first and second images with the set of registration parameters.
19. The system as claimed in claim 18, wherein said processor is further capable of:
calculating an edge correlation for each image;
locating the feature point whose edge response is a maxima within a window; and
comparing the maxima with a predefined threshold.
20. The system as claimed in claim 18, wherein said processor is further capable of:
determining an initial set of matching pairs for registration;
calculating a quality value for the initial set of matching pairs;
updating the matching result according to the quality value of the match;
imposing an angle consistency constraint to filter out impossible matches; and
using a voting technique to obtain the registration parameters.
US11/541,517 2000-02-04 2006-09-28 Apparatus and method for providing panoramic images Expired - Lifetime USRE43206E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/541,517 USRE43206E1 (en) 2000-02-04 2006-09-28 Apparatus and method for providing panoramic images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/498,291 US6798923B1 (en) 2000-02-04 2000-02-04 Apparatus and method for providing panoramic images
US11/541,517 USRE43206E1 (en) 2000-02-04 2006-09-28 Apparatus and method for providing panoramic images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/498,291 Reissue US6798923B1 (en) 2000-02-04 2000-02-04 Apparatus and method for providing panoramic images

Publications (1)

Publication Number Publication Date
USRE43206E1 true USRE43206E1 (en) 2012-02-21

Family

ID=23980416

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/498,291 Ceased US6798923B1 (en) 2000-02-04 2000-02-04 Apparatus and method for providing panoramic images
US11/541,517 Expired - Lifetime USRE43206E1 (en) 2000-02-04 2006-09-28 Apparatus and method for providing panoramic images

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/498,291 Ceased US6798923B1 (en) 2000-02-04 2000-02-04 Apparatus and method for providing panoramic images

Country Status (2)

Country Link
US (2) US6798923B1 (en)
TW (1) TW497366B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120120099A1 (en) * 2010-11-11 2012-05-17 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium storing a program thereof
US20150055839A1 (en) * 2013-08-21 2015-02-26 Seiko Epson Corporation Intelligent Weighted Blending for Ultrasound Image Stitching
US9047692B1 (en) * 2011-12-20 2015-06-02 Google Inc. Scene scan
US20150161480A1 (en) * 2009-01-14 2015-06-11 A9.Com, Inc. Method and system for matching an image using normalized feature vectors

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7373017B2 (en) * 2005-10-04 2008-05-13 Sony Corporation System and method for capturing adjacent images by utilizing a panorama mode
EP1136948A1 (en) * 2000-03-21 2001-09-26 European Community Method of multitime filtering coherent-sensor detected images
US6895126B2 (en) 2000-10-06 2005-05-17 Enrico Di Bernardo System and method for creating, storing, and utilizing composite images of a geographic location
US20030112339A1 (en) * 2001-12-17 2003-06-19 Eastman Kodak Company Method and system for compositing images with compensation for light falloff
US7149984B1 (en) * 2001-12-28 2006-12-12 Sprint Communications Company L.P. Image configuration method
US7224392B2 (en) * 2002-01-17 2007-05-29 Eastman Kodak Company Electronic imaging system having a sensor for correcting perspective projection distortion
US7623733B2 (en) * 2002-08-09 2009-11-24 Sharp Kabushiki Kaisha Image combination device, image combination method, image combination program, and recording medium for combining images having at least partially same background
TWI276044B (en) * 2003-12-26 2007-03-11 Ind Tech Res Inst Real-time image warping method for curve screen
US20060256397A1 (en) * 2005-05-12 2006-11-16 Lexmark International, Inc. Method and system for combining images
US7474802B2 (en) * 2005-07-28 2009-01-06 Seiko Epson Corporation Method and apparatus for automatically estimating the layout of a sequentially ordered series of frames to be used to form a panorama
US7840032B2 (en) * 2005-10-04 2010-11-23 Microsoft Corporation Street-side maps and paths
US20080043020A1 (en) * 2006-08-18 2008-02-21 Microsoft Corporation User interface for viewing street side imagery
US8072482B2 (en) * 2006-11-09 2011-12-06 Innovative Signal Analysis Imaging system having a rotatable image-directing device
US8224122B2 (en) * 2006-12-15 2012-07-17 Microsoft Corporation Dynamic viewing of wide angle images
US8717412B2 (en) * 2007-07-18 2014-05-06 Samsung Electronics Co., Ltd. Panoramic image production
US8068693B2 (en) * 2007-07-18 2011-11-29 Samsung Electronics Co., Ltd. Method for constructing a composite image
TWI383666B (en) * 2007-08-21 2013-01-21 Sony Taiwan Ltd An advanced dynamic stitching method for multi-lens camera system
EP2044987B1 (en) * 2007-10-03 2013-05-22 Sony Computer Entertainment Europe Ltd. Apparatus and method of on-line reporting
JP4926116B2 (en) * 2008-04-16 2012-05-09 株式会社日立ハイテクノロジーズ Image inspection device
KR101473215B1 (en) * 2008-04-18 2014-12-17 삼성전자주식회사 Apparatus for generating panorama image and method thereof
US8947502B2 (en) * 2011-04-06 2015-02-03 Qualcomm Technologies, Inc. In camera implementation of selecting and stitching frames for panoramic imagery
JP4982544B2 (en) * 2009-09-30 2012-07-25 株式会社日立ハイテクノロジーズ Composite image forming method and image forming apparatus
US8385689B2 (en) * 2009-10-21 2013-02-26 MindTree Limited Image alignment using translation invariant feature matching
US9430923B2 (en) 2009-11-30 2016-08-30 Innovative Signal Analysis, Inc. Moving object detection, tracking, and displaying systems
US9766089B2 (en) * 2009-12-14 2017-09-19 Nokia Technologies Oy Method and apparatus for correlating and navigating between a live image and a prerecorded panoramic image
JP5696419B2 (en) * 2010-09-30 2015-04-08 カシオ計算機株式会社 Image processing apparatus and method, and program
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
US8861890B2 (en) 2010-11-24 2014-10-14 Douglas Alan Lefler System and method for assembling and displaying individual images as a continuous image
JP5923824B2 (en) * 2012-02-21 2016-05-25 株式会社ミツトヨ Image processing device
US20140300686A1 (en) * 2013-03-15 2014-10-09 Tourwrist, Inc. Systems and methods for tracking camera orientation and mapping frames onto a panoramic canvas
US10139819B2 (en) 2014-08-22 2018-11-27 Innovative Signal Analysis, Inc. Video enabled inspection using unmanned aerial vehicles
TWI554976B (en) 2014-11-17 2016-10-21 財團法人工業技術研究院 Surveillance systems and image processing methods thereof
TWI552600B (en) 2014-12-25 2016-10-01 晶睿通訊股份有限公司 Image calibrating method for stitching images and related camera and image processing system with image calibrating function
CN104932857B (en) * 2015-06-24 2018-05-22 广东威创视讯科技股份有限公司 The method and system of the cross-platform virtual wall configuration control of joined screen system
RU2626551C1 (en) * 2016-06-07 2017-07-28 Общество с ограниченной ответственностью "СИАМС" Method for generating panoramic images from video stream of frames in real time mode

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0249644A1 (en) 1986-06-14 1987-12-23 ANT Nachrichtentechnik GmbH Method for transmitting television signals with an improved picture quality
EP0257129A1 (en) 1986-08-29 1988-03-02 ANT Nachrichtentechnik GmbH Process for the reproduction of television signals with improved image quality
EP0415648A2 (en) 1989-08-31 1991-03-06 Canon Kabushiki Kaisha Image processing apparatus
TW300369B (en) 1995-07-18 1997-03-11 Sony Co Ltd
US5613013A (en) * 1994-05-13 1997-03-18 Reticula Corporation Glass patterns in image alignment and analysis
US5625408A (en) * 1993-06-24 1997-04-29 Canon Kabushiki Kaisha Three-dimensional image recording/reconstructing method and apparatus therefor
US5850352A (en) 1995-03-31 1998-12-15 The Regents Of The University Of California Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images
TW350183B (en) 1997-07-07 1999-01-11 Umax Data Systems Inc Method of mobile image scanning
US5953076A (en) 1995-06-16 1999-09-14 Princeton Video Image, Inc. System and method of real time insertions into video using adaptive occlusion with a synthetic reference image
US5963664A (en) * 1995-06-22 1999-10-05 Sarnoff Corporation Method and system for image combination using a parallax-based technique
US5987164A (en) * 1997-08-01 1999-11-16 Microsoft Corporation Block adjustment method and apparatus for construction of image mosaics
TW376670B (en) 1998-09-11 1999-12-11 Bing-Fei Wu Textural dividing method for color document
US6009190A (en) * 1997-08-01 1999-12-28 Microsoft Corporation Texture map construction method and apparatus for displaying panoramic image mosaics
US6011581A (en) 1992-11-16 2000-01-04 Reveo, Inc. Intelligent method and system for producing and displaying stereoscopically-multiplexed images of three-dimensional objects for use in realistic stereoscopic viewing thereof in interactive virtual reality display environments
US6044168A (en) 1996-11-25 2000-03-28 Texas Instruments Incorporated Model based faced coding and decoding using feature detection and eigenface coding
US6075905A (en) * 1996-07-17 2000-06-13 Sarnoff Corporation Method and apparatus for mosaic image construction
US6078701A (en) * 1997-08-01 2000-06-20 Sarnoff Corporation Method and apparatus for performing local to global multiframe alignment to construct mosaic images
US6349153B1 (en) * 1997-09-03 2002-02-19 Mgi Software Corporation Method and system for composition images
US6393162B1 (en) * 1998-01-09 2002-05-21 Olympus Optical Co., Ltd. Image synthesizing apparatus
US6393163B1 (en) * 1994-11-14 2002-05-21 Sarnoff Corporation Mosaic based image processing system
US6411339B1 (en) * 1996-10-04 2002-06-25 Nippon Telegraph And Telephone Corporation Method of spatio-temporally integrating/managing a plurality of videos and system for embodying the same, and recording medium for recording a program for the method
US6434276B2 (en) * 1997-09-30 2002-08-13 Sharp Kabushiki Kaisha Image synthesis and communication apparatus
US6466262B1 (en) * 1997-06-11 2002-10-15 Hitachi, Ltd. Digital wide camera
US6473536B1 (en) * 1998-09-18 2002-10-29 Sanyo Electric Co., Ltd. Image synthesis method, image synthesizer, and recording medium on which image synthesis program is recorded
US6486908B1 (en) * 1998-05-27 2002-11-26 Industrial Technology Research Institute Image-based method and system for building spherical panoramas
US6516099B1 (en) * 1997-08-05 2003-02-04 Canon Kabushiki Kaisha Image processing apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jacques Fayolle et al., "Application of Multiscale Characterization of Edges to Motion Determination," IEEE, 1998, pp. 1174-1179. *
Mingui Sun et al., "Measurement of Signal Similarity Using the Maxima of the Wavelet Transform," IEEE, 1993, pp. 583-586. *
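The Sun et al. reference above measures the similarity of two signals from the maxima of their wavelet transforms. As a toy illustration only (not the paper's exact method, and not the patented technique), a single-level Haar detail transform can be used the same way: extract the positions of the local maxima of the detail coefficients and score similarity by how well those positions overlap.

```python
import numpy as np

def haar_detail(x):
    """Single-level Haar wavelet detail coefficients of a 1-D signal
    (pairwise differences, scaled by 1/sqrt(2))."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def maxima_positions(d, threshold=1e-8):
    """Indices where |detail| has a strict local maximum above threshold."""
    m = np.abs(d)
    return {i for i in range(1, len(m) - 1)
            if m[i] > threshold and m[i] > m[i - 1] and m[i] > m[i + 1]}

def maxima_similarity(x, y):
    """Jaccard overlap of the wavelet-maxima positions of two
    equal-length signals: 1.0 for identical maxima sets, 0.0 for disjoint."""
    a = maxima_positions(haar_detail(x))
    b = maxima_positions(haar_detail(y))
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

Because the maxima are taken on the magnitude of the detail coefficients, the score is invariant to sign flips of the input, which is one reason maxima-based comparison is more robust than raw sample-by-sample correlation.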

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150161480A1 (en) * 2009-01-14 2015-06-11 A9.Com, Inc. Method and system for matching an image using normalized feature vectors
US9530076B2 (en) * 2009-01-14 2016-12-27 A9.Com, Inc. Method and system for matching an image using normalized feature vectors
US20120120099A1 (en) * 2010-11-11 2012-05-17 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium storing a program thereof
US9047692B1 (en) * 2011-12-20 2015-06-02 Google Inc. Scene scan
US20150154761A1 (en) * 2011-12-20 2015-06-04 Google Inc. Scene scan
US20150055839A1 (en) * 2013-08-21 2015-02-26 Seiko Epson Corporation Intelligent Weighted Blending for Ultrasound Image Stitching
US9076238B2 (en) * 2013-08-21 2015-07-07 Seiko Epson Corporation Intelligent weighted blending for ultrasound image stitching

Also Published As

Publication number Publication date
US6798923B1 (en) 2004-09-28
TW497366B (en) 2002-08-01

Similar Documents

Publication Publication Date Title
USRE43206E1 (en) Apparatus and method for providing panoramic images
US9224189B2 (en) Method and apparatus for combining panoramic image
US6393142B1 (en) Method and apparatus for adaptive stripe based patch matching for depth estimation
US7379583B2 (en) Color segmentation-based stereo 3D reconstruction system and process employing overlapping images of a scene captured from viewpoints forming either a line or a grid
US9280821B1 (en) 3-D reconstruction and registration
EP1986153B1 (en) Method and system for determining objects poses from range images
US9303525B2 (en) Method and arrangement for multi-camera calibration
Zhang et al. Detecting and extracting the photo composites using planar homography and graph cut
US20070008312A1 (en) Method for determining camera position from two-dimensional images that form a panorama
Mistry et al. Image stitching using Harris feature detection
WO1998050885A2 (en) Method and apparatus for performing global image alignment using any local match measure
WO2019071976A1 (en) Panoramic image saliency detection method based on regional growth and eye movement model
Su et al. Non-rigid registration of images with geometric and photometric deformation by using local affine Fourier-moment matching
Ghannam et al. Cross correlation versus mutual information for image mosaicing
Yaman et al. An iterative adaptive multi-modal stereo-vision method using mutual information
Fedorov et al. Affine invariant self-similarity for exemplar-based inpainting
Wang et al. Robust color correction in stereo vision
Li et al. A fast and robust image stitching algorithm
Attard et al. Image mosaicing of tunnel wall images using high level features
US11232323B2 (en) Method of merging images and data processing device
Li et al. Automatic registration of color images to 3D geometry
James et al. Image Forgery detection on cloud
Ancuti et al. Video enhancement using reference photographs
Liao et al. Seam-guided local alignment and stitching for large parallax images
Abhinav et al. Weighted Average Blending Technique for Image Stitching
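The similar documents above cluster around the standard feature-based mosaicing pipeline: estimate the geometric offset between adjacent frames, then composite them with blending in the overlap. As an illustrative sketch only (none of this code comes from the patent or the listed documents), the simplest version of that pipeline, pure horizontal translation estimated by normalized cross-correlation of edge strips, looks like this:

```python
import numpy as np

def find_overlap_offset(left, right, max_shift):
    """Estimate how many columns of `right` overlap the right edge of `left`
    by maximizing normalized cross-correlation over candidate overlap widths."""
    best_shift, best_score = 0, -np.inf
    h, w = left.shape
    for shift in range(1, max_shift + 1):
        a = left[:, w - shift:].ravel().astype(float)   # right strip of left image
        b = right[:, :shift].ravel().astype(float)      # left strip of right image
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue
        score = float(a @ b) / denom                    # normalized cross-correlation
        if score > best_score:
            best_score, best_shift = score, shift
    return best_shift

def stitch(left, right, overlap):
    """Composite two grayscale images side by side, averaging the overlap strip."""
    h, w = left.shape
    out = np.zeros((h, w + right.shape[1] - overlap))
    out[:, :w] = left
    out[:, w:] = right[:, overlap:]
    out[:, w - overlap:w] = (left[:, w - overlap:] + right[:, :overlap]) / 2.0
    return out
```

Real panorama systems, including the methods in the documents listed above, replace the translation-only search with feature-point matching and a full projective warp, and replace the plain average with weighted (feathered) blending, but the estimate-then-composite structure is the same.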

Legal Events

Date Code Title Description
AS Assignment

Owner name: TRANSPACIFIC IP LTD., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE;REEL/FRAME:025542/0601

Effective date: 20061124

CC Certificate of correction
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 12