KR101692227B1 - A panorama image generation method using FAST algorithm - Google Patents

A panorama image generation method using FAST algorithm

Info

Publication number
KR101692227B1
KR101692227B1 KR1020150116079A KR20150116079A
Authority
KR
South Korea
Prior art keywords
images
image
panoramic image
minutiae
fast
Prior art date
Application number
KR1020150116079A
Other languages
Korean (ko)
Inventor
김종호
유지상
Original Assignee
광운대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 광운대학교 산학협력단 filed Critical 광운대학교 산학협력단
Priority to KR1020150116079A priority Critical patent/KR101692227B1/en
Application granted granted Critical
Publication of KR101692227B1 publication Critical patent/KR101692227B1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06T7/0034
    • H04N5/23238
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Algebra (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a panoramic image generation method using FAST, which generates a panoramic image from a plurality of images photographed in a plurality of directions, the method comprising the steps of: (a) converting the photographed images into projected images in plane coordinates in accordance with a cylindrical coordinate system; (b) extracting feature points from the projected images through the features from accelerated segment test (FAST) method; (d) matching the feature points using the RANSAC method and eliminating erroneous points; and (e) calculating a homography between the projected images, and performing coordinate transformation and registration of the images using the homography to generate a panoramic image.
According to the panoramic image generation method described above, a panoramic image can be generated regardless of the input order and direction of the images being matched. In particular, distortion can be corrected and a natural panoramic image can be generated.


Description

A panorama image generation method using the FAST algorithm

The present invention relates to a feature point-based panoramic image generation method using features from an accelerated segment test (FAST) for generating a natural panoramic image using a plurality of photographed images.

Particularly, the present invention extracts feature points after performing a cylinder projection, minimizes the error rate in matching using RANSAC (random sample consensus), and compensates for the heterogeneity around the matching boundary with a blending technique when synthesizing a plurality of images obtained from different directions.

Generally, a panoramic image refers to a high-resolution image obtained by stitching multiple images into one image using image processing. Panoramic images are used in various fields. If an actual street is captured as a panoramic image, as in a road-view service, a user can indirectly experience the street without visiting the place in person. Panoramic imaging is also widely used in medical imaging, where the entire affected area of a patient can easily be confirmed in a single image. Efficient wide-area surveillance is also expected by applying panoramic image generation technology to cameras such as CCTV.

Among the conventional panorama generation methods, there is a method in which feature points are extracted by the scale invariant feature transform (SIFT) method and a panoramic image is generated by homography calculation using RANSAC (random sample consensus). This method is disadvantageous in that the execution speed is slow due to the high computational complexity of the SIFT method on high-resolution images [Non-Patent Document 1]. In addition, there is a method of matching between pixels rather than feature points, but the search is performed over the entire image, so the speed is slow and the errors in the matching process are large [Non-Patent Document 2].

Although the speeded-up robust features (SURF) method, which improves on the SIFT method, runs at higher speed, an even faster feature point extraction method is required to generate a panoramic image from many images [Non-Patent Document 3]. Conventional methods also have the disadvantage that the order of the input images must be maintained, and errors may occur in the matching process depending on the direction of the images. In addition, when a panoramic image is formed from a plurality of images, the images become stretched.

In order to solve these problems, a new method for generating a natural panoramic image based on a feature point is needed. What is important in creating a panoramic image is to accurately find the feature points between the images to be matched and to calculate the homography. Since the angles and the focal distances of the images are different from each other, if a plurality of images are matched as they are, distortion may occur in which the images are stretched.

[Non-Patent Document 1] M. Brown and D. G. Lowe, "Automatic panoramic image stitching using invariant features," International Journal of Computer Vision, Vol. 74, No. 1, Dec. 2006.
[Non-Patent Document 2] R. Szeliski, "Image alignment and stitching: a tutorial," Computer Graphics and Vision, Vol. 2, No. 1, pp. 15-16, Jan. 2006.
[Non-Patent Document 3] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 2-8, Jun. 2008.
[Non-Patent Document 4] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," European Conference on Computer Vision, Vol. 1, pp. 430-443, 2006.
[Non-Patent Document 5] L. Moisan, P. Moulon, and P. Monasse, "Automatic homographic registration of a pair of images, with a contrario elimination of outliers," Image Processing On Line (IPOL), pp. 2-3, May 2012.
[Non-Patent Document 6] K-W. Kwon, A-Y. Lee, and U. Oh, "Panoramic image composition through scaling and rotation invariant features," Information Processing Society Journal, Vol. 17, No. 5, Jun. 2010.
[Non-Patent Document 7] P. J. Burt and E. H. Adelson, "A multiresolution spline with application to image mosaics," ACM Transactions on Graphics, Vol. 2, No. 4, pp. 2-5, Oct. 1983.
[Non-Patent Document 8] R. Szeliski and H. Y. Shum, "Creating Full View Panoramic Image Mosaics and Environment Maps," Computer Graphics, pp. 251-258, Aug. 1997.
[Non-Patent Document 9] David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, Vol. 60, No. 2, pp. 5-16, Jan. 2004.

SUMMARY OF THE INVENTION

An object of the present invention is to provide a feature point-based panoramic image generation method using features from accelerated segment test (FAST) for generating a natural panoramic image using a plurality of images.

In particular, it is an object of the present invention to extract feature points after performing a cylinder projection, to minimize the error rate in matching using RANSAC (random sample consensus), and to improve the heterogeneity around the matching boundary with a blending method when synthesizing a plurality of images obtained from different directions.

According to an aspect of the present invention, there is provided a panoramic image generation method using FAST, which generates a panoramic image from a plurality of images photographed in a plurality of directions, the method comprising the steps of: (a) converting the photographed images in accordance with a cylinder coordinate system into projected images in plane coordinates; (b) extracting feature points from the projected images through the features from accelerated segment test (FAST) method; (d) matching the feature points using the RANSAC method and eliminating erroneous points; and (e) calculating a homography between the projected images, and performing coordinate transformation and registration of the images using the homography to generate a panoramic image.

According to another aspect of the present invention, in the panoramic image generation method using FAST, in the step (a), forward warping is performed from the plane coordinate system of the photographed images to a cylinder coordinate system, backward warping is then performed on the forward-warped cylinder coordinates, and the projected image is obtained by applying linear interpolation.

Further, the present invention is characterized in that, in the panoramic image generation method using FAST, the backward warping is performed by the following [Equation 1].

[Equation 1]

$$x' = f\tan\theta + x_c,\qquad y' = \frac{y - y_c}{s}\cdot\frac{f}{\cos\theta} + y_c,\qquad \theta = \frac{x - x_c}{s}$$

(Here, (x, y) are the cylinder coordinates, (x', y') are the plane coordinates, f is the focal length of the image, x_c and y_c are the center coordinates of the cylinder coordinate system, and s is a variable that determines the scale of the projected image.)

According to another aspect of the present invention, in the step (b), the brightness value of each pixel of the projected image is compared with the brightness values of its peripheral pixels; if the number of peripheral pixels whose brightness difference from the pixel exceeds a predetermined threshold is greater than or equal to a predetermined number, the pixel is detected as a feature point candidate. When detected feature point candidates are adjacent to each other, only the candidate with the greatest brightness difference from its neighboring pixels is retained as a feature point, and the remaining candidates are removed.

According to another aspect of the present invention, in the step (b), the peripheral pixels are the four pixels positioned horizontally and vertically among the sixteen pixels arranged in a circle around the pixel in question.

According to the present invention, in the panoramic image generation method using FAST, in the step (b), a feature point score V is assigned to each feature point candidate by [Equation 2], and when adjacent feature point candidates exist, the candidates with lower scores are removed.

[Equation 2]

$$V = \max\Big(\sum_{x\in S_{bright}} |I_{p\to x} - I_p| - t,\ \sum_{x\in S_{dark}} |I_p - I_{p\to x}| - t\Big)$$

(Here, among the 16 adjacent pixels, a pixel is assigned to S_dark if its brightness value I_{p→x} is less than the brightness value I_p of the reference pixel p minus the threshold t, and to S_bright if its brightness value is greater than I_p plus the threshold t.)

According to the present invention, in the panoramic image generation method using FAST, in the step (b), the main direction of each extracted feature point is calculated using a Haar wavelet filter, a rectangular window region oriented along the main direction is formed and divided into sub-regions, and feature vectors are calculated in each divided sub-region and expressed as a feature point descriptor.

In the method of generating a panoramic image using FAST, the method may further include: (c) before the step (d), calculating the order of the photographed images using the positions of the feature points matched between two photographed images, wherein the two photographed images are determined to overlap each other in the portion where a large number of matching feature points are located.

According to another aspect of the present invention, in the step (d), feature point data of the two images are randomly selected to predict a virtual model of the two images, whether the extracted feature points fit the predicted model is repeatedly judged to obtain matching feature points, and a predetermined number of the best-matched feature points are selected while the remaining feature points are excluded.

According to another aspect of the present invention, the step (e) comprises obtaining a homography matrix using the extracted feature points, and transforming the coordinates of the projected images into the coordinates of the panoramic image using the obtained homography matrix.

Further, the present invention is a method for generating a panoramic image using FAST, the method further comprising: (f) correcting the generated panoramic image using a linear weight function.

As described above, according to the panoramic image generation method using FAST according to the present invention, it is possible to obtain a panoramic image regardless of the order and direction of the images when matching the images.

In addition, according to the panoramic image generation method using FAST according to the present invention, it is possible to obtain a natural panoramic image by correcting the distortion as a result of experimenting with a plurality of images.

BRIEF DESCRIPTION OF THE DRAWINGS

1 is a diagram showing the configuration of the overall system for carrying out the present invention.
2 is a flowchart illustrating a method of generating a panoramic image using FAST according to an embodiment of the present invention.
3 is a result image of the backward cylinder warping according to an embodiment of the present invention: (a) the original image, and result images with focal length (b) f = 700 and (c) f = 400.
4 is a diagram illustrating a pixel p at the center of a feature point candidate according to the present invention and the 16 neighboring pixels located on the surrounding circle.
5 is an illustration of Haar wavelet filters in the x and y directions according to the present invention.
6 is an illustration of a 64-dimensional descriptor vector according to the present invention.
7 is an exemplary view showing feature points extracted by the SURF method according to the present invention.
8 is an exemplary view of a captured image input in any order according to the present invention.
9 is a graph of a linear weight function for color correction according to the present invention.
10A is an exemplary view of an input image for generating a panoramic image according to an experiment of the present invention.
10B is an exemplary view of an input image having an arbitrary direction and order according to an experiment of the present invention.
11 is an exemplary view of a panoramic image which is a resultant image according to an experiment of the present invention.
12 is an example image showing a distortion phenomenon of a plurality of panoramic images according to the experiment of the present invention.
13 is an exemplary image of the image registration result after the cylinder projection according to the experiment of the present invention.
14 is an exemplary view of a panoramic image obtained by matching a plurality of images according to an experiment of the present invention.
15 is a table comparing processing speeds of feature point extraction methods when two images are matched, according to the experiment of the present invention.
16 is a table comparing processing speeds of feature point extraction methods when multiple images are matched, according to the experiment of the present invention.
17 is a table showing comparison of processing speeds according to an experiment according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, the present invention will be described in detail with reference to the drawings.

In the description of the present invention, the same parts are denoted by the same reference numerals, and repetitive description thereof will be omitted.

First, an example of the configuration of the entire system for carrying out the present invention will be described with reference to Fig. 1.

As shown in Fig. 1, the method of generating a panoramic image using FAST according to the present invention is implemented as a program system on a computer terminal 20 that receives a plurality of photographed images 10 and generates a panoramic image in which the photographed images are connected to each other. That is, the panoramic image generation method using FAST can be implemented as a program, installed in the computer terminal 20, and executed. The installed program can operate as a single program system 30.

Meanwhile, as another embodiment, the method may be implemented in a single electronic circuit such as an ASIC (application-specific integrated circuit) instead of a program running on a general-purpose computer, or as a dedicated terminal 30 that exclusively processes only the work of generating a panoramic image from a plurality of photographed images. This is called the panoramic image generation system 30. Other possible forms may also be practiced.

Next, a panoramic image generation method using FAST according to an embodiment of the present invention will be described with reference to FIG. 2 to FIG. 9. In the present invention, a method of extracting feature points from various images and calculating a homography to create a natural panoramic image is described. A flowchart of the method according to the invention is shown in Fig. 2.

As shown in FIG. 2, the panoramic image generation method using FAST according to the present invention includes: (a) a step of projecting the photographed images onto a cylinder-shaped coordinate system (S10); (b) a feature point extraction step (S20); (c) an image order calculation step (S30); (d) a feature point matching step using RANSAC (S40); (e) a panorama image generation step using homography (S50); and (f) a panorama image correction step (S60).

That is, a series of images (or captured images) photographed in various directions is received (or acquired), and the input captured images are projected from the existing rectangular coordinate system onto a cylinder-shaped coordinate system in order to correct distortion (S10). Feature points are extracted from the projected images through the FAST (features from accelerated segment test) method (S20). The vector descriptor of the SURF method is calculated to match the extracted feature points. The order of the input images is automatically calculated using the positions of the feature points extracted from the respective images (S30).

Next, when the feature points are matched using the RANSAC method, the error points are removed (S40). The coordinates of the four points that are most accurately matched in each image are found and the homography between the images is calculated [Non-Patent Documents 5 and 6]. After the coordinate transformation of one image using the calculated homography and registration, a panorama image is generated (S50). A natural panoramic image can be generated using a linear weight function to smoothly correct distortions such as a boundary line where the illumination or viewpoint of the image changes in the process of matching the images (S60).
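For orientation, the flow of FIG. 2 for a single image pair can be sketched in Python with OpenCV. This is a minimal illustration, not the patent's implementation: BRISK stands in for the SURF descriptor (SURF is patented and absent from default OpenCV builds), and the FAST threshold and canvas width are arbitrary choices.

    import cv2
    import numpy as np

    def stitch_pair(left, right):
        # S20: FAST keypoints; BRISK descriptors stand in for SURF here
        fast = cv2.FastFeatureDetector_create(threshold=25)
        desc = cv2.BRISK_create()
        kp1 = fast.detect(left, None)
        kp1, d1 = desc.compute(left, kp1)
        kp2 = fast.detect(right, None)
        kp2, d2 = desc.compute(right, kp2)
        # match descriptors, then S40: RANSAC removes erroneous matches
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        # S50: warp the right image into the left image's coordinate frame
        h, w = left.shape[:2]
        pano = cv2.warpPerspective(right, H, (2 * w, h))
        pano[:h, :w] = left
        return pano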

Each step will be described in more detail below.

First, the photographed images are projected onto a cylinder-shaped coordinate system (S10).

The photographed image is a series of images photographed in various directions. That is, a plurality of images are acquired at appropriate intervals using a digital camera, a smart phone, a webcam, and the like. If there are too few overlapping regions between images, there is a high probability of errors in matching the feature points.

One photographed image refers to the image captured when a single camera faces one direction. The present invention ultimately generates a panorama image from a series of such photographed images. That is, a series of images photographed in various directions is appropriately connected to generate a panorama image. Such a panoramic image provides a wider field of view (FOV) of the scene around the photographer than a single image taken in a single direction, so that an observer can immerse themselves in a more realistic image.

The above-described captured images are then projected onto a cylinder-shaped coordinate system.

There are differences in camera position and focal length between the photographed images, so distortion can occur when the photographed images are matched. In order to minimize the distortion, each image in the existing plane coordinate system is projected onto a cylinder-shaped coordinate system [Non-Patent Documents 2, 8].

Cylinder projection means that the photographed image is converted into a two-dimensional image in accordance with a three-dimensional cylindrical coordinate system. If forward warping is performed from the existing plane coordinate system to the cylinder coordinate system, holes and noise are generated in the image because the computed real-valued coordinates must be mapped onto the integer pixel grid.

To solve this problem, backward warping is performed and linear interpolation is applied to obtain a natural cylinder image. That is, an image is acquired in plane coordinates, and linear interpolation is used in the process of performing backward warping to convert from plane coordinates to cylinder coordinates. Finally, the transformed image is generated in cylinder coordinates instead of plane coordinates.

By applying the inverse cylinder warping using Equation (1), image distortion can be minimized.

[Equation 1]

$$x' = f\tan\theta + x_c,\qquad y' = \frac{y - y_c}{s}\cdot\frac{f}{\cos\theta} + y_c,\qquad \theta = \frac{x - x_c}{s}$$

In Equation 1, (x, y) are the cylinder coordinates and (x', y') are the plane coordinates. f represents the focal length of the image, and x_c and y_c represent the center coordinates of the cylinder coordinate system. s is a variable that determines the scale of the image after projection, and θ = (x - x_c)/s is the warp angle of the cylinder coordinate.

Evaluating Equation 1 for each cylinder coordinate (x, y) yields the corresponding plane coordinates x' and y', and linear interpolation is applied in this process.

3 is a result image obtained by performing the inverse cylinder warping while changing the focal length f.
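A concrete sketch of this backward warping step follows, using OpenCV's remap for the linear (bilinear) interpolation. The closed form used for Equation 1 is the standard backward cylindrical mapping assumed from the variable definitions above, and the default s = f is an assumption.

    import cv2
    import numpy as np

    def cylinder_project(img, f, s=None):
        # backward warping: for every cylinder pixel (x, y), look up the
        # source plane pixel (x', y') of Equation 1; s defaults to f
        h, w = img.shape[:2]
        s = f if s is None else s
        xc, yc = w / 2.0, h / 2.0
        y, x = np.indices((h, w), dtype=np.float32)
        theta = (x - xc) / s                          # warp angle
        xp = f * np.tan(theta) + xc                   # plane x'
        yp = (y - yc) / s * f / np.cos(theta) + yc    # plane y'
        # remap samples (x', y') with bilinear interpolation, avoiding the
        # holes that forward warping would create
        return cv2.remap(img, xp, yp, interpolation=cv2.INTER_LINEAR)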

Next, feature points are extracted from the projected image through a feature from accelerated segment test (FAST) (S20).

Feature points are extracted from the projected images in the cylinder coordinate system to find matching points between the images. In the present invention, a feature from accelerated segment test (FAST) is used to extract feature points of an image [Non-Patent Document 4]. The FAST method can detect feature points at a faster rate than conventional methods such as Harris, SIFT, and SURF.

The FAST method detects feature points largely through three processes. First, it selects feature point candidates at high speed. FIG. 4 shows the 16 pixels located on a circle centered on a pixel p, a feature point candidate of the image. Candidates are selected by comparing the brightness values I_{p→x} of the sixteen surrounding pixels against the brightness value I_p of the pixel p plus or minus a threshold t. If 12 or more surrounding pixels are either brighter than I_p + t or darker than I_p - t, the point p is set as a feature point candidate.

In this process, to quickly detect feature point candidates, candidates are first screened by comparing only the brightness values of the four pixels (1, 5, 9, 13) instead of all 16. If the point p is a feature point candidate, three or more of these four pixels must be brighter than I_p + t or darker than I_p - t; otherwise, p is judged not to be a feature point.

That is, for each pixel of the projected image, if the number of peripheral pixels whose brightness difference from the pixel exceeds a predetermined threshold is greater than or equal to a predetermined number, the pixel is detected as a feature point candidate.

Among the sixteen pixels, an adjacent pixel is assigned to S_dark if its brightness value I_{p→x} is smaller than the brightness value I_p of the reference pixel p minus the threshold t, to S_bright if its brightness value is larger than I_p plus the threshold t, and to S_similar if its brightness lies within the threshold of I_p. The neighboring pixels of all feature point candidates of the image can be classified by Equation (2), in which S_{p→x} denotes the class of an adjacent pixel x of the candidate p.

[Equation 2]

$$S_{p\to x} = \begin{cases} d\ (\text{dark}), & I_{p\to x} \le I_p - t \\ s\ (\text{similar}), & I_p - t < I_{p\to x} < I_p + t \\ b\ (\text{bright}), & I_p + t \le I_{p\to x} \end{cases}$$
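A minimal Python/NumPy sketch of the candidate test follows; the circle offsets and the contiguous-arc segment test follow the FAST formulation of Non-Patent Document 4, and the function name and n = 12 default are illustrative.

    import numpy as np

    # offsets of the 16 circle pixels of Fig. 4 (radius-3 Bresenham circle),
    # numbered 1..16 starting at the top and proceeding clockwise
    CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
              (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

    def is_candidate(img, px, py, t, n=12):
        # img: 2D grayscale array; (px, py) must be at least 3 pixels from the border
        ip = int(img[py, px])
        ring = np.array([int(img[py + dy, px + dx]) for dx, dy in CIRCLE])
        # high-speed test on pixels 1, 5, 9, 13: at least three of the four
        # must be brighter than Ip + t or darker than Ip - t
        compass = ring[[0, 4, 8, 12]]
        if max((compass > ip + t).sum(), (compass < ip - t).sum()) < 3:
            return False
        # full segment test: n contiguous circle pixels all brighter or all
        # darker (ring doubled to handle runs that wrap around)
        for mask in (np.r_[ring, ring] > ip + t, np.r_[ring, ring] < ip - t):
            run = 0
            for m in mask:
                run = run + 1 if m else 0
                if run >= n:
                    return True
        return False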

The second step is to score the pixels detected as feature point candidates using Equation (3). A score is assigned to each candidate because the threshold t used in the first step is a value set arbitrarily by the user, which makes the detected candidates somewhat arbitrary. The score is the maximum threshold t for which the brightness relation between the reference pixel brightness I_p and the adjacent pixel brightnesses I_{p→x} still holds.

[Equation 3]

$$V = \max\Big(\sum_{x\in S_{bright}} |I_{p\to x} - I_p| - t,\ \sum_{x\in S_{dark}} |I_p - I_{p\to x}| - t\Big)$$

In the last step, when a certain point is detected as a feature point, its neighboring points tend to be detected as feature points as well. To solve this problem, a non-maximal suppression step is performed using the score V of Equation 3 computed in the second step: among adjacent feature point candidates, only the one with the highest V value is kept, and the lower-scoring candidates are removed.
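The score of Equation 3 can be sketched as below, reusing the CIRCLE offsets from the previous snippet; non-maximal suppression then keeps a candidate only when its V exceeds those of its adjacent candidates.

    def corner_score(img, px, py, t):
        # V of Equation 3: the larger of the summed absolute brightness
        # differences over S_bright and S_dark, each reduced by t
        ip = int(img[py, px])
        ring = np.array([int(img[py + dy, px + dx]) for dx, dy in CIRCLE])
        v_bright = int(np.sum(ring[ring >= ip + t] - ip)) - t
        v_dark = int(np.sum(ip - ring[ring <= ip - t])) - t
        return max(v_bright, v_dark, 0)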

After the feature points are detected in each image, a descriptor containing the information of each feature point is created. In the present invention, feature points are detected using the FAST method, and each feature point is then described using the descriptor of the speeded-up robust features (SURF) method. Since the FAST detector is faster than the Hessian detector used for feature point detection in the SURF method, the execution speed can be improved. The SURF method improves on the scale invariant feature transform (SIFT) method and is widely used to extract feature points robust to scale and rotation [Non-Patent Document 9].

After the feature points are detected by the FAST method, the main direction of each feature point is first found in order to build the descriptor. To express the feature points robustly against rotation, the Haar wavelet filter is used to calculate the main directions. Fig. 5 is a schematic representation of the Haar wavelet filters in the x-axis and y-axis directions; the black region has a weight of -1 and the white region a weight of +1. The wavelet filter is used to find the magnitude and direction of the gradient: the responses dx and dy of the pixels surrounding a detected feature point are obtained from the Haar wavelet response. The vertical and horizontal components are then summed as vectors within an angular window, the largest resulting vector is defined as the main direction of the feature point, and the remaining vectors are not used.

FIG. 6 shows a rectangular window region divided into 4 x 4 sub-regions based on the main direction at each feature point. Four kinds of feature vector components (Σdx, Σdy, Σ|dx|, Σ|dy|) are calculated in each sub-region, producing a (4 x 4 x 4) = 64-dimensional vector. Using these vectors, a feature point descriptor robust to the rotation and scale of the object can be expressed. Fig. 7 is a result image showing the detected feature points marked on the image as red dots.
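In OpenCV, this FAST-detector-plus-SURF-descriptor combination can be written as follows; note that SURF sits in the opencv-contrib xfeatures2d module and requires a build with non-free algorithms enabled, and the threshold and file name are illustrative.

    import cv2

    gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
    # detect with FAST (fast), describe with SURF (robust to rotation/scale)
    fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
    kps = fast.detect(gray, None)
    surf = cv2.xfeatures2d.SURF_create(extended=False)   # 64-dim descriptors
    kps, desc = surf.compute(gray, kps)
    print(len(kps), desc.shape)   # N keypoints, (N, 64)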

Next, the order of the input images is automatically calculated using the positions of the feature points extracted from the projected images (S30).

The captured images are not always acquired in the correct order. A person can judge the order of two images, but a machine cannot judge it by itself, so an additional process of determining the order of the images is performed. The method according to the present invention determines the order using the positions of the matching feature points between two images: if the matching feature points of one image are located on its right side, that image is likely to be the left one of the pair. The positions of the feature points are computed in both images and the images are arranged in order of this probability. FIG. 8 shows the feature points extracted from two images given in arbitrary order, with the best-matched feature points connected by white lines. In this way, images input in any order can be assembled into a normal panorama while their order is rearranged.
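A simple version of this ordering decision, assuming the mean x-position of the matched keypoints is an adequate proxy for the probability described above, might look like:

    import numpy as np

    def a_left_of_b(kps_a, matches, width_a):
        # if A's matched keypoints cluster in A's right half, A most likely
        # sits to the left of B (their overlap is on A's right edge)
        xs = np.array([kps_a[m.queryIdx].pt[0] for m in matches])
        return xs.mean() > width_a / 2.0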

Next, the feature points are matched using the RANSAC method, and the points having the error are removed (S40).

A random sample consensus (RANSAC) method is used to compute the matching feature points between two images. RANSAC minimizes error by predicting an appropriate model from data mixed with errors and noise: a minimal subset of the whole data is randomly selected to hypothesize a model, and whether the selected feature points fit the prediction model is repeatedly evaluated to obtain the optimal solution. The matching points themselves are determined by computing distances between the 64-dimensional SURF descriptor vectors, and RANSAC then determines the four points that are most accurately matched among the many feature points between the two images.

That is, when the feature points are matched, points with errors are removed using the RANSAC algorithm. Selecting the minimal data at random means randomly selecting four feature points among all feature points detected in the image to calculate the homography described below. Since the selection is random, the positions of the feature points generally do not coincide after the homography is computed and the image is transformed. Here, the model refers to the case where the feature point positions of the pre-transformation image coincide with those of the transformed image. The most accurate homography matrix is obtained by repeating the RANSAC algorithm until the feature point positions before and after transformation are closest (i.e., coincide).
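The loop described here can be sketched as follows (illustrative iteration count and inlier threshold; in practice cv2.findHomography with the cv2.RANSAC flag packages the same idea):

    import cv2
    import numpy as np

    def ransac_homography(src, dst, iters=1000, thresh=3.0):
        # src, dst: (N, 2) float32 arrays of matched feature coordinates
        best_h, best_inliers = None, 0
        for _ in range(iters):
            idx = np.random.choice(len(src), 4, replace=False)  # minimal sample
            try:
                h = cv2.getPerspectiveTransform(src[idx], dst[idx])
            except cv2.error:
                continue  # degenerate (e.g. collinear) sample
            proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), h)
            err = np.linalg.norm(proj.reshape(-1, 2) - dst, axis=1)
            inliers = int((err < thresh).sum())  # points that fit the model
            if inliers > best_inliers:
                best_h, best_inliers = h, inliers
        return best_h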

Next, a homography between images is calculated, and a panorama image is generated by coordinate transformation of an image using the calculated homography (S50).

Knowing the coordinates of the four points that are most accurately matched in both images, a homography matrix can be calculated using the direct linear transformation (DLT) method. Homography expresses the relationship between the coordinates of the reference image and the coordinates of the corresponding target image as a matrix, as shown in Equation (4). In Equation (4), x_i, y_i are the existing coordinates, x̂_i, ŷ_i are the post-transformation coordinates obtained using the matrix, and w_i is a scale constant. Expanding Equation (4) and eliminating w_i gives Equation (5). Rearranging Equation (5) to solve for the 3x3 homography matrix yields the matrix form of Equation (6). The homography entries h_1 to h_9 can be obtained from Equation (6), and the coordinates of the existing image can then be transformed using Equation (4).

[Equation 4]

$$w_i \begin{bmatrix} \hat{x}_i \\ \hat{y}_i \\ 1 \end{bmatrix} = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}$$

[Equation 5]

$$\hat{x}_i = \frac{h_1 x_i + h_2 y_i + h_3}{h_7 x_i + h_8 y_i + h_9},\qquad \hat{y}_i = \frac{h_4 x_i + h_5 y_i + h_6}{h_7 x_i + h_8 y_i + h_9}$$

[Equation 6]

$$\begin{bmatrix} x_i & y_i & 1 & 0 & 0 & 0 & -\hat{x}_i x_i & -\hat{x}_i y_i & -\hat{x}_i \\ 0 & 0 & 0 & x_i & y_i & 1 & -\hat{y}_i x_i & -\hat{y}_i y_i & -\hat{y}_i \end{bmatrix}_{i=1,\dots,n} \begin{bmatrix} h_1 \\ \vdots \\ h_9 \end{bmatrix} = 0$$

As described above, x_i, y_i are the coordinates of the existing feature points before transformation, and x̂_i, ŷ_i are the coordinates of the feature points after the matrix transformation. In Equation (6), n is the number of feature points taken from the RANSAC algorithm; preferably, n is 4, the four feature points required by the RANSAC algorithm as described above. That is, the homography can be calculated by substituting the coordinates of the four feature points of the image into the matrices.

When the original image is transformed using the homography matrix, the coordinates are changed, which is the coordinate of the panoramic image.
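A direct implementation of Equations 4 to 6 might solve the homogeneous system by SVD, a common choice that the patent does not prescribe:

    import numpy as np

    def dlt_homography(pts, pts_hat):
        # stack the two rows of Equation 6 for each correspondence and solve
        # A h = 0 for h = (h1..h9) via SVD; four correspondences suffice
        rows = []
        for (x, y), (xh, yh) in zip(pts, pts_hat):
            rows.append([x, y, 1, 0, 0, 0, -xh * x, -xh * y, -xh])
            rows.append([0, 0, 0, x, y, 1, -yh * x, -yh * y, -yh])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
        h = vt[-1].reshape(3, 3)   # right singular vector of smallest value
        return h / h[2, 2]         # normalize so that h9 = 1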

Next, the generated panorama image is corrected using the linear weight function (S60).

When matching a plurality of images to generate a panoramic image, an unnatural boundary line may appear after matching because the illumination and viewpoint differ for each input image. The linear weighting function of FIG. 9 is used to compensate for this discontinuity in boundary lines and colors [Non-Patent Document 7]. By assigning different weights to the pixel positions of the overlapping matching regions, the boundary line of the matching region can be removed and a natural panoramic image generated.

The linear weighting function removes the boundary caused by the color difference between two images when they are matched. The weight expresses how much of each image's color is included: a weight of 0 means the color is not included, a weight of 1 means it is fully included, and the weights of the two images always sum to 1. For example, assume the left image has a color value of 50 and the right image 70. The color value at the center of the matching region is 50 x 0.5 + 70 x 0.5 = 60; that is, the center color value of the blended image is 60. In this way, the weight of the left image decreases from 1 to 0 going from left to right across the overlap, while the weight of the right image decreases from 1 to 0 going from right to left.
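A sketch of this blend over an aligned overlap strip of two color images follows; at the center column the weight is 0.5, reproducing the 50 x 0.5 + 70 x 0.5 = 60 example above.

    import numpy as np

    def blend_overlap(left_strip, right_strip):
        # strips: equal-size H x W x 3 arrays covering the matching region;
        # column-wise linear weights: left falls 1 -> 0, right rises 0 -> 1,
        # and the two weights sum to 1 at every pixel
        w = left_strip.shape[1]
        alpha = np.linspace(1.0, 0.0, w).reshape(1, w, 1)
        out = alpha * left_strip.astype(np.float64) \
              + (1.0 - alpha) * right_strip.astype(np.float64)
        return out.astype(np.uint8)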

Next, the effect of the present invention will be described in more detail through experiments with reference to FIGS. 10 to 17.

The experimental environment was an Intel i5 CPU, 8 GB RAM, and a GeForce GTX 460 graphics card, with the method implemented in Visual Studio 2010. OpenCV 2.4.6 was used to extract feature points from the images. The images used in the experiments were downloaded from the following site.

http://mpac.ee.ntu.edu.tw/~sutony/vfx_stitching/pano.htm#Download

Experiments were carried out to verify natural panorama generation regardless of the order and direction of the input images, the effect of the cylinder projection, and the processing speed of each feature point extraction method.

FIG. 10A shows the left and right input images of 800x600 size used to generate a panoramic image. FIG. 10B shows input images in arbitrary direction and order; a natural panoramic result image is generated as shown in FIG. 11. FIGS. 12A and 12B are results obtained by inputting four images each of 324x484 size: when a plurality of images is matched before the cylinder projection, distortion occurs as shown. FIGS. 13A and 13B show the result images after the cylinder projection; the distortion disappears when multiple images are matched, and a natural result image is generated. FIG. 14 is a panoramic image generated by inputting 10 images in random order and applying the proposed method. Even when many images were matched, panoramic images could be generated faster and more naturally than with the conventional methods.

FIG. 15 compares the processing speeds on two images of 800x600 size when generating panoramic images using the existing SIFT and SURF methods and the FAST method according to the present invention. The execution time of the FAST method is significantly reduced, particularly in the feature point extraction stage; overall, the FAST method is about 10 times faster than the SIFT method. The table of FIG. 16 shows the processing speed per panoramic image when 10 panoramic images of 324x484 size are generated using the FAST method according to the present invention: making the 10 panoramic images takes about 3.5 seconds, much faster than the SIFT and SURF methods. The table of FIG. 17 shows execution speeds measured while varying the image size and sequence length: the larger the images and the more images matched, the longer the overall execution time.

The present invention has described a panoramic image generation method using the FAST method, which extracts feature points faster than the conventional SIFT and SURF methods. By adding a correction process using cylinder projection and a color blending method, a more natural panoramic image can be generated, and panoramic images can be generated from arbitrary images regardless of the order and direction of the input. A natural panoramic image was created by matching about 10 images.

Although the present invention has been described in detail with reference to the above embodiments, it is needless to say that the present invention is not limited to the above-described embodiments, and various modifications may be made without departing from the spirit of the present invention.

10: photographed image 20: computer terminal
30: Program system

Claims (11)

A panoramic image generation method using FAST, which generates a panoramic image from a plurality of photographed images photographed in a plurality of directions,
(a) converting the photographed images into a projected image in plane coordinates in accordance with a cylinder coordinate system;
(b) extracting feature points from a projected image through a feature from accelerated segment test (FAST) method;
(d) matching the feature points using the RANSAC method and eliminating the error points; And
(e) calculating a homography between the projected images, and performing coordinate transformation and matching of the images using the homography to generate a panoramic image,
wherein, in the step (b), the main direction of each extracted feature point is calculated using a Haar wavelet filter, a rectangular window region is formed based on the main direction and divided into sub-regions, and feature vectors are calculated in each divided sub-region and expressed as a feature point descriptor.
The method according to claim 1,
wherein, in the step (a), forward warping is performed from the plane coordinate system of the photographed images to a cylinder coordinate system, backward warping is performed on the forward-warped cylinder coordinates, and the projected image is acquired by applying a linear interpolation method.
3. The method of claim 2,
Wherein the backward warping is performed according to Equation (1).
[Equation 1]
$$x' = f\tan\theta + x_c,\qquad y' = \frac{y - y_c}{s}\cdot\frac{f}{\cos\theta} + y_c,\qquad \theta = \frac{x - x_c}{s}$$

(Here, (x, y) are the cylinder coordinates, (x', y') are the plane coordinates, f is the focal length of the image, x_c and y_c are the center coordinates of the cylinder coordinate system, and s is a variable that determines the scale of the projected image.)
The method according to claim 1,
wherein, in the step (b), for each pixel of the projected image and a plurality of peripheral pixels of the pixel, if the number of peripheral pixels whose brightness difference from the pixel is greater than a predetermined threshold value is greater than or equal to a predetermined number, the pixel is detected as a feature point candidate; and if detected feature point candidates are adjacent to each other, only the feature point candidate having the greatest brightness difference from its neighboring pixels is detected as a feature point and the remaining feature point candidates are removed.
5. The method of claim 4,
wherein, in the step (b), the peripheral pixels are selected as the four pixels positioned horizontally and vertically among the sixteen pixels arranged on the circumference of a circle spaced apart from the corresponding pixel.
5. The method of claim 4,
wherein, in the step (b), a feature point score V is given to each feature point candidate by Equation (2), and if there are adjacent feature point candidates whose feature point score V is higher, the feature point candidates of lower score are removed.
[Equation 2]
$$V = \max\Big(\sum_{x\in S_{bright}} |I_{p\to x} - I_p| - t,\ \sum_{x\in S_{dark}} |I_p - I_{p\to x}| - t\Big)$$

(Here, among the 16 adjacent pixels, a pixel is assigned to S_dark if its brightness value I_{p→x} is less than the brightness value I_p of the reference pixel p minus the threshold t, and to S_bright if its brightness value is greater than I_p plus the threshold t.)
delete

The method of claim 1,
further comprising: (c) before the step (d), calculating the order of the images using the positions of the feature points matched between two of the photographed images, wherein the two photographed images are determined to overlap with each other in a portion where a predetermined number or more of matching feature points are located, thereby determining the order of the photographed images.
The method according to claim 1,
wherein, in the step (d), feature point data of two images are randomly selected to predict a virtual model of the two images, whether the extracted feature points fit the predicted virtual model is repeatedly determined to obtain matching feature points, and a predetermined number of the best-fitting feature points are selected while the remaining feature points are excluded.
The method according to claim 1,
wherein the step (e) comprises obtaining a homography matrix using the extracted feature points, and transforming the coordinates of the projected image into the coordinates of the panoramic image using the obtained homography matrix.

The method according to claim 1,
The method further comprises: (f) correcting the generated panoramic image using a linear weight function.
KR1020150116079A 2015-08-18 2015-08-18 A panorama image generation method using FAST algorithm KR101692227B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150116079A KR101692227B1 (en) 2015-08-18 2015-08-18 A panorama image generation method using FAST algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150116079A KR101692227B1 (en) 2015-08-18 2015-08-18 A panorama image generation method using FAST algorithm

Publications (1)

Publication Number Publication Date
KR101692227B1 true KR101692227B1 (en) 2017-01-03

Family

ID=57797096

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150116079A KR101692227B1 (en) 2015-08-18 2015-08-18 A panorama image generation method using FAST algorithm

Country Status (1)

Country Link
KR (1) KR101692227B1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428211A (en) * 2017-02-15 2018-08-21 阿里巴巴集团控股有限公司 Processing method, device and the machine readable media of image
KR20190014771A (en) * 2017-08-03 2019-02-13 (주)아이피티브이코리아 Method and system for stiching ultra high resolution image
CN112884635A (en) * 2021-01-25 2021-06-01 中交广州航道局有限公司 Submarine environment visualization method and device based on ROV carrying dual-frequency forward-looking sonar
CN113658080A (en) * 2021-08-23 2021-11-16 宁波棱镜空间智能科技有限公司 Method and device for geometric correction of line-scanning cylinder based on feature point matching
CN114119437A (en) * 2021-11-10 2022-03-01 哈尔滨工程大学 GMS-based image stitching method for improving moving object distortion
KR20230060888A (en) * 2021-10-28 2023-05-08 재단법인대구경북과학기술원 Method and apparatus for stitching medical images
KR102552326B1 (en) 2023-01-16 2023-07-06 (주)글로벌시스템스 A making method of big landscape photographs using multiple image panoramas about an area of surveillance
US11715178B2 (en) 2020-08-24 2023-08-01 Samsung Electronics Co., Ltd. Method and apparatus for generating image
KR102680900B1 (en) * 2023-12-13 2024-07-04 주식회사 싸인텔레콤 Apparatus and method for generating and transmitting real-time autonomous cooperative driving support data consisting of neighboring vehicle detection data and location measurement data from panoramic image data

Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
[Non-Patent Document 1] M. Brown and D. G. Lowe, "Automatic panoramic image stitching using invariant features," International Journal of Computer Vision, Vol. 74, No. 1, Dec. 2006.
[Non-Patent Document 2] R. Szeliski, "Image alignment and stitching: a tutorial," Computer Graphics and Vision, Vol. 2, No. 1, pp. 15-16, Jan. 2006.
[Non-Patent Document 3] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 2-8, Jun. 2008.
[Non-Patent Document 4] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," European Conference on Computer Vision, Vol. 1, pp. 430-443, 2006.
[Non-Patent Document 5] L. Moisan, P. Moulon, and P. Monasse, "Automatic homographic registration of a pair of images, with a contrario elimination of outliers," Image Processing On Line (IPOL), pp. 2-3, May 2012.
[Non-Patent Document 6] K-W. Kwon, A-Y. Lee, and U. Oh, "Panoramic image composition algorithm through scaling and rotation invariant features," Information Processing Society Journal, Vol. 17, No. 5, Jun. 2010.
[Non-Patent Document 7] P. J. Burt and E. H. Adelson, "A multiresolution spline with application to image mosaics," ACM Transactions on Graphics, Vol. 2, No. 4, pp. 2-5, Oct. 1983.
[Non-Patent Document 8] R. Szeliski and H. Y. Shum, "Creating Full View Panoramic Image Mosaics and Environment Maps," Computer Graphics, pp. 251-258, Aug. 1997.
[Non-Patent Document 9] David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, Vol. 60, No. 2, pp. 5-16, Jan. 2004.
Ebtsam Adel et al., "Image Stitching based on Feature Extraction Techniques: A Survey," International Journal of Computer Applications, Vol. 99, No. 6, August 2014 *
Si-Young Park, Jong-Ho Kim, and Ji-Sang Yoo, "High-speed panorama generation through image feature point tracking," Korea Institute of Communication Sciences 2015 Winter Conference, pp. 312-314, Jan. 2015 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428211A (en) * 2017-02-15 2018-08-21 阿里巴巴集团控股有限公司 Processing method, device and the machine readable media of image
KR20190014771A (en) * 2017-08-03 2019-02-13 (주)아이피티브이코리아 Method and system for stiching ultra high resolution image
KR101990491B1 (en) * 2017-08-03 2019-06-20 (주)아이피티브이코리아 Method and system for stiching ultra high resolution image
US11715178B2 (en) 2020-08-24 2023-08-01 Samsung Electronics Co., Ltd. Method and apparatus for generating image
CN112884635A (en) * 2021-01-25 2021-06-01 中交广州航道局有限公司 Submarine environment visualization method and device based on ROV carrying dual-frequency forward-looking sonar
CN113658080B (en) * 2021-08-23 2023-12-22 宁波棱镜空间智能科技有限公司 Linear scanning cylinder geometric correction method and device based on characteristic point matching
CN113658080A (en) * 2021-08-23 2021-11-16 宁波棱镜空间智能科技有限公司 Method and device for geometric correction of line-scanning cylinder based on feature point matching
KR20230060888A (en) * 2021-10-28 2023-05-08 재단법인대구경북과학기술원 Method and apparatus for stitching medical images
KR102655362B1 (en) * 2021-10-28 2024-04-04 재단법인대구경북과학기술원 Method and apparatus for stitching medical images
CN114119437A (en) * 2021-11-10 2022-03-01 哈尔滨工程大学 GMS-based image stitching method for improving moving object distortion
CN114119437B (en) * 2021-11-10 2024-05-14 哈尔滨工程大学 GMS-based image stitching method for improving distortion of moving object
KR102552326B1 (en) 2023-01-16 2023-07-06 (주)글로벌시스템스 A making method of big landscape photographs using multiple image panoramas about an area of surveillance
KR102680900B1 (en) * 2023-12-13 2024-07-04 주식회사 싸인텔레콤 Apparatus and method for generating and transmitting real-time autonomous cooperative driving support data consisting of neighboring vehicle detection data and location measurement data from panoramic image data

Similar Documents

Publication Publication Date Title
KR101692227B1 (en) A panorama image generation method using FAST algorithm
US11361459B2 (en) Method, device and non-transitory computer storage medium for processing image
US10306141B2 (en) Image processing apparatus and method therefor
US10726539B2 (en) Image processing apparatus, image processing method and storage medium
US9262811B2 (en) System and method for spatio temporal video image enhancement
EP3104331A1 (en) Digital image manipulation
US20060171687A1 (en) Generation of still image from a plurality of frame images
Adel et al. Image stitching system based on ORB feature based technique and compensation blending
US11398007B2 (en) Video generation device, video generation method, program, and data structure
KR20130112311A (en) Apparatus and method for reconstructing dense three dimension image
CN110505398B (en) Image processing method and device, electronic equipment and storage medium
CN114612352A (en) Multi-focus image fusion method, storage medium and computer
JP6604908B2 (en) Image processing apparatus, control method thereof, and control program
JP5878451B2 (en) Marker embedding device, marker detecting device, marker embedding method, marker detecting method, and program
WO2008102898A1 (en) Image quality improvement processig device, image quality improvement processig method and image quality improvement processig program
CN113298187A (en) Image processing method and device, and computer readable storage medium
CN110557556A (en) Multi-object shooting method and device
KR101105675B1 (en) Method and apparatus of inpainting for video data
JP6006675B2 (en) Marker detection apparatus, marker detection method, and program
GB2585197A (en) Method and system for obtaining depth data
KR20160000533A (en) The method of multi detection and tracking with local feature point for providing information of an object in augmented reality
CN107251089B (en) Image processing method for motion detection and compensation
JP6118295B2 (en) Marker embedding device, marker detection device, method, and program
JP7110397B2 (en) Image processing device, image processing method and image processing program
JP6006676B2 (en) Marker embedding device, marker detecting device, marker embedding method, marker detecting method, and program

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20191202

Year of fee payment: 4