CN111415300A - Splicing method and system for panoramic image - Google Patents
Splicing method and system for panoramic image
- Publication number
- CN111415300A (application CN202010380754.XA)
- Authority
- CN
- China
- Prior art keywords
- feature points
- scale
- image
- gray
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T5/20—Image enhancement or restoration by the use of local operators
- G06T5/70—
- G06T7/11—Region-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06V10/757—Matching configurations of points or features
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
- G06T2207/10004—Still image; Photographic image
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20024—Filtering details
- G06T2210/61—Scene description
Abstract
The invention provides a method and system for stitching panoramic images. Original images acquired by different image acquisition devices are converted into grayscale images, and iterative downsampling based on a Gaussian scale pyramid is applied to the grayscale images to obtain a scale-pyramid construction map of each grayscale image. Feature points and their coordinate information are acquired from the grayscale images with the FAST (Features from Accelerated Segment Test) algorithm. In response to the density of matched feature points in different grayscale images exceeding a preset range, the regions containing the matched feature points are subdivided according to the number and density of the matches, and the matched feature points are screened with a random sample consensus (RANSAC) algorithm. A relative coordinate relationship between the feature points and the original images is then established, and the originals are stitched using the feature-point coordinates and this relationship to obtain an initial composite image. Finally, Gaussian filtering smooths the initial composite image to generate the panoramic image. The method greatly improves the efficiency of panoramic image stitching.
Description
Technical Field
The invention relates to the technical field of image stitching, and in particular to a method and a system for stitching panoramic images.
Background
In the early development of image fusion, fusion relied mainly on post-processing and can be roughly divided, by emphasis, into a simple image-stitching stage and a panoramic VR stage. In the stitching stage, two lower-resolution images are combined into a single image with higher resolution and a wider field of view, used chiefly to extend the visual range and display regional scenes. As stitching technology gradually improved, the second stage of image fusion focused mainly on synthesizing static panoramas: pre-captured images are post-processed into a panoramic image projected onto a spherical or cylindrical model, widely used for scenic-spot displays, map street views, and panoramic presentations of companies and schools.
The pipeline from multi-camera capture to a complete panoramic image consists mainly of image registration and image fusion, and every link affects the quality of the final stitching result. Stitching based on feature matching, thanks to its excellent performance, has gradually become a research hotspot and the mainstream, but it still has limitations, chiefly two problems. (1) Real-time performance: a panorama combines many image sequences, and this huge data volume demands that the stitching algorithm run in extremely short time. Since image registration is the most time-consuming step of stitching, choosing a registration method with high real-time performance is the key to improving overall real-time performance. Moreover, the input generally contains many repeated frames, whose redundant computation further slows processing, so some logic is needed to reduce repeated calculation. (2) Visual effect: because image acquisition is affected by external factors such as uneven illumination, changes in shooting angle, and noise interference, choosing registration and fusion algorithms with better accuracy and eliminating visible seam traces is the key to the final stitched image.
Disclosure of Invention
The invention provides a method and a system for stitching panoramic images, aiming to solve the technical problems of low real-time operating efficiency and poor stitching effect in prior-art image-stitching technology.
In one aspect, the present invention provides a method for panoramic image stitching, comprising the steps of:
s1: converting original images acquired by different image acquisition devices into grayscale images, and performing iterative downsampling on the grayscale images based on a Gaussian scale pyramid to obtain a scale-pyramid construction map of each grayscale image;
s2: acquiring feature points and their coordinate information in the grayscale images using the FAST (Features from Accelerated Segment Test) algorithm, where a feature point must be a maximum or minimum compared with the corresponding positions in any two adjacent scale layers of the scale-pyramid construction map;
s3: in response to the density of matched feature points in different grayscale images exceeding a preset range, subdividing the regions containing the matched feature points according to the number and density of the matches, and screening the matched feature points with a random sample consensus (RANSAC) algorithm;
s4: establishing a relative coordinate relationship between the feature points and the original images, and stitching the original images using the feature-point coordinates and this relationship to obtain an initial composite image;
s5: smoothing the initial composite image with Gaussian filtering to generate the panoramic image.
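The overall flow of steps s1–s5 can be sketched end to end as follows. This is a minimal illustration, not the patent's implementation: all function names are assumptions, a crude 2x subsample stands in for Gaussian blur plus resampling, and the FAST, RANSAC, and blending stages are left as commented stubs marking where the real logic would go.

```python
def to_grayscale(rgb):
    """s1 (part): convert an RGB image (nested lists of (r, g, b)) to grayscale."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb]

def build_scale_pyramid(gray, levels=5):
    """s1: iterative downsampling (plain 2x subsampling here, in place of
    Gaussian convolution followed by resampling)."""
    pyramid = [gray]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        pyramid.append([row[::2] for row in prev[::2]])
    return pyramid

def stitch_panorama(images):
    """s1-s5 glue code; the heavy stages are stubs."""
    grays = [to_grayscale(img) for img in images]
    pyramids = [build_scale_pyramid(g) for g in grays]
    # s2: detect FAST feature points that are extrema across adjacent layers
    # s3: subdivide dense regions and screen matches with RANSAC
    # s4: map feature coordinates back to the originals and stitch
    # s5: smooth the seam of the composite with Gaussian filtering
    return pyramids
```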
Preferably, the scale-pyramid construction map in step S1 is obtained as follows: the grayscale image is iteratively convolved with a Gaussian convolution kernel and downsampled, forming a Gaussian scale pyramid whose images grow progressively blurrier from top to bottom. The constructed Gaussian scale pyramid gives the FAST algorithm better adaptability.
Further preferably, the downsampling is as follows: the scale-pyramid construction map has five layers, where the first layer is the original grayscale image; the second layer is obtained by 1.5x downsampling of the first layer, the third by 2x downsampling of the first layer, the fourth by 1.5x downsampling of the second layer, and the fifth by 2x downsampling of the third layer. This downsampling scheme forms a Gaussian scale pyramid whose images grow progressively blurrier and whose scales progressively increase from top to bottom, which is convenient for the FAST algorithm.
Preferably, acquiring the feature points in the grayscale images with the FAST algorithm in step S2 specifically comprises:
s21: performing corner detection on each layer of the scale-pyramid construction map with an improved FAST algorithm to obtain the corner information of each layer;
s22: applying non-maximum suppression to each layer to obtain candidate feature points;
s23: locating the scale and coordinate position of each candidate feature point.
Further preferably, obtaining the candidate feature points specifically comprises:
filtering out a center pixel when, on the circular neighborhood of any pixel of the grayscale image, the absolute grayscale differences between the center pixel and the two circle pixels lying on the y-axis through the center are both smaller than a first threshold;
marking the center pixel as a candidate feature point when at least one of those two absolute differences exceeds the first threshold and, among the four circle pixels lying on the y-axis and x-axis through the center, at least 3 differ from the center pixel by the first threshold or more; the first threshold is set to 30.
Preferably, step S3 specifically comprises:
s31: dividing the different grayscale images into several regions; when the number of matched feature points in a region is smaller than a second threshold, set to 25, screening those matched points directly with the RANSAC algorithm;
s32: when the number of matched feature points in a region exceeds the second threshold, continuing to subdivide the region and iterating step S31 to build a clustered point set, randomly deleting matched feature points of the different grayscale images block by block so that the number of matched points in the clustered set stays below a third threshold, set to 150.
Preferably, step S4 specifically comprises: mapping the coordinate information of the feature points in the grayscale images onto the original images, and stitching the original images that contain matched feature points according to the matched feature points in the different originals and the relative coordinate relationship. Through this relative-coordinate mapping, the images can be stitched on the basis of the same feature points.
Preferably, each pixel value at the seam of the initial composite image is the average of its neighboring pixels. Using the pixel average at the seam achieves a smooth transition there.
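This seam rule can be sketched minimally as follows; the column-wise seam and the one-pixel left/right neighborhood are assumptions, since the text does not specify the neighborhood.

```python
def smooth_seam(img, seam_col):
    """Replace each pixel in the seam column of a grayscale image (nested
    lists) with the average of its immediate left and right neighbors,
    smoothing the transition across the stitch line."""
    out = [row[:] for row in img]
    for r, row in enumerate(img):
        out[r][seam_col] = (row[seam_col - 1] + row[seam_col + 1]) // 2
    return out
```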
According to a second aspect of the invention, a computer-readable storage medium is proposed, on which one or more computer programs are stored which, when executed by a computer processor, implement the method described above.
According to a third aspect of the present invention, there is provided a stitching system for panoramic images, the system comprising:
a preprocessing unit: converting original images acquired by different image acquisition devices into grayscale images, and performing iterative downsampling on the grayscale images based on a Gaussian scale pyramid to obtain a scale-pyramid construction map of each grayscale image;
a feature point acquisition unit: acquiring feature points and their coordinate information in the grayscale images using the FAST algorithm, where a feature point is a maximum or minimum compared with the corresponding positions in any two adjacent scale layers of the scale-pyramid construction map;
a matching unit: in response to the density of matched feature points in different grayscale images exceeding a preset range, subdividing the regions containing the matched feature points according to the number and density of the matches, and screening the matched feature points with a RANSAC algorithm;
a stitching unit: establishing a relative coordinate relationship between the feature points and the original images, and stitching the original images using the feature-point coordinates and this relationship to obtain an initial composite image;
a smoothing unit: smoothing the initial composite image with Gaussian filtering to generate the panoramic image.
The invention provides a method and a system for stitching panoramic images that overcome the FAST algorithm's lack of scale invariance by constructing a Gaussian scale pyramid, solving the mismatch problem that arises with large scale changes, which are frequently encountered in image stitching. Subdividing the regions that contain matched feature points according to their number and density makes each registration step faster, simplifies the computation, and provides good stability, balancing the effect and the efficiency of feature-point registration and giving the stitching process good adaptivity.
Drawings
The accompanying drawings are included to provide a further understanding of the embodiments and are incorporated in and constitute a part of this specification. They illustrate embodiments and, together with the description, serve to explain the principles of the invention. Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings, in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of a stitching method for panoramic images according to an embodiment of the present application;
FIG. 3 is a flow diagram of a method for feature point acquisition in accordance with an exemplary embodiment of the present application;
FIG. 4 is a flow chart of a method of feature point registration for a particular embodiment of the present application;
FIG. 5 is a block diagram of a stitching system for panoramic images according to an embodiment of the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which the stitching method for panoramic images of the embodiments of the present application may be applied.
As shown in FIG. 1, system architecture 100 may include a data server 101, a network 102, and a host server 103. Network 102 serves as a medium for providing a communication link between data server 101 and host server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The host server 103 may be a server that provides various services, for example a data processing server that processes information uploaded by the data server 101. The data processing server may perform the stitching of the panoramic image.
It should be noted that the stitching method for the panoramic image provided in the embodiment of the present application is generally executed by the host server 103, and accordingly, the apparatus for the stitching method for the panoramic image is generally disposed in the host server 103.
The data server and the host server may each be hardware or software. When implemented as hardware, each may be a distributed cluster of multiple servers or a single server; when implemented as software, each may be multiple pieces of software or software modules (e.g., for providing distributed services) or a single piece of software or a single module.
It should be understood that the number of data servers, networks, and host servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 shows a flowchart of a stitching method for panoramic images according to an embodiment of the present application. As shown in fig. 2, the method comprises the steps of:
s201: original images acquired by different image acquisition devices are converted into gray-scale images, iterative down-sampling processing is carried out on the gray-scale images based on a Gaussian scale pyramid, and a scale pyramid structural image of the gray-scale images is acquired. The original image is converted into the gray-scale image, so that subsequent characteristic points can be conveniently obtained, and the efficiency is improved.
In a specific embodiment, the Gaussian scale pyramid is built by iteratively convolving the grayscale image of the original with a Gaussian convolution kernel and repeatedly downsampling, until a Gaussian scale pyramid is formed whose images grow progressively blurrier and whose scales progressively increase from top to bottom. Preferably, the first layer of the pyramid is the grayscale image of the original; the second layer is obtained by 1.5x downsampling of the first layer, the third by 2x downsampling of the first layer, the fourth by 1.5x downsampling of the second layer, and the fifth by 2x downsampling of the third layer. Alternatively, other layer counts or sampling factors may be used — for example, a bottom-up Gaussian scale pyramid built by upsampling, with a sampling factor of 0.5 or another value — and a suitable pyramid structure and sampling factor chosen according to the image scene and application requirements will achieve the same technical effect.
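A minimal sketch of the five-layer pyramid described in this embodiment: only the layer relationships (1.5x and 2x factors) come from the text, while nearest-neighbor sampling stands in for the Gaussian-blur-then-subsample step, and the function names are assumptions.

```python
def downsample(img, factor):
    """Shrink a 2D grayscale image (nested lists) by the given factor
    using nearest-neighbor sampling."""
    h, w = len(img), len(img[0])
    nh, nw = int(h / factor), int(w / factor)
    return [[img[int(r * factor)][int(c * factor)] for c in range(nw)]
            for r in range(nh)]

def five_layer_pyramid(gray):
    """Layer 1: original grayscale image; layers 2/3: 1.5x/2x downsamples
    of layer 1; layers 4/5: 1.5x/2x downsamples of layers 2/3."""
    l1 = gray
    l2 = downsample(l1, 1.5)
    l3 = downsample(l1, 2.0)
    l4 = downsample(l2, 1.5)
    l5 = downsample(l3, 2.0)
    return [l1, l2, l3, l4, l5]
```

For a 36x36 input this yields layers of side 36, 24, 18, 16, and 9.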
S202: and acquiring the characteristic points and the coordinate information of the characteristic points in the gray-scale image by utilizing a FAST algorithm of an accelerated segmentation test, wherein the characteristic points meet the maximum or minimum value in the comparison of the relative positions of any two adjacent scale layers in the scale pyramid structural image. By utilizing the excellent feature extraction performance and the excellent operation speed of the FAST algorithm for the accelerated segmentation test, the efficiency and the accuracy of feature extraction are ensured.
In a specific embodiment, fig. 3 shows a flowchart of a method for feature point acquisition in a specific embodiment of the present application. As shown in fig. 3, the method specifically includes the following steps:
S301: Corner detection is performed on each layer of the scale-pyramid construction map with an improved FAST algorithm to obtain the corner information of each layer.
In a specific embodiment, the conventional FAST corner detection algorithm proceeds as follows: take any pixel point P of the image as the center and consider a circle of radius 3 pixels around it, on which there are 16 pixel points. Define a threshold t and compute the grayscale difference between each of the 16 circle points and the center point; if the number of points whose difference exceeds t is greater than n, P is regarded as a feature point. Here an improved FAST algorithm is used for corner detection. Specifically, the center pixel is filtered out when the absolute grayscale differences between it and the two circle pixels on the y-axis through the center are both smaller than a first threshold; the center pixel is marked as a candidate feature point when at least one of those two differences exceeds the first threshold and at least 3 of the four circle pixels on the y-axis and x-axis through the center differ from the center by the first threshold or more. Preferably the first threshold is set to 30; alternatively, another grayscale difference value such as 20, 35, or 40 may be chosen according to the actual application, with the same technical effect.
S302: Non-maximum suppression is applied to each layer to obtain the candidate feature points. Spatial non-maximum suppression is performed on each layer's corner map: a candidate feature point is an extremum whose FAST score is larger (or smaller) than that of all 26 neighboring points in the scale space; other points are excluded.
In a specific embodiment, the FAST score of a point is computed from the absolute grayscale differences between the center pixel and each of the 16 pixels on its circular neighborhood, each reduced by the first threshold; the maximum accumulated sum is taken as the score.
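The improved pre-filter described above can be sketched as follows. The vertical pair is checked first so most pixels are rejected after two comparisons; only then is the full four-point axis test run. The exact circle offsets and tie handling at the threshold are assumptions, since the text leaves them unspecified.

```python
# Circle points (dx, dy) at radius 3 on the y-axis and x-axis through the center.
Y_PAIR = [(0, -3), (0, 3)]
AXIS_4 = [(0, -3), (0, 3), (-3, 0), (3, 0)]

def is_candidate(img, x, y, t=30):
    """Improved-FAST pre-test for the pixel at (x, y) of a 2D grayscale
    image, with first threshold t (30 in the text)."""
    c = img[y][x]
    # Fast rejection: both vertical circle points close to the center value.
    if all(abs(img[y + dy][x + dx] - c) < t for dx, dy in Y_PAIR):
        return False
    # Full test: at least 3 of the 4 axis points must differ by >= t.
    hits = sum(abs(img[y + dy][x + dx] - c) >= t for dx, dy in AXIS_4)
    return hits >= 3
```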
S303: and positioning the scale and the coordinate position of the candidate feature point. The method comprises the steps of firstly carrying out two-dimensional quadratic function difference operation on an extreme point and corresponding points of an upper layer and a lower layer in the x direction and the y direction, and then carrying out one-dimensional difference operation on a scale direction to obtain the accurate coordinate position and the scale of the extreme point.
It should be appreciated that although the FAST algorithm is used here to obtain feature points, other feature-extraction algorithms such as SIFT or SURF could alternatively be used with the same technical effect. SIFT extracts salient features with strong matching ability and remains stable under scale change, noise, rotation, illumination, and other interference; its strong robustness makes it very widely used. The SURF detector is based on the determinant of the second-order Hessian matrix: the Hessian is computed for every pixel at different scales, local maxima are selected as candidate feature points, and the positions of the extrema are the positions of the image feature points.
S203: and in response to the fact that the density degree of the matched feature points in different gray-scale images is larger than a preset range, reducing the area containing the matched feature points based on the data volume and the density degree of the matched feature points, and screening the matched feature points by using a random sampling consistency algorithm. By means of the step, the efficiency of feature point registration can be greatly improved, and the method has good adaptability in practical use.
In a specific embodiment, fig. 4 is a flowchart illustrating a method of feature point registration of a specific embodiment of the present application, as shown in fig. 4, the registration method includes the following steps:
S401: dividing the different gray-scale images into a plurality of regions, and in response to the number of matched feature points in a region being smaller than a second threshold, directly screening the matched feature points in that region with the random sampling consistency algorithm, the second threshold being set to 25. Preferably, each gray-scale image is divided into 9 sub-regions, which effectively simplifies the calculation. Alternatively, the number of divided regions may be set to 4, 6 or another number according to the required amount of calculation, and the technical effect of the present invention can still be achieved.
S402: in response to the number of matched feature points in a region being greater than the second threshold, continuing to divide the region and iterating step S401 to build a clustering point set, and randomly deleting matched feature points of the different gray-scale images block by block until the number of matched feature points in the clustering point set is smaller than a third threshold, the third threshold being set to 150. Preferably, the calculation scale is reduced by repeatedly dividing the image into 9 regions and iterating, so the calculation is faster; the constraint of the third threshold balances effect against efficiency and gives good adaptivity.
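Steps S401 and S402 can be sketched as a recursive partition followed by random thinning. The function names, the recursion depth guard, and the fixed random seed are illustrative assumptions; the thresholds 25 and 150 follow the text:

```python
import random

def partition(matches, bounds, cap=25, grid=3, depth=4):
    # Recursively split the region into grid*grid sub-regions (S401/S402)
    # until each holds fewer than `cap` matched points; `depth` bounds the
    # recursion for degenerate inputs.  matches: [(x, y)], bounds: (x0, y0, x1, y1).
    if len(matches) < cap or depth == 0:
        return [matches] if matches else []
    x0, y0, x1, y1 = bounds
    w, h = (x1 - x0) / grid, (y1 - y0) / grid
    out = []
    for i in range(grid):
        for j in range(grid):
            sub = (x0 + i * w, y0 + j * h, x0 + (i + 1) * w, y0 + (j + 1) * h)
            inside = [p for p in matches
                      if sub[0] <= p[0] < sub[2] and sub[1] <= p[1] < sub[3]]
            out.extend(partition(inside, sub, cap, grid, depth - 1))
    return out

def cap_cluster(points, limit=150, seed=0):
    # Randomly delete matches until the cluster point set is below the
    # third threshold.
    rng = random.Random(seed)
    points = list(points)
    while len(points) >= limit:
        points.pop(rng.randrange(len(points)))
    return points
```

Each returned region is then small enough for the random sampling consistency screening to run in predictable time.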
It should be noted that in a traditional image stitching process, a random sampling consistency algorithm is generally applied directly to feature point registration: the number of matched feature points is unconstrained, the number of iterations is uncertain, and the calculation time rises sharply as the number of matched points increases, growing from the microsecond level even to the second level. For real-time panoramic image stitching, excessive fluctuation or excessive length of the algorithm processing time seriously affects the stability of the system; real-time panoramic fusion therefore requires not only an efficient algorithm but also relatively stable processing time. To overcome these defects, the matching results are screened so as to control the data scale; when the matched points are already few, the random sampling consistency algorithm is applied directly, so that the method adapts well under all conditions.
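A minimal RANSAC sketch makes the runtime point concrete: a fixed iteration budget bounds the processing time no matter how many matches are supplied. Line fitting stands in for the homography estimation used in real stitching, purely for brevity; the names and parameters are assumptions:

```python
import random

def ransac_line(points, iters=200, tol=1.0, seed=0):
    # Minimal RANSAC: fit y = a*x + b, keeping the model with the largest
    # consensus set.  The fixed `iters` budget keeps the running time
    # bounded regardless of how many matched points are fed in.
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for (x, y) in points if abs(a * x + b - y) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

With the region partitioning above keeping each input small, the per-region cost of this loop stays nearly constant, which is the stability property the text argues for.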
S204: establishing a relative coordinate relationship between the feature points and the original images, and stitching the original images by using the coordinate information of the feature points and the relative coordinate relationship to obtain an initial synthesized image. The coordinate information of the feature points in the gray-scale images is mapped to the original images, and the plurality of original images containing the matched feature points are stitched based on the matched feature points in the different original images and the relative coordinate relationship.
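The coordinate mapping can be sketched as follows. The pure-translation offset model is a deliberate simplification of the patent's general relative coordinate relationship, and the function names are hypothetical:

```python
def to_original_coords(x, y, layer_scale):
    # Map coordinates found on a down-sampled pyramid layer back to the
    # original image; layer_scale is the cumulative down-sampling factor.
    return x * layer_scale, y * layer_scale

def estimate_offset(matches):
    # Estimate a pure-translation offset between two original images from
    # matched feature pairs [((x1, y1), (x2, y2)), ...].  A real stitcher
    # would estimate a homography instead; the average shift suffices to
    # illustrate the relative coordinate relationship.
    n = len(matches)
    dx = sum(x2 - x1 for (x1, _), (x2, _) in matches) / n
    dy = sum(y2 - y1 for (_, y1), (_, y2) in matches) / n
    return dx, dy
```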
S205: smoothing the initial synthesized image by using Gaussian filtering to generate a panoramic image. After the images are transformed into the same coordinate system and stitched, the directly stitched image often suffers from visible seams, uneven illumination, double images, ghosting and the like, so some smoothing is needed: the average value of adjacent pixels is used as the pixel value at the seam, and a Gaussian filtering operation is performed on the whole image to increase its smoothness.
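A sketch of the seam treatment just described, assuming two equally sized image strips and a 1-D binomial approximation of the Gaussian kernel (both simplifications not specified by the text):

```python
import numpy as np

def blend_seam(left, right, overlap):
    # Average the overlapping columns of two strips (the "average of
    # adjacent pixels" at the seam), then smooth the joined result with a
    # small Gaussian-like kernel, applied along rows for brevity.
    seam = (left[:, -overlap:] + right[:, :overlap]) / 2.0
    joined = np.hstack([left[:, :-overlap], seam, right[:, overlap:]])
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])  # binomial approx. of a Gaussian
    k /= k.sum()
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, joined)
```

A production pipeline would instead blur with a separable 2-D Gaussian over the full mosaic, but the one-axis version shows the seam being flattened into a gradual transition.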
With continued reference to FIG. 5, which illustrates a stitching system framework for panoramic images in accordance with an embodiment of the present invention, the system comprises a preprocessing unit 501, a feature point acquisition unit 502, a matching unit 503, a splicing unit 504 and a smoothing processing unit 505.
In a specific embodiment, the preprocessing unit 501 converts original images acquired by different image acquisition devices into gray-scale images and performs iterative down-sampling on the gray-scale images based on a Gaussian scale pyramid to obtain a scale pyramid construction diagram of each gray-scale image. The feature point acquisition unit 502 acquires feature points and their coordinate information in the gray-scale images by using the FAST algorithm of an accelerated segmentation test, the feature points being maxima or minima relative to the corresponding positions in any two adjacent scale layers of the scale pyramid construction diagram. The matching unit 503, in response to the density of matched feature points in the different gray-scale images being greater than a preset range, reduces the regions containing the matched feature points based on their data volume and density, and screens the matched feature points with a random sampling consistency algorithm. The splicing unit 504 establishes a relative coordinate relationship between the feature points and the original images, and stitches the original images by using the coordinate information of the feature points and the relative coordinate relationship to obtain an initial synthesized image. The smoothing processing unit 505 smooths the initial synthesized image with Gaussian filtering to generate a panoramic image.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a liquid crystal display (LCD), a speaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as necessary. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable storage medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware.
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: converting original images acquired by different image acquisition equipment into gray-scale images, and performing iterative down-sampling processing on the gray-scale images based on a Gaussian scale pyramid to acquire a scale pyramid construction diagram of the gray-scale images; acquiring feature points and coordinate information of the feature points in the gray-scale image by using a FAST algorithm of an accelerated segmentation test, wherein the feature points meet the condition that the relative positions of any two adjacent scale layers in the scale pyramid structural image are maximum or minimum values in comparison; in response to the fact that the density degree of the matched feature points in different gray-scale images is larger than a preset range, reducing the area containing the matched feature points based on the data volume and the density degree of the matched feature points, and screening the matched feature points by utilizing a random sampling consistency algorithm; establishing a relative coordinate relationship between the characteristic points and the original image, and splicing the original image by utilizing the coordinate information of the characteristic points and the relative coordinate relationship to obtain an initial synthetic image; and performing smoothness processing on the initial synthetic image by using Gaussian filtering to generate a panoramic image.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (10)
1. A stitching method for panoramic images is characterized by comprising the following steps:
S1: converting original images acquired by different image acquisition equipment into gray-scale images, and performing iterative down-sampling processing on the gray-scale images based on a Gaussian scale pyramid to acquire scale pyramid construction images of the gray-scale images;
S2: acquiring feature points in the gray scale image and coordinate information of the feature points by using a FAST algorithm of an accelerated segmentation test, wherein the feature points meet the condition that the relative position of any two adjacent scale layers in the scale pyramid construction image is a maximum or minimum value in comparison;
S3: in response to the fact that the density degree of matching feature points in different gray-scale images is larger than a preset range, reducing a region containing the matching feature points based on the data volume and the density degree of the matching feature points, and screening the matching feature points by utilizing a random sampling consistency algorithm;
S4: establishing a relative coordinate relationship between the feature points and the original images, and splicing the original images by using the coordinate information of the feature points and the relative coordinate relationship to obtain an initial synthetic image;
S5: and performing smoothness processing on the initial synthetic image by using Gaussian filtering to generate a panoramic image.
2. The stitching method for panoramic images according to claim 1, wherein the scale pyramid construction diagram in the step S1 specifically includes: iteratively convolving the gray-scale image with a Gaussian convolution kernel and down-sampling, forming a Gaussian scale pyramid structure in which the pictures become gradually more blurred from top to bottom.
3. The stitching method for panoramic images according to claim 1 or 2, characterized in that the down-sampling specifically comprises: the scale pyramid construction diagram comprises a five-layer structure, wherein the first layer is the original image layer of the gray-scale image; the second layer is obtained by 1.5-times down-sampling of the first layer; the third layer by 2-times down-sampling of the first layer; the fourth layer by 1.5-times down-sampling of the second layer; and the fifth layer by 2-times down-sampling of the third layer.
4. The stitching method for panoramic images according to claim 1, wherein the step S2 of obtaining the feature points in the gray-scale map by using a FAST algorithm for an accelerated segmentation test specifically comprises:
s21: carrying out corner detection on each layer of structure in the scale pyramid construction diagram by using an improved FAST algorithm to obtain corner information of each layer;
s22: carrying out non-maximum suppression on each layer of structure to obtain candidate feature points;
s23: and positioning the scale and the coordinate position of the candidate characteristic point.
5. The stitching method for the panoramic image according to claim 4, wherein the obtaining of the candidate feature points specifically comprises the following steps:
in response to the absolute gray differences between the center pixel point and the two pixel points that lie on the circular neighborhood of any pixel point of the gray-scale image in the y-axis direction through the center both being smaller than a first threshold, filtering out the center pixel point;
and in response to the absolute gray difference between the center pixel point and either of the two pixel points on the circular neighborhood in the y-axis direction through the center being greater than the first threshold, and at least 3 of the four pixel points on the circular neighborhood lying in the y-axis and x-axis directions through the center having an absolute gray difference from the center pixel point greater than the first threshold, marking the center pixel point as the candidate feature point, wherein the first threshold is set to 30.
6. The stitching method for panoramic images according to claim 1, wherein the step S3 specifically includes:
S31: dividing different gray maps into a plurality of areas, and responding to the number of the matched feature points in the areas being smaller than a second threshold value, directly screening the matched feature points in the areas by using the random sampling consistency algorithm, wherein the second threshold value is set to be 25;
s32: in response to the number of the matching feature points in the region being greater than a second threshold, continuing to segment the region, and iterating the step S31 to construct a cluster point set, and randomly deleting the matching feature points of different gray maps in blocks, where the number of the matching feature points in the cluster point set is less than a third threshold, and the third threshold is set to 150.
7. The stitching method for panoramic images according to claim 1, wherein the step S4 specifically includes: mapping the coordinate information of the feature points in the gray-scale image to the original image, and splicing a plurality of original images containing the matched feature points based on the matched feature points in different original images and the relative coordinate relationship.
8. The stitching method for panoramic images according to claim 1, wherein the pixel value at the stitching of the initial synthesized image is an average value of neighboring pixels.
9. A computer-readable storage medium having one or more computer programs stored thereon, which when executed by a computer processor perform the method of any one of claims 1 to 8.
10. A stitching system for panoramic images, characterized in that the system comprises:
a preprocessing unit: converting original images acquired by different image acquisition equipment into gray-scale images, and performing iterative down-sampling processing on the gray-scale images based on a Gaussian scale pyramid to acquire scale pyramid construction images of the gray-scale images;
a feature point acquisition unit: acquiring feature points in the gray scale image and coordinate information of the feature points by using a FAST algorithm of an accelerated segmentation test, wherein the feature points meet the condition that the relative position of any two adjacent scale layers in the scale pyramid structural image is a maximum or minimum value in comparison;
a matching unit: in response to the fact that the density degree of matching feature points in different gray-scale images is larger than a preset range, reducing a region containing the matching feature points based on the data volume and the density degree of the matching feature points, and screening the matching feature points by utilizing a random sampling consistency algorithm;
splicing unit: establishing a relative coordinate relationship between the feature points and the original images, and splicing the original images by using the coordinate information of the feature points and the relative coordinate relationship to obtain an initial synthetic image;
a smoothing unit: and performing smoothness processing on the initial synthetic image by using Gaussian filtering to generate a panoramic image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010380754.XA CN111415300A (en) | 2020-05-08 | 2020-05-08 | Splicing method and system for panoramic image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111415300A true CN111415300A (en) | 2020-07-14 |
Family
ID=71495012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010380754.XA Pending CN111415300A (en) | 2020-05-08 | 2020-05-08 | Splicing method and system for panoramic image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111415300A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102006425A (en) * | 2010-12-13 | 2011-04-06 | 交通运输部公路科学研究所 | Method for splicing video in real time based on multiple cameras |
CN109493278A (en) * | 2018-10-24 | 2019-03-19 | 北京工业大学 | A kind of large scene image mosaic system based on SIFT feature |
CN110246168A (en) * | 2019-06-19 | 2019-09-17 | 中国矿业大学 | A kind of feature matching method of mobile crusing robot binocular image splicing |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112270643A (en) * | 2020-09-04 | 2021-01-26 | 深圳市菲森科技有限公司 | Three-dimensional imaging data splicing method and device, electronic equipment and storage medium |
CN112308782A (en) * | 2020-11-27 | 2021-02-02 | 深圳开立生物医疗科技股份有限公司 | Panoramic image splicing method and device, ultrasonic equipment and storage medium |
CN113066012A (en) * | 2021-04-23 | 2021-07-02 | 深圳壹账通智能科技有限公司 | Scene image confirmation method, device, equipment and storage medium |
CN113066012B (en) * | 2021-04-23 | 2024-04-09 | 深圳壹账通智能科技有限公司 | Scene image confirmation method, device, equipment and storage medium |
CN113225606A (en) * | 2021-04-30 | 2021-08-06 | 上海哔哩哔哩科技有限公司 | Video barrage processing method and device |
CN113225606B (en) * | 2021-04-30 | 2022-09-23 | 上海哔哩哔哩科技有限公司 | Video barrage processing method and device |
CN115359114A (en) * | 2022-08-16 | 2022-11-18 | 中建一局集团第五建筑有限公司 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
CN115359114B (en) * | 2022-08-16 | 2023-07-25 | 中建一局集团第五建筑有限公司 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
CN116402693A (en) * | 2023-06-08 | 2023-07-07 | 青岛瑞源工程集团有限公司 | Municipal engineering image processing method and device based on remote sensing technology |
CN116402693B (en) * | 2023-06-08 | 2023-08-15 | 青岛瑞源工程集团有限公司 | Municipal engineering image processing method and device based on remote sensing technology |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200714 |