CN116416125A - Image stitching method and terminal for image sequence - Google Patents

Info

Publication number
CN116416125A
Authority
CN
China
Prior art keywords
image
points
images
point
frames
Prior art date
Legal status
Pending
Application number
CN202310531116.7A
Other languages
Chinese (zh)
Inventor
陈兵
邹兴文
Current Assignee
Tumaisi Chengdu Technology Co ltd
Original Assignee
Tumaisi Chengdu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tumaisi Chengdu Technology Co ltd filed Critical Tumaisi Chengdu Technology Co ltd
Priority to CN202310531116.7A priority Critical patent/CN116416125A/en
Publication of CN116416125A publication Critical patent/CN116416125A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20216 Image averaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses an image stitching method and terminal for an image sequence. Two frames of images with an overlapping area are acquired; feature points of the two frames are extracted and matched to obtain a mapping matrix between the two frames; coordinate transformation is performed on the two frames according to the mapping matrix, and their overlapping area is determined; the weight of each pixel point in the overlapping area of each frame is calculated through two weight-calculation passes, and the overlapping areas of the two frames are weighted and averaged according to these weights to obtain the stitched image. Because the weights are computed in two passes and the fusion uses a weighted average, images that would otherwise show holes during stitching can be fused smoothly and seamlessly, improving the stitching effect; repeating the above steps stitches an entire image sequence.

Description

Image stitching method and terminal for image sequence
This application is a divisional application of the invention patent application No. 201910561644.0, filed on June 26, 2019 and entitled "Image stitching method and terminal".
Technical Field
The present invention relates to the field of image processing, and in particular to an image stitching method and terminal for an image sequence.
Background
In recent years, with the rapid development of industrial technology and machine vision, the demand for higher-quality, higher-resolution images keeps growing. For example, in the field of medical research, cells of various forms need to be observed, but the field angle of a microscope is small, so the image acquired by the camera captures only a local feature and much information cannot be observed. Such problems arise not only in medical research but also in many fields such as military reconnaissance, aerial photography, geodetic mapping, virtual reality and intelligent traffic control, where large field-of-view images are required to view the needed information. At present, replacing an ordinary lens with a wide-angle lens for image acquisition can yield an image with a larger field of view, but as the field angle of the lens grows, the image distortion it introduces grows as well, seriously affecting the quality of the acquired image, and more images occupy more storage space. Therefore, in order to solve the above problems, the method of image stitching was conceived.
Image stitching technology stitches sequence images of the same scene, each with a small view angle and low resolution, into a seamless large field-of-view image of high quality and high resolution through image matching and fusion. The stitched image contains all the information of the sequence images and solves the problems arising in the above applications.
However, when the existing image stitching technology performs image fusion, the weight is calculated only once and a simple weighted sum is applied, so the stitched image is often poor and holes appear at the stitched seam.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an image stitching method and terminal for an image sequence that can smoothly and seamlessly fuse images that would otherwise show holes during stitching, improving the stitching effect.
In order to solve the technical problems, the invention adopts a technical scheme that:
an image stitching method for an image sequence, comprising the steps of:
s1, acquiring two frames of images with overlapping areas;
s2, respectively extracting characteristic points of the two frames of images, and matching the characteristic points of the two frames of images to obtain a mapping matrix between the two frames of images;
s3, respectively carrying out coordinate transformation on the two frames of images according to the mapping matrix, and determining an overlapping area of the two frames of images;
s4, calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes, and weighting and averaging the overlapping areas of the two frames according to these weights to obtain the stitched image;
s5, according to the stitched image, intercepting a frame of fused image starting from the start position of the overlapping area of the two frames as a new first frame image, acquiring a new frame of image in real time as a new second frame image, and returning to step S2.
In order to solve the technical problems, the invention adopts another technical scheme that:
an image stitching terminal for an image sequence, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the above method for stitching images for an image sequence when executing the computer program.
The invention has the beneficial effects that: in the process of stitching two frames, the weight of each pixel point in the overlapping area of each frame image is calculated through two weight-calculation passes, and the overlapping areas of the two frames are weighted and averaged according to these weights to obtain the stitched image. Because the weights are computed in two passes and the fusion uses a weighted average, images with holes in the stitching process can be fused smoothly and seamlessly, improving the image stitching effect; repeating the above steps realizes image stitching for an image sequence.
Drawings
FIG. 1 is a flow chart of steps of an image stitching method for an image sequence according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image stitching terminal for an image sequence according to an embodiment of the present invention;
FIG. 3 is a schematic view of the area division structure according to the embodiment of the present invention;
FIG. 4 is a schematic diagram of a coordinate system rotation according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of 8 directions of sub-region gradient information according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a first frame image according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a second frame image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an image after fusing a first frame image and a second frame image according to an embodiment of the present invention;
description of the reference numerals:
1. an image stitching terminal for an image sequence; 2. a memory; 3. a processor.
Detailed Description
In order to describe the technical contents, the achieved objects and effects of the present invention in detail, the following description will be made with reference to the embodiments in conjunction with the accompanying drawings.
Referring to fig. 1, an image stitching method for an image sequence includes the steps of:
s1, acquiring two frames of images with overlapping areas;
s2, respectively extracting characteristic points of the two frames of images, and matching the characteristic points of the two frames of images to obtain a mapping matrix between the two frames of images;
s3, respectively carrying out coordinate transformation on the two frames of images according to the mapping matrix, and determining an overlapping area of the two frames of images;
and S4, calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes, and weighting and averaging the overlapping areas of the two frames according to these weights to obtain the stitched image.
From the above description, the beneficial effects of the invention are as follows: in the process of stitching two frames, the weight of each pixel point in the overlapping area of each frame image is calculated through two weight-calculation passes, and the overlapping areas of the two frames are weighted and averaged according to these weights to obtain the stitched image. Because the weights are computed in two passes and the fusion uses a weighted average, images with holes in the stitching process can be fused smoothly and seamlessly, improving the image stitching effect.
Further, in step S4, calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes includes:
opening up two weight spaces with the same size as the overlapping area of the two frames of images, for storing the weights of the pixel points in the overlapping areas of the two frames respectively, and initializing the weight space data to 0;
respectively setting data corresponding to non-zero pixel points in an overlapping region of the images of the corresponding frames in a weight space corresponding to the two frames of images as a first preset weight value, and setting data corresponding to boundaries in the overlapping region of the images of the corresponding frames as a second preset weight value;
respectively calculating twice weights for pixel points of an overlapping area in each frame of image:
in the first weight calculation, the following is performed for each pixel point in the overlapping region, from the first row to the last row and from the first column to the last column:
taking out four neighborhood points in turn, clockwise starting from the neighborhood point to the left of the pixel point, adding a preset value to each of the four neighborhood points, taking the minimum of the weights of the four neighborhood points with preset values added and of the current pixel point as the first weight of the current pixel point, and updating the weight of the current pixel point in the corresponding weight space to this first weight;
in the second weight calculation, the following is performed for each pixel point in the overlapping region, from the last row to the first row and from the last column to the first column:
taking out four neighborhood points in turn, clockwise starting from the neighborhood point to the right of the pixel point, adding a preset value to each of the four neighborhood points, and taking the minimum of the weights of the four neighborhood points with preset values added and of the current pixel point as the second weight of the current pixel point, the second weight being used for the weighted average.
From the above description, the two weight passes start from neighborhood points above and to the left, and below and to the right, of the pixel point respectively; four neighborhood points are taken out in turn, the weight of the pixel point is determined from them, and the second weight is computed on the basis of the first-pass weight data, which ensures the smoothness and seamlessness of the fused image after weighted averaging and avoids holes appearing during stitching.
Further, the extracting the feature points of the image in the step S2 includes:
carrying out Gaussian filtering on the image, and calculating a gradient value of each pixel point in the filtered image;
calculating a response value of each pixel point according to the gradient value;
calculating a response value maximum value point in a preset neighborhood of each pixel point according to the response value, and taking the pixel point corresponding to the response value maximum value point as a characteristic point;
and according to the response values, the characteristic points are arranged in a descending order, and the first N characteristic points are taken out to serve as the characteristic points of the image.
From the above description, it can be seen that the response values of the pixels are determined according to the gradient values of the pixels, the response values are ordered, and the first several feature points with larger response values are taken as the feature points of the image, so that the accuracy of the determined feature points of the image is ensured.
Further, the step of arranging the feature points in a descending order according to the response value, and the step of extracting the first N feature points as the feature points of the image includes:
selecting a preset multiple of the maximum response value as a first threshold;
determining a characteristic point with a response value larger than the first threshold value as a first characteristic point;
calculating the distance between the characteristic points in the characteristic points with the response value smaller than or equal to the first threshold value, and taking the characteristic points with the distance smaller than the second threshold value as second characteristic points;
and taking the first characteristic point and the second characteristic point as characteristic points of the image.
From the above description, a threshold is determined and suitable feature points are selected based on it; among the feature points that do not satisfy the threshold condition, those satisfying a distance condition are retained by checking the distance between feature points, so feature points with strong response values that accurately characterize the image can be extracted uniformly and robustly.
Further, the step S2 of matching the feature points of the two frames of images to obtain a mapping matrix between the two frames of images includes:
s21, executing the feature points of each image:
establishing a horizontal rectangular coordinate system centered on the feature point, and taking the pixel points in the n×n neighborhood of the feature point;
dividing the coordinate system into one region every preset angle A in the counterclockwise direction, so that the pixel points in the n×n neighborhood are divided into 360/A regions;
counting and accumulating the gradient amplitude of each pixel point in each region to obtain an accumulated amplitude value;
selecting the angle of the region corresponding to the maximum accumulated amplitude value as the main direction;
rotating the coordinate system about the feature point until it is consistent with the main direction, and taking the pixel points in the m×m neighborhood of the feature point;
dividing the pixel points in the n×n neighborhood into i×i sub-regions;
calculating the gradient information of each sub-region, and taking the gradient information of the i×i sub-regions as the descriptor of the feature point;
s22, calculating the Euclidean distance between the descriptor of each feature point of one frame and the descriptors of the feature points of the other frame, and taking the two feature points for which the ratio of the minimum descriptor distance to the second-minimum descriptor distance is smaller than a preset ratio as a coarse matching point pair;
s23, according to the rough matching point pairs, two groups of point pairs are randomly taken out, and a mapping matrix of the two groups of point pairs is calculated;
s24, selecting one group of points in the rough matching point pair as a first group of points, and calculating a second group of points mapped by the first group of points according to the mapping matrix;
s25, calculating residual errors between the first group of points and the second group of points, and counting the number of points meeting preset residual error conditions;
s26, judging whether the number of points meeting the residual condition is larger than the preset number, if yes, the mapping matrix is the mapping matrix between the two frames of images, otherwise, returning to the step S23.
From the above description, describing the feature points of an image with descriptors allows points in a rotated image to be described accurately. Computing the distance ratio between the feature points of the two frames from their descriptors removes most incorrect feature point pairs while retaining most correct pairs, and randomly sampling point groups and computing the residuals before and after mapping through the mapping matrix removes the remaining incorrect matching pairs entirely; the correct mapping matrix between the two frames is finally computed from the correct pairs, improving the matching degree between the two frames and guaranteeing correct subsequent fusion.
Referring to fig. 2, an image stitching terminal for an image sequence includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
s1, acquiring two frames of images with overlapping areas;
s2, respectively extracting characteristic points of the two frames of images, and matching the characteristic points of the two frames of images to obtain a mapping matrix between the two frames of images;
s3, respectively carrying out coordinate transformation on the two frames of images according to the mapping matrix, and determining an overlapping area of the two frames of images;
and S4, calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes, and weighting and averaging the overlapping areas of the two frames according to these weights to obtain the stitched image.
From the above description, the beneficial effects of the invention are as follows: in the process of stitching two frames, the weight of each pixel point in the overlapping area of each frame image is calculated through two weight-calculation passes, and the overlapping areas of the two frames are weighted and averaged according to these weights to obtain the stitched image. Because the weights are computed in two passes and the fusion uses a weighted average, images with holes in the stitching process can be fused smoothly and seamlessly, improving the image stitching effect.
Further, in step S4, calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes includes:
opening up two weight spaces with the same size as the overlapping area of the two frames of images, respectively storing the weights of pixel points in the overlapping area of the two frames of images, and initializing the weight space data to 0;
respectively setting data corresponding to non-zero pixel points in an overlapping region of the images of the corresponding frames in a weight space corresponding to the two frames of images as a first preset weight value, and setting data corresponding to boundaries in the overlapping region of the images of the corresponding frames as a second preset weight value;
respectively calculating twice weights for pixel points of an overlapping area in each frame of image:
in the first weight calculation, the following is performed for each pixel point in the overlapping region, from the first row to the last row and from the first column to the last column:
taking out four neighborhood points in turn, clockwise starting from the neighborhood point to the left of the pixel point, adding a preset value to each of the four neighborhood points, taking the minimum of the weights of the four neighborhood points with preset values added and of the current pixel point as the first weight of the current pixel point, and updating the weight of the current pixel point in the corresponding weight space to this first weight;
in the second weight calculation, the following is performed for each pixel point in the overlapping region, from the last row to the first row and from the last column to the first column:
taking out four neighborhood points in turn, clockwise starting from the neighborhood point to the right of the pixel point, adding a preset value to each of the four neighborhood points, and taking the minimum of the weights of the four neighborhood points with preset values added and of the current pixel point as the second weight of the current pixel point, the second weight being used for the weighted average.
From the above description, the two weight passes start from neighborhood points above and to the left, and below and to the right, of the pixel point respectively; four neighborhood points are taken out in turn, the weight of the pixel point is determined from them, and the second weight is computed on the basis of the first-pass weight data, which ensures the smoothness and seamlessness of the fused image after weighted averaging and avoids holes appearing during stitching.
Further, the extracting the feature points of the image in the step S2 includes:
carrying out Gaussian filtering on the image, and calculating a gradient value of each pixel point in the filtered image;
calculating a response value of each pixel point according to the gradient value;
calculating a response value maximum value point in a preset neighborhood of each pixel point according to the response value, and taking the pixel point corresponding to the response value maximum value point as a characteristic point;
and according to the response values, the characteristic points are arranged in a descending order, and the first N characteristic points are taken out to serve as the characteristic points of the image.
From the above description, it can be seen that the response values of the pixels are determined according to the gradient values of the pixels, the response values are ordered, and the first several feature points with larger response values are taken as the feature points of the image, so that the accuracy of the determined feature points of the image is ensured.
Further, the step of arranging the feature points in a descending order according to the response value, and the step of extracting the first N feature points as the feature points of the image includes:
selecting a preset multiple of the maximum response value as a first threshold;
determining a characteristic point with a response value larger than the first threshold value as a first characteristic point;
calculating the distance between the characteristic points in the characteristic points with the response value smaller than or equal to the first threshold value, and taking the characteristic points with the distance smaller than the second threshold value as second characteristic points;
and taking the first characteristic point and the second characteristic point as characteristic points of the image.
From the above description, a threshold is determined and suitable feature points are selected based on it; among the feature points that do not satisfy the threshold condition, those satisfying a distance condition are retained by checking the distance between feature points, so feature points with strong response values that accurately characterize the image can be extracted uniformly and robustly.
Further, the step S2 of matching the feature points of the two frames of images to obtain a mapping matrix between the two frames of images includes:
s21, executing the feature points of each image:
establishing a horizontal rectangular coordinate system centered on the feature point, and taking the pixel points in the n×n neighborhood of the feature point;
dividing the coordinate system into one region every preset angle A in the counterclockwise direction, so that the pixel points in the n×n neighborhood are divided into 360/A regions;
counting and accumulating the gradient amplitude of each pixel point in each region to obtain an accumulated amplitude value;
selecting the angle of the region corresponding to the maximum accumulated amplitude value as the main direction;
rotating the coordinate system about the feature point until it is consistent with the main direction, and taking the pixel points in the m×m neighborhood of the feature point;
dividing the pixel points in the n×n neighborhood into i×i sub-regions;
calculating the gradient information of each sub-region, and taking the gradient information of the i×i sub-regions as the descriptor of the feature point;
s22, calculating the Euclidean distance between the descriptor of each feature point of one frame and the descriptors of the feature points of the other frame, and taking the two feature points for which the ratio of the minimum descriptor distance to the second-minimum descriptor distance is smaller than a preset ratio as a coarse matching point pair;
s23, according to the rough matching point pairs, two groups of point pairs are randomly taken out, and a mapping matrix of the two groups of point pairs is calculated;
s24, selecting one group of points in the rough matching point pair as a first group of points, and calculating a second group of points mapped by the first group of points according to the mapping matrix;
s25, calculating residual errors between the first group of points and the second group of points, and counting the number of points meeting preset residual error conditions;
s26, judging whether the number of points meeting the residual condition is larger than the preset number, if yes, the mapping matrix is the mapping matrix between the two frames of images, otherwise, returning to the step S23.
From the above description, describing the feature points of an image with descriptors allows points in a rotated image to be described accurately. Computing the distance ratio between the feature points of the two frames from their descriptors removes most incorrect feature point pairs while retaining most correct pairs, and randomly sampling point groups and computing the residuals before and after mapping through the mapping matrix removes the remaining incorrect matching pairs entirely; the correct mapping matrix between the two frames is finally computed from the correct pairs, improving the matching degree between the two frames and guaranteeing correct subsequent fusion.
Example 1
Referring to fig. 1, an image stitching method for an image sequence includes the steps of:
s1, acquiring two frames of images with overlapping areas;
specifically, a proper memory space is opened up, a first frame image acquired in real time is stored in a certain position of the memory space, and the starting point coordinate of the first frame image is assumed to be P (x, y);
acquiring a second frame image with an overlapping area with the first frame image in real time by moving a target or a camera;
s2, respectively extracting characteristic points of the two frames of images, and matching the characteristic points of the two frames of images to obtain a mapping matrix between the two frames of images;
firstly, the number of channels of the image is determined; if it is an RGB three-channel image, it is converted into a single-channel gray image according to the gray conversion formula, and if it is already a single-channel image, no conversion is performed;
the RGB to gray image formula is as follows:
Gray=R*0.299+G*0.587+B*0.114;
wherein R, G, B respectively represents the values of three color channels of red, green and blue corresponding to the pixel points in the image, and Gray is the Gray value corresponding to the converted pixel points;
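A minimal sketch of this conversion, assuming an 8-bit NumPy array in RGB channel order (the channel order is an assumption; OpenCV, for instance, loads images as BGR):

```python
import numpy as np

def to_gray(img: np.ndarray) -> np.ndarray:
    # Apply Gray = R*0.299 + G*0.587 + B*0.114 from the description;
    # a single-channel image is returned unchanged.
    if img.ndim == 2:
        return img
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```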
then, Gaussian filtering is performed on the gray image to obtain a filtered image;
calculating a gradient value of each pixel point in the filtered image, including:
calculating the gradient value of each pixel point in the horizontal direction and the vertical direction,
wherein the gradient value of the pixel point (i, j) in the horizontal direction is:
I_x(i,j) = I(i,j) - I(i,j+1);
and the gradient value of the pixel point (i, j) in the vertical direction is:
I_y(i,j) = I(i,j) - I(i+1,j);
calculating a response value of each pixel point according to the gradient values, the response formula being:
I_resp = I_x^2 * I_y^2 - I_x * I_y - k * (I_x^2 + I_y^2)
wherein I_x is the gradient of the pixel point in the horizontal direction, I_y is the gradient of the pixel point in the vertical direction, and k is an adjustment coefficient;
calculating the maximum response point within a preset neighborhood of each pixel point according to the response values, and taking the pixel point corresponding to the maximum response point as a feature point, the preset neighborhood preferably being 3×3;
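A sketch of the response computation and the 3×3 non-maximum suppression as described above (Gaussian filtering is assumed to have been applied already; the default value of k is an assumption, the text only calls it an adjustment coefficient):

```python
import numpy as np

def corner_response(gray: np.ndarray, k: float = 0.04) -> np.ndarray:
    # Forward differences matching I_x(i,j) = I(i,j) - I(i,j+1)
    # and I_y(i,j) = I(i,j) - I(i+1,j).
    I = gray.astype(np.float64)
    Ix = np.zeros_like(I)
    Iy = np.zeros_like(I)
    Ix[:, :-1] = I[:, :-1] - I[:, 1:]
    Iy[:-1, :] = I[:-1, :] - I[1:, :]
    # Response exactly as given in the text.
    return Ix**2 * Iy**2 - Ix * Iy - k * (Ix**2 + Iy**2)

def local_maxima_3x3(resp: np.ndarray):
    # Keep pixels that are the maximum of their 3x3 neighborhood.
    pts = []
    h, w = resp.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if resp[i, j] == resp[i - 1:i + 2, j - 1:j + 2].max():
                pts.append((i, j))
    return pts
```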
the characteristic points are arranged in a descending order according to the response values, and the first N characteristic points are taken out to serve as the characteristic points of the image;
preferably, the step of arranging the feature points in a descending order according to the response value, and the step of extracting the first N feature points as the feature points of the image includes:
selecting a preset multiple of the maximum response value as a first threshold, wherein the preset multiple can be 0-1 times, preferably 0.8 times;
determining characteristic points with response values larger than the first threshold value as first characteristic points, marking the characteristic points as a characteristic point set p1, and marking a set formed by characteristic points with response values smaller than or equal to the first threshold value as a characteristic point set p2;
calculating the distance between every two feature points in the feature point set p2, taking the feature points with the distance smaller than a second threshold value as second feature points, and marking the feature points as feature point sets p3, wherein the second threshold value is preferably 10;
taking the characteristic point set p1 and the characteristic point set p3 as characteristic point sets of the image;
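A sketch of this selection step under one reading of the text: points above the first threshold are kept outright, and among the remaining points those lying within the second-threshold distance of another remaining point are also kept (the pairwise reading of "the distance between every two feature points" is an assumption):

```python
import numpy as np

def select_features(pts, resp, multiple=0.8, dist_thresh=10):
    # Defaults follow the "preferably" values: 0.8 times the maximum
    # response for the first threshold, 10 for the second threshold.
    vals = [resp[p] for p in pts]
    t1 = multiple * max(vals)
    p1 = [p for p, v in zip(pts, vals) if v > t1]    # first feature points
    p2 = [p for p, v in zip(pts, vals) if v <= t1]
    p3 = []                                           # second feature points
    for a in p2:
        if any(b != a and np.hypot(a[0] - b[0], a[1] - b[1]) < dist_thresh
               for b in p2):
            p3.append(a)
    return p1 + p3
```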
s3, respectively carrying out coordinate transformation on the two frames of images according to the mapping matrix, and determining an overlapping area of the two frames of images;
according to the mapping matrix, with the first frame image as reference, the second frame image is transformed into the coordinate system of the first frame image by image interpolation; the start position St(x1, y1) and end position Et(x2, y2) of the overlap of the two frames are calculated from the mapping matrix, and the corresponding overlapping areas are cut out of the two frames according to St(x1, y1) and Et(x2, y2);
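A sketch of this transformation by inverse mapping, assuming a 3×3 homography H that maps second-frame coordinates into the first frame's system; nearest-neighbour lookup stands in for the unspecified "image interpolation":

```python
import numpy as np

def warp_to_first(img2: np.ndarray, H: np.ndarray, out_h: int, out_w: int):
    # For every output pixel, look up the source pixel through H^-1.
    out = np.zeros((out_h, out_w) + img2.shape[2:], dtype=img2.dtype)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    src = Hinv @ np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx = np.rint(src[0] / src[2]).astype(int).reshape(out_h, out_w)
    sy = np.rint(src[1] / src[2]).astype(int).reshape(out_h, out_w)
    valid = (sx >= 0) & (sx < img2.shape[1]) & (sy >= 0) & (sy < img2.shape[0])
    out[valid] = img2[sy[valid], sx[valid]]
    return out
```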
and S4, calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes, and weighting and averaging the overlapping areas of the two frames according to these weights to obtain the stitched image.
In the process of stitching images, after the relative positional relation between the first frame image and the second frame image is found, the four coordinates of the overlapping area of the two frames can be obtained, and the weights corresponding to the overlapping areas of the two frames are then calculated. At this point, two spaces of the same size as the overlapping area of the two frames are opened up to store the weights of the pixel points in the overlapping areas of the two frames respectively, and the weight space data are initialized to 0. In the weight space, the data corresponding to non-zero pixel values in the overlapping area of the first frame image are set to a first preset weight value, preferably 255, and the data corresponding to the boundary of the overlapping area of the first frame image are set to a second preset weight value, preferably 0; likewise, the data corresponding to non-zero pixel values in the overlapping area of the second frame image are set to the first preset weight value, preferably 255, and the data corresponding to the boundary of the overlapping area of the second frame image are set to the second preset weight value, preferably 0. Because the input is an 8-bit three-channel image, the maximum pixel value is 255, hence the value 255; if the image data are not 8-bit, the weight value corresponding to non-zero pixel points can be changed in real time according to the bit depth of the image;
by the initial setting of the weight values, the edges of the two spliced images can be smoothly transited;
calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes comprises the following steps:
in the first weight calculation, from the first row to the last row, from the first column to the last column, each pixel point in the overlapping region is performed:
taking out four neighborhood points in turn, clockwise starting from the neighborhood point to the left of the pixel point, adding preset values to them, preferably adding 1, 2, 1 and 2 to the weights of the four neighborhood points respectively, taking the minimum of the weights of the four neighborhood points with preset values added and of the current pixel point as the first weight of the current pixel point, and updating the weight of the pixel point in the corresponding weight space to this first weight;
the weights are calculated as follows, wherein i and j denote the row and column index of a pixel, I(i, j) denotes the weight corresponding to pixel (i, j), and d1 = 1, d2 = 2; the neighborhood layout for the first pass is:

I(i-1,j-1)+d2   I(i-1,j)+d1   I(i-1,j+1)+d2
I(i,j-1)+d1     I(i,j)

the weight corresponding to pixel (i, j) is then:
I(i,j) = min(I(i-1,j-1)+d2, I(i-1,j)+d1, I(i-1,j+1)+d2, I(i,j-1)+d1, I(i,j));
in the second weight calculation, from the last row to the first row, from the last column to the first column, each pixel point in the overlapping region is performed:
taking out four neighborhood points in turn, clockwise starting from the neighborhood point to the right of the pixel point, adding preset values to them, preferably adding 1, 2, 1 and 2 respectively, taking the minimum of the weights of the four neighborhood points with preset values added and of the current pixel point as the second weight of the current pixel point, and updating the weight of the pixel point in the corresponding weight space to this second weight, the second weight being used for the weighted average;
the weights are calculated as follows; the neighborhood layout for the second pass is:

                I(i,j)        I(i,j+1)+d1
I(i+1,j-1)+d2   I(i+1,j)+d1   I(i+1,j+1)+d2

the weight corresponding to pixel (i, j) is then:
I(i,j) = min(I(i,j+1)+d1, I(i+1,j+1)+d2, I(i+1,j)+d1, I(i+1,j-1)+d2, I(i,j));
by adding d1 or d2 to the weights, sufficient fusion precision is achieved without converting the data to floating point and without having more neighborhood points participate in the calculation, which improves the calculation speed;
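The two passes above amount to a chamfer-style distance transform with increments d1 = 1 and d2 = 2, seeded with 255 at non-zero pixels and 0 at the region border. A minimal sketch over one frame's overlap region (the 8-bit seed of 255 follows the text; single-channel input is assumed for brevity):

```python
import numpy as np

D1, D2 = 1, 2

def two_pass_weights(overlap: np.ndarray) -> np.ndarray:
    h, w = overlap.shape
    W = np.where(overlap != 0, 255, 0).astype(np.int32)
    W[0, :] = W[-1, :] = W[:, 0] = W[:, -1] = 0   # region border
    # First pass, top-left to bottom-right: upper and left neighbors.
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            W[i, j] = min(W[i, j],
                          W[i-1, j-1] + D2, W[i-1, j] + D1,
                          W[i-1, j+1] + D2, W[i, j-1] + D1)
    # Second pass, bottom-right to top-left: lower and right neighbors.
    for i in range(h - 2, 0, -1):
        for j in range(w - 2, 0, -1):
            W[i, j] = min(W[i, j],
                          W[i, j+1] + D1, W[i+1, j+1] + D2,
                          W[i+1, j] + D1, W[i+1, j-1] + D2)
    return W   # second-pass weights, used for the weighted average
```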
according to the second weights, each intercepted overlapping area is multiplied by its corresponding weights; the weighted overlapping areas of the two frames are added and averaged to obtain the fused image, the fused image is placed between the start position St(x1, y1) and the end position Et(x2, y2) of the overlapping area, and the remaining part of the second frame image outside the overlapping area is appended to obtain the new stitched image;
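Reading "added and averaged" as a weight-normalised mean, the blending of the two overlap crops might look as follows (the normalisation by the weight sum is an assumption):

```python
import numpy as np

def fuse_overlap(ov1, ov2, w1, w2):
    # Weighted average of the two overlap crops using the
    # second-pass weights of each frame.
    w1 = w1.astype(np.float64)
    w2 = w2.astype(np.float64)
    if ov1.ndim == 3:                  # broadcast over color channels
        w1, w2 = w1[..., None], w2[..., None]
    denom = np.maximum(w1 + w2, 1e-9)  # avoid division by zero
    return ((ov1 * w1 + ov2 * w2) / denom).astype(ov1.dtype)
```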
and according to the stitched image, a frame of fused image is intercepted starting from the start position St(x1, y1) to replace the previous frame; this image is taken as the new first frame image, a new frame of image is acquired in real time as the new second frame image, and the flow returns to step S2, thereby realizing rapid real-time stitching of a continuously acquired image sequence.
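The S5 loop could then be driven as below; grab_frame, estimate_mapping and stitch_pair are hypothetical names standing in for the capture step and for steps S2 to S4 sketched above, and a vertical scan is assumed so the new first frame is a row crop:

```python
def stitch_sequence(grab_frame, first_frame):
    panorama = first_frame
    prev = first_frame
    while True:
        nxt = grab_frame()                        # S1: real-time capture
        if nxt is None:
            return panorama
        H = estimate_mapping(prev, nxt)           # S2: features + mapping
        panorama, start_row = stitch_pair(panorama, nxt, H)  # S3-S4
        prev = panorama[start_row:]               # S5: crop from overlap start
```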
Example two
The present embodiment is different from the first embodiment in that:
the step S2 of matching the feature points of the two frames of images to obtain a mapping matrix between the two frames of images includes:
s21, executing the feature points of each image:
establishing a horizontal rectangular coordinate system centered on the feature point, and taking the pixel points in the n×n neighborhood of the feature point, wherein n is preferably 11;
dividing the coordinate system into one region every preset angle A in the counterclockwise direction, so that the pixel points in the n×n neighborhood are divided into 360/A regions; preferably A is 10 degrees, giving 36 regions; as shown in FIG. 3, ∠AOB, ∠BOC and ∠COD are each 10 degrees and represent divided regions;
counting the gradient amplitude of each pixel point in each region and accumulating to obtain an accumulated amplitude value;
selecting an angle of a region corresponding to the maximum accumulated amplitude value as a main direction;
rotating the coordinate system about the feature point until it is consistent with the main direction; for example, as shown in FIG. 4, assuming A is 10 degrees and the accumulated amplitude of the third counterclockwise region is the maximum, the angle of that region, 30 degrees, is the main direction, and the coordinate system is rotated counterclockwise by 30 degrees;
and taking the pixel points in the m×m neighborhood of the feature point; preferably, when n is 11, m is taken as √2 times n, rounded up to an integer (the rotated sampling window must be enlarged by a factor of √2 so that the data of the original n×n region can still be completely acquired after the rotation); the purpose of the rotation is that, after the two images are rotated relative to each other, the feature descriptors of the feature points can still be described accurately, and the pixel points of the feature point rotated according to the main direction can be accurately found;
dividing the pixel points in the n×n neighborhood into i×i sub-regions, preferably with i being 4, i.e. four sub-regions in each quadrant;
calculating the gradient information of each sub-region, the gradient information of a sub-region being the gradient information of the central pixel point of that sub-region in 8 directions, obtained by dividing the circle around the central pixel point into 8 directions of 45 degrees each; the gradient information of the i×i sub-regions is taken as the descriptor of the feature point, i.e. the descriptor of each feature point is a 1×(8·i·i)-dimensional vector, namely a vector with 1 row and 8·i·i columns;
as shown in fig. 5, k is a central pixel point in one sub-region, eight arrow directions in the figure are 8 directions, and gradient information of the sub-region corresponding to k is gradient information in the eight arrow directions;
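A simplified sketch of the descriptor with n = 11, A = 10 and i = 4 (per-pixel gradient magnitude and angle are assumed precomputed; for brevity the orientations are rotated by the main direction instead of resampling the rotated m×m neighborhood):

```python
import numpy as np

def describe(mag, ang, pt, n=11, A=10, i=4):
    # mag/ang: per-pixel gradient magnitude and angle in degrees.
    r = n // 2
    y, x = pt
    m = mag[y - r:y + r + 1, x - r:x + r + 1]
    a = ang[y - r:y + r + 1, x - r:x + r + 1] % 360
    # Main direction: strongest of the 360/A angular regions.
    nbins = 360 // A
    hist = [m[(a >= b * A) & (a < (b + 1) * A)].sum() for b in range(nbins)]
    main = int(np.argmax(hist)) * A
    a_rot = (a - main) % 360
    # i x i sub-regions, 8 directions of 45 degrees -> 1 x (8*i*i) vector.
    desc = np.zeros((i, i, 8))
    for u in range(n):
        for v in range(n):
            desc[u * i // n, v * i // n, int(a_rot[u, v] // 45) % 8] += m[u, v]
    return desc.ravel()
```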
s22, calculating the Euclidean distance between the descriptor of each feature point of one frame and the descriptors of the feature points of the other frame, and taking the two feature points for which the ratio of the minimum descriptor distance to the second-minimum descriptor distance is smaller than a preset ratio as a coarse matching point pair, the preset ratio preferably being 0.8;
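A sketch of the ratio test of step S22, with descriptors stacked row-wise and the preferred ratio of 0.8:

```python
import numpy as np

def ratio_match(desc1: np.ndarray, desc2: np.ndarray, ratio=0.8):
    # Accept (i, j) when the nearest descriptor is clearly closer
    # than the second nearest.
    pairs = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        j, j2 = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[j2]:
            pairs.append((i, int(j)))
    return pairs
```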
s23, according to the rough matching point pairs, two groups of point pairs are randomly taken out, and a mapping matrix of the two groups of point pairs is calculated;
s24, selecting one group of points in the rough matching point pair as a first group of points, and calculating a second group of points mapped by the first group of points according to the mapping matrix;
s25, calculating residual errors between the first group of points and the second group of points, and counting the number of points meeting preset residual error conditions;
s26, judging whether the number of points meeting residual conditions is larger than a preset number, if yes, the mapping matrix is the mapping matrix between the two frames of images, otherwise, returning to the step S23;
wherein, steps S23-S26 are specifically as follows:
according to the determined rough matching point pairs, the rough matching point pairs of the two frames of images are respectively a first characteristic point group Lp { Lp1, lp2, … Lpn } and a second characteristic point group Rp { Rp1, rp2, …, rpn };
two sets of point pairs, namely four characteristic points, such as Lp1, rp1 and Lp4, rp4, are randomly selected from corresponding positions in Lp and Rp;
calculating a mapping matrix H according to the two selected point pairs;
selecting one of the feature point groups as the first group of points, such as Rp, and calculating the mapped points of Rp according to the mapping matrix H: Rp' = H^(-1) * Rp;
calculating the residual Errp = Rp - Rp' between Rp and Rp', and counting the number of points in Errp smaller than 4;
if the number of points in Errp smaller than 4 is greater than 0.7 times the number of points in Rp, H is considered the correct mapping matrix; otherwise, two further groups of feature point pairs are randomly selected from the coarse matching point pairs and the flow returns to step S23;
preferably, the mapping matrix with the most points satisfying the residual condition may be taken as the correct mapping matrix;
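A sketch of the sampling loop of steps S23 to S26. Here fit_two_pairs is a hypothetical helper (two point pairs fix at most four degrees of freedom, so the mapping model is assumed similarity-like rather than a full homography), and the residual is taken against the matching Lp points, which appears to be the intent of Errp = Rp - Rp':

```python
import numpy as np

def estimate_mapping(Lp: np.ndarray, Rp: np.ndarray,
                     resid=4, frac=0.7, max_iter=1000):
    # Lp, Rp: (n, 2) arrays of coarse-matched points.
    n = len(Lp)
    for _ in range(max_iter):
        i, j = np.random.choice(n, 2, replace=False)
        H = fit_two_pairs(Lp[[i, j]], Rp[[i, j]])   # hypothetical helper
        # Rp' = H^-1 * Rp, in homogeneous coordinates.
        proj = (np.linalg.inv(H) @ np.c_[Rp, np.ones(n)].T).T
        Rp_mapped = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(Lp - Rp_mapped, axis=1)
        if (err < resid).sum() > frac * n:          # enough inliers
            return H
    raise RuntimeError("no consistent mapping matrix found")
```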
The fused results of the method are shown in FIG. 6 to FIG. 8, wherein FIG. 6 is the first frame image, FIG. 7 is the second frame image, and FIG. 8 is the fused image.
Example III
Referring to fig. 2, an image stitching terminal 1 for an image sequence includes a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3, wherein the processor 3 implements the steps of embodiment one when executing the computer program.
Example IV
Referring to fig. 2, an image stitching terminal 1 for an image sequence includes a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3, wherein the processor 3 implements the steps of embodiment two when executing the computer program.
In summary, according to the image stitching method and terminal for an image sequence provided by the invention, in image matching the feature points are determined based on the response values and the mapping matrix is determined based on the feature points; in image fusion the weights of the overlapping area are determined through two weight-calculation passes, and the final stitched image is obtained by weighted averaging. Determining the feature points of the image based on the response values of the pixel points, with threshold adjustment, realizes uniform and robust extraction of points with strong response values. Describing the feature points allows points in a rotated image to be described accurately; computing the distance ratio between feature points from their descriptors removes most incorrect feature point pairs while retaining most correct pairs, and random sampling finally removes the incorrect matching pairs entirely to obtain the correct mapping matrix. The corner extraction algorithm finds feature points accurately and rapidly, has rotation invariance, and can find mutually matching feature points without pyramid layering of the images. Through the two weight-calculation passes, images with holes in the stitching process can be fused smoothly and seamlessly, and the fusion algorithm can fuse in the horizontal, vertical or any other direction simultaneously, avoiding uneven transition in the horizontal and vertical directions and guaranteeing the stitching effect.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent modifications made by the present invention and the accompanying drawings, or direct or indirect application in the relevant technical field, are included in the scope of the present invention.

Claims (8)

1. An image stitching method for an image sequence, comprising the steps of:
s1, acquiring two frames of images with overlapping areas;
s2, respectively extracting characteristic points of the two frames of images, and matching the characteristic points of the two frames of images to obtain a mapping matrix between the two frames of images;
s3, respectively carrying out coordinate transformation on the two frames of images according to the mapping matrix, and determining an overlapping area of the two frames of images;
s4, calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes, and weighting and averaging the overlapping areas of the two frames according to these weights to obtain the stitched image;
s5, according to the stitched image, intercepting a frame of fused image starting from the start position of the overlapping area of the two frames as a new first frame image, acquiring a new frame of image in real time as a new second frame image, and returning to step S2.
2. The method for image stitching according to claim 1, wherein calculating the weight of each pixel point in the overlapping area of each frame image through two weight-calculation passes in step S4 includes:
opening up two weight spaces with the same size as the overlapping area of the two frames of images, respectively storing the weights of pixel points in the overlapping area of the two frames of images, and initializing the weight space data to 0;
respectively setting data corresponding to non-zero pixel points in an overlapping region of the images of the corresponding frames in a weight space corresponding to the two frames of images as a first preset weight value, and setting data corresponding to boundaries in the overlapping region of the images of the corresponding frames as a second preset weight value;
respectively calculating twice weights for pixel points of an overlapping area in each frame of image:
in the first weight calculation, the following is performed for each pixel point in the overlapping region, from the first row to the last row and from the first column to the last column:
taking out four neighborhood points in turn, clockwise starting from the neighborhood point to the left of the pixel point, adding a preset value to each of the four neighborhood points, taking the minimum of the weights of the four neighborhood points with preset values added and of the current pixel point as the first weight of the current pixel point, and updating the weight of the current pixel point in the corresponding weight space to this first weight;
in the second weight calculation, the following is performed for each pixel point in the overlapping region, from the last row to the first row and from the last column to the first column:
taking out four neighborhood points in turn, clockwise starting from the neighborhood point to the right of the pixel point, adding a preset value to each of the four neighborhood points, and taking the minimum of the weights of the four neighborhood points with preset values added and of the current pixel point as the second weight of the current pixel point, the second weight being used for the weighted average.
3. The method for image stitching according to claim 2, wherein in step S4, according to the weight, the overlapping area of the two frames of images is weighted and averaged, so as to obtain the stitched image specifically as follows:
according to the second weights, multiplying each intercepted overlapping area by its corresponding weights, adding the weighted overlapping areas of the two frames and averaging to obtain the fused image, placing the fused image between the start position and the end position of the overlapping area, and appending the remaining part of the second frame image outside the overlapping area to obtain the new stitched image.
4. The image stitching method for image sequences according to claim 1, wherein the step S3 is specifically:
according to the mapping matrix, with the first frame image as reference, transforming the second frame image into the coordinate system of the first frame image by image interpolation, calculating from the mapping matrix the start position St(x1, y1) and end position Et(x2, y2) of the overlap of the two frames, and cutting out the corresponding overlapping areas from the two frames according to the start position St(x1, y1) and the end position Et(x2, y2).
5. The image stitching method according to claim 1, wherein the extracting feature points of the image in step S2 includes:
carrying out Gaussian filtering on the image, and calculating a gradient value of each pixel point in the filtered image;
calculating a response value of each pixel point according to the gradient value;
calculating a response value maximum value point in a preset neighborhood of each pixel point according to the response value, and taking the pixel point corresponding to the response value maximum value point as a characteristic point;
and according to the response values, the characteristic points are arranged in a descending order, and the first N characteristic points are taken out to serve as the characteristic points of the image.
6. The method according to claim 5, wherein the step of arranging the feature points in a descending order according to the response value, and the step of extracting the first N feature points as the feature points of the image comprises:
selecting a preset multiple of the maximum response value as a first threshold;
determining a characteristic point with a response value larger than the first threshold value as a first characteristic point;
calculating the distance between the characteristic points in the characteristic points with the response value smaller than or equal to the first threshold value, and taking the characteristic points with the distance smaller than the second threshold value as second characteristic points;
and taking the first characteristic point and the second characteristic point as characteristic points of the image.
7. The method for image stitching according to any one of claims 1-6, wherein the matching the feature points of the two frames of images in step S2 to obtain the mapping matrix between the two frames of images includes:
s21, executing the feature points of each image:
establishing a horizontal rectangular coordinate system by taking the characteristic points as the centers, and taking the pixel points in the n+ n neighborhood of the characteristic points;
dividing the coordinate system into an area every preset degree A according to the anticlockwise direction, and dividing the pixel points in the n-n neighborhood into 360/A areas;
counting the gradient amplitude of each pixel point in each region and accumulating to obtain an accumulated amplitude value;
selecting an angle of a region corresponding to the maximum accumulated amplitude value as a main direction;
rotating the coordinate system to a position consistent with the main direction by taking the characteristic point as the center, and taking the pixel points in the m-m neighborhood of the pixel point;
dividing the pixels in the m×m neighborhood into i×i sub-regions;
calculating the gradient information of each sub-region, and taking the gradient information of the i×i sub-regions as the descriptor of the feature point;
S22, calculating the Euclidean distances between the descriptor of each feature point in one frame of image and the descriptors of the feature points in the other frame of image, and taking any two feature points for which the ratio of the smallest to the second-smallest descriptor distance is less than a preset ratio as a coarse matching point pair;
S23, randomly selecting two groups of point pairs from the coarse matching point pairs, and calculating a mapping matrix from them;
S24, selecting one group of points from the coarse matching point pairs as a first group of points, and calculating the second group of points onto which the first group is mapped according to the mapping matrix;
S25, calculating the residuals between the first group of points and the second group of points, and counting the number of points satisfying a preset residual condition;
S26, determining whether the number of points satisfying the residual condition is greater than a preset number; if so, taking the mapping matrix as the mapping matrix between the two frames of images; otherwise, returning to step S23.
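The sketch below illustrates steps S22-S26, assuming descriptors have already been computed per S21. Two points where it deliberately departs from or concretizes the claim: the claim draws two groups of pairs per iteration (enough for a similarity transform), whereas this sketch samples four pairs to fit a full 3x3 homography via cv2.getPerspectiveTransform; and the residual in S25 is measured here against the matched counterpart points, the standard RANSAC reading of S24-S25. All thresholds and names are illustrative.

```python
import cv2
import numpy as np

def ratio_match(desc1: np.ndarray, desc2: np.ndarray, ratio: float = 0.75):
    """S22: descriptor Euclidean distances with the smallest/second-smallest
    ratio test; returns index pairs of coarse matches."""
    pairs = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, j2 = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[j2]:
            pairs.append((i, j))
    return pairs

def ransac_mapping(pts1: np.ndarray, pts2: np.ndarray, n_iters: int = 1000,
                   resid_thresh: float = 3.0, min_inliers: int = 20):
    """S23-S26: repeatedly fit a mapping matrix from a random minimal
    sample, map the first group of points through it, count the points
    whose residual meets the condition, and accept once enough agree."""
    rng = np.random.default_rng()
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), 4, replace=False)   # minimal sample
        H = cv2.getPerspectiveTransform(
            pts1[idx].astype(np.float32), pts2[idx].astype(np.float32))
        # Map the first group of points through the candidate matrix.
        ones = np.ones((len(pts1), 1))
        proj = np.hstack([pts1, ones]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        resid = np.linalg.norm(proj - pts2, axis=1)     # S25 residuals
        if (resid < resid_thresh).sum() > min_inliers:  # S26 acceptance
            return H
    return None  # no consensus found within the iteration budget
```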
8. An image stitching terminal for image sequences, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the image stitching method for image sequences according to any one of claims 1-7.
CN202310531116.7A 2019-06-26 2019-06-26 Image stitching method and terminal for image sequence Pending CN116416125A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310531116.7A CN116416125A (en) 2019-06-26 2019-06-26 Image stitching method and terminal for image sequence

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910561644.0A CN110276717B (en) 2019-06-26 2019-06-26 Image stitching method and terminal
CN202310531116.7A CN116416125A (en) 2019-06-26 2019-06-26 Image stitching method and terminal for image sequence

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910561644.0A Division CN110276717B (en) 2019-06-26 2019-06-26 Image stitching method and terminal

Publications (1)

Publication Number Publication Date
CN116416125A true CN116416125A (en) 2023-07-11

Family

ID=67963317

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202310531116.7A Pending CN116416125A (en) 2019-06-26 2019-06-26 Image stitching method and terminal for image sequence
CN202310531035.7A Pending CN116433475A (en) 2019-06-26 2019-06-26 Image stitching method and terminal based on feature point extraction
CN201910561644.0A Active CN110276717B (en) 2019-06-26 2019-06-26 Image stitching method and terminal

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202310531035.7A Pending CN116433475A (en) 2019-06-26 2019-06-26 Image stitching method and terminal based on feature point extraction
CN201910561644.0A Active CN110276717B (en) 2019-06-26 2019-06-26 Image stitching method and terminal

Country Status (1)

Country Link
CN (3) CN116416125A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112714282A (en) * 2020-12-22 2021-04-27 北京百度网讯科技有限公司 Image processing method, apparatus, device and program product in remote control
CN114040179B (en) * 2021-10-20 2023-06-06 重庆紫光华山智安科技有限公司 Image processing method and device
CN114373153B (en) * 2022-01-12 2022-12-27 北京拙河科技有限公司 Video imaging optimization system and method based on multi-scale array camera

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006425B (en) * 2010-12-13 2012-01-11 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras
CN104732485B (en) * 2015-04-21 2017-10-27 深圳市深图医学影像设备有限公司 The joining method and system of a kind of digital X-ray image
WO2018000359A1 (en) * 2016-06-30 2018-01-04 北京深迈瑞医疗电子技术研究院有限公司 Method and system for enhancing ultrasound contrast images and ultrasound contrast imaging device
CN105957018B (en) * 2016-07-15 2018-12-14 武汉大学 A kind of unmanned plane images filter frequency dividing joining method
US9940695B2 (en) * 2016-08-26 2018-04-10 Multimedia Image Solution Limited Method for ensuring perfect stitching of a subject's images in a real-site image stitching operation
TWI617195B (en) * 2017-06-22 2018-03-01 宏碁股份有限公司 Image capturing apparatus and image stitching method thereof
CN107958441B (en) * 2017-12-01 2021-02-12 深圳市科比特航空科技有限公司 Image splicing method and device, computer equipment and storage medium
CN109636714A (en) * 2018-08-30 2019-04-16 沈阳聚声医疗系统有限公司 A kind of image split-joint method of ultrasonic wide-scene imaging
CN109658363A (en) * 2018-10-22 2019-04-19 长江大学 The multilayer sub-block overlapping histogram equalizing method and system that sub-block adaptively merges

Also Published As

Publication number Publication date
CN110276717B (en) 2023-05-05
CN110276717A (en) 2019-09-24
CN116433475A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN110276717B (en) Image stitching method and terminal
CN108122191B (en) Method and device for splicing fisheye images into panoramic image and panoramic video
CN107665483B (en) Calibration-free convenient monocular head fisheye image distortion correction method
CN104408701B (en) A kind of large scene video image joining method
CN110111248B (en) Image splicing method based on feature points, virtual reality system and camera
CN110992263B (en) Image stitching method and system
CN109767388B (en) Method for improving image splicing quality based on super pixels, mobile terminal and camera
CN109389555B (en) Panoramic image splicing method and device
CN112085659B (en) Panorama splicing and fusing method and system based on dome camera and storage medium
CN111553939B (en) Image registration algorithm of multi-view camera
CN106447602A (en) Image mosaic method and device
CN112215880B (en) Image depth estimation method and device, electronic equipment and storage medium
CN112837419B (en) Point cloud model construction method, device, equipment and storage medium
CN107492080B (en) Calibration-free convenient monocular head image radial distortion correction method
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN111815517A (en) Self-adaptive panoramic stitching method based on snapshot pictures of dome camera
CN110223235A (en) A kind of flake monitoring image joining method based on various features point combinations matches
CN111640065B (en) Image stitching method and imaging device based on camera array
CN112767480A (en) Monocular vision SLAM positioning method based on deep learning
CN108093188B (en) A method of the big visual field video panorama splicing based on hybrid projection transformation model
CN112465702B (en) Synchronous self-adaptive splicing display processing method for multi-channel ultrahigh-definition video
WO2022126921A1 (en) Panoramic picture detection method and device, terminal, and storage medium
CN110084754A (en) A kind of image superimposing method based on improvement SIFT feature point matching algorithm
CN105488764B (en) Fisheye image correcting method and device
CN112419172B (en) Remote sensing image processing method for correcting and deblurring inclined image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination