CN114626991A - Image stitching method, device, equipment, medium and computer program product - Google Patents


Info

Publication number
CN114626991A
Authority
CN
China
Prior art keywords
image
spliced
pixel
pixel point
gyroscope information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210516921.8A
Other languages
Chinese (zh)
Other versions
CN114626991B (en)
Inventor
陈一航
吴海浪
胡思行
蒋念娟
沈小勇
吕江波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Simou Intelligent Technology Co ltd
Shenzhen Smartmore Technology Co Ltd
Original Assignee
Suzhou Simou Intelligent Technology Co ltd
Shenzhen Smartmore Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Simou Intelligent Technology Co ltd, Shenzhen Smartmore Technology Co Ltd filed Critical Suzhou Simou Intelligent Technology Co ltd
Priority to CN202210516921.8A priority Critical patent/CN114626991B/en
Publication of CN114626991A publication Critical patent/CN114626991A/en
Application granted granted Critical
Publication of CN114626991B publication Critical patent/CN114626991B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to the field of image processing and provides an image stitching method, an image stitching apparatus, a computer device, a storage medium, and a computer program product that improve stitching efficiency while reducing the computation and memory consumed by stitching. The method comprises the following steps: acquiring a first image and a second image to be stitched; stitching the two images according to their gyroscope information to obtain a first stitched image; optimizing the gyroscope information of the second image according to a set of matched feature points; stitching the two images again according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second stitched image; and seam-blending the overlapping region of the second stitched image according to the shooting angles of the two images to obtain the stitched image of the first image and the second image.

Description

Image stitching method, device, equipment, medium and computer program product
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image stitching method, an image stitching apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of image processing technology, demands on image processing efficiency keep rising. Image stitching is an important image processing technique that combines multiple captured images into a single image with a wide viewing angle.
Conventional approaches generally stitch images with a multi-band blending algorithm, but stitching images this way is inefficient.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image stitching method, an image stitching apparatus, a computer device, a computer readable storage medium, and a computer program product.
In a first aspect, the present application provides an image stitching method. The method comprises the following steps:
acquiring a first image and a second image to be spliced;
performing first splicing on the first image and the second image according to gyroscope information corresponding to the first image and the second image to obtain a first spliced image;
optimizing gyroscope information of the second image according to the first matching feature point set; the first matching feature point set is the set of feature points matched between the first image and the second image in the overlapping region of the first stitched image;
performing second splicing on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second spliced image;
and performing seam splicing processing on the overlapping area of the second spliced image based on the shooting angles corresponding to the first image and the second image to obtain a spliced image of the first image and the second image.
In one embodiment, the stitching processing on the overlapping area of the second stitched image based on the shooting angle corresponding to the first image and the second image includes:
calculating a first shooting angle set formed by each pixel point in the overlapping area of the second spliced image and the shooting center and the shooting main axis corresponding to the first image, and calculating a second shooting angle set formed by each pixel point in the overlapping area of the second spliced image and the shooting center and the shooting main axis corresponding to the second image;
determining pixel distribution weights of pixel points in the overlapping area of the second spliced image according to the difference value of the first shooting angle and the second shooting angle corresponding to the pixel points in the overlapping area of the second spliced image;
and performing seam splicing treatment on the overlapping region of the second spliced image according to the pixel distribution weight.
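The first and second shooting angles in the steps above are the angles formed by each overlap pixel with a camera's shooting center and principal axis. A minimal sketch of that angle computation follows; representing the back-projected pixel ray and the principal axis as 3-vectors in a common frame is an assumption for illustration, not part of the original text:

```python
import math

def shooting_angle(ray, axis):
    # Angle in degrees, at the camera centre, between the back-projected
    # ray through a pixel and the camera's principal axis.  Both are
    # 3-vectors expressed in the same coordinate frame.
    dot = sum(a * b for a, b in zip(ray, axis))
    n_ray = math.sqrt(sum(a * a for a in ray))
    n_axis = math.sqrt(sum(a * a for a in axis))
    cos_t = max(-1.0, min(1.0, dot / (n_ray * n_axis)))  # clamp for safety
    return math.degrees(math.acos(cos_t))
```

Evaluating this once per overlap pixel against each camera's center and axis yields the first and second shooting angle sets of the embodiment.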
In one embodiment, determining the pixel distribution weight of each pixel point in the overlapping region of the second stitched image according to the difference between the first shooting angle and the second shooting angle corresponding to each pixel point in the overlapping region of the second stitched image includes:
for each pixel point in the overlapping area of the second stitched image, determining which difference threshold interval condition the difference between the pixel point's first shooting angle and second shooting angle satisfies;
if the difference satisfies the first difference threshold interval condition, determining the pixel distribution weight of the pixel point such that the pixel point takes the pixel information of its corresponding pixel point in the first image;
the first difference threshold interval condition is the interval smaller than the first difference threshold.
In one embodiment, the method further comprises:
if the difference satisfies the second difference threshold interval condition, determining the pixel distribution weight of the pixel point such that, according to the difference, the pixel point takes part of the pixel information of its corresponding pixel point in the first image and part of the pixel information of its corresponding pixel point in the second image;
the second difference threshold interval condition is the interval greater than or equal to the first difference threshold and less than or equal to the second difference threshold.
In one embodiment, the method further comprises:
if the difference satisfies the third difference threshold interval condition, determining the pixel distribution weight of the pixel point such that the pixel point takes the pixel information of its corresponding pixel point in the second image;
and the third difference threshold interval condition is the interval greater than the second difference threshold.
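The three threshold-interval conditions above can be sketched as a single weight function. This is a hedged illustration: the default thresholds and the linear blend inside the middle interval follow the worked example later in the description (thresholds of plus/minus 1 degree, alpha = -0.5*d + 0.5), and the function name is invented here:

```python
def pixel_weight(d, t1=-1.0, t2=1.0):
    # d: first shooting angle minus second shooting angle, in degrees.
    # t1, t2: the first and second difference thresholds.
    # Returns (weight for first-image pixel, weight for second-image pixel).
    if d < t1:            # first interval: keep the first image's pixel
        return 1.0, 0.0
    if d > t2:            # third interval: take the second image's pixel
        return 0.0, 1.0
    # second interval: linear alpha blend; for t1=-1, t2=1 this is
    # alpha = -0.5*d + 0.5, matching the description's worked example.
    alpha = (t2 - d) / (t2 - t1)
    return alpha, 1.0 - alpha
```

The blended pixel value is then `w1 * p1 + w2 * p2` for the corresponding first- and second-image pixel values.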
In one embodiment, the first stitching the first image and the second image according to the gyroscope information corresponding to the first image and the second image includes:
and mapping the second image to the camera coordinate system of the first image according to the camera internal reference information and the gyroscope information corresponding to the first image and the second image.
In one embodiment, mapping the second image to the camera coordinate system of the first image according to the camera internal reference information and the gyroscope information corresponding to the first image and the second image comprises:
calculating an affine transformation matrix of the second image relative to the first image according to the camera internal reference information and gyroscope information corresponding to the first image and the second image;
the second image is mapped to the camera coordinate system of the first image according to an affine transformation matrix.
In one embodiment, optimizing gyroscope information for the second image based on the first set of matched feature points comprises:
performing inverse affine transformation on the matching feature points corresponding to the second image in the first matching feature point set to obtain a second matching feature point set of the first image and the second image;
and optimizing the gyroscope information of the second image according to the second matching feature point set.
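The inverse affine transformation above amounts to applying the inverse of the first-stitch transform to each matched feature point found in the stitched overlap, recovering that point's coordinates in the second image's own frame. A sketch, with the helper name and homogeneous-coordinate representation as illustrative assumptions:

```python
import numpy as np

def unwarp_points(H, pts):
    # H: the 3x3 transform used in the first stitch to map the second
    # image into the first image's coordinate system.
    # pts: matched feature points of the second image, located in the
    # first stitched image's overlap region.
    # Returns the points mapped back to the second image's own pixels.
    Hinv = np.linalg.inv(H)
    out = []
    for x, y in pts:
        p = Hinv @ np.array([x, y, 1.0])
        out.append((p[0] / p[2], p[1] / p[2]))
    return out
```

The resulting second matching feature point set pairs original-image coordinates between the two images, which is what the gyroscope refinement consumes.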
In one embodiment, performing the second stitching on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image includes:
and mapping the first image and the second image to a cylindrical coordinate system according to the camera internal reference information and gyroscope information of the first image, and the camera internal reference information and optimized gyroscope information of the second image.
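One plausible reading of the mapping to a cylindrical coordinate system is the standard forward cylindrical warp used for rotational panoramas; the exact projection is not given in the text, so this is an assumption:

```python
import math

def to_cylinder(x, y, f, cx, cy):
    # Forward cylindrical projection of pixel (x, y).
    # f: focal length in pixels (from the camera intrinsics);
    # (cx, cy): principal point.
    xn, yn = x - cx, y - cy
    theta = math.atan2(xn, f)        # azimuth around the cylinder axis
    h = yn / math.hypot(xn, f)       # height, normalised by ray length
    return f * theta, f * h          # scaled cylinder coordinates
```

Both images are warped this way (after rotating their rays by the respective gyroscope rotations), so that they land on a common cylinder.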
In one embodiment, acquiring a first image and a second image to be stitched comprises:
and keeping the exposure parameters and the white balance parameters of the shooting device unchanged, and obtaining the first image and the second image by shooting with the shooting device.
In a second aspect, the application further provides an image stitching device. The device comprises:
the image acquisition module is used for acquiring a first image and a second image to be spliced;
the first spliced image obtaining module is used for carrying out first splicing on the first image and the second image according to gyroscope information corresponding to the first image and the second image to obtain a first spliced image;
the gyroscope information optimization module is used for optimizing gyroscope information of the second image according to the first matching feature point set; the first matched feature point set is a feature point set matched by the first image and the second image in an overlapping region of the first spliced image;
the second spliced image obtaining module is used for carrying out second splicing on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second spliced image;
and the spliced image obtaining module is used for carrying out splicing processing on the overlapping area of the second spliced image based on the shooting angles corresponding to the first image and the second image to obtain a spliced image of the first image and the second image.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring a first image and a second image to be spliced; performing first splicing on the first image and the second image according to gyroscope information corresponding to the first image and the second image to obtain a first spliced image; optimizing gyroscope information of the second image according to the first matching feature point set; the first matching feature point set is a feature point set matched with the first image and the second image in an overlapping region of the first spliced image; performing second splicing on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second spliced image; and performing seam splicing processing on the overlapping area of the second spliced image based on the shooting angles corresponding to the first image and the second image to obtain a spliced image of the first image and the second image.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a first image and a second image to be spliced; performing first splicing on the first image and the second image according to gyroscope information corresponding to the first image and the second image to obtain a first spliced image; optimizing gyroscope information of the second image according to the first matching feature point set; the first matching feature point set is a feature point set matched with the first image and the second image in an overlapping region of the first spliced image; performing second splicing on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second spliced image; and performing seam splicing processing on the overlapping area of the second spliced image based on the shooting angles corresponding to the first image and the second image to obtain a spliced image of the first image and the second image.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring a first image and a second image to be spliced; performing first splicing on the first image and the second image according to gyroscope information corresponding to the first image and the second image to obtain a first spliced image; optimizing gyroscope information of the second image according to the first matching feature point set; the first matching feature point set is a feature point set matched with the first image and the second image in an overlapping region of the first spliced image; performing second splicing on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second spliced image; and performing seam splicing processing on the overlapping area of the second spliced image based on the shooting angles corresponding to the first image and the second image to obtain a spliced image of the first image and the second image.
With the image stitching method, the image stitching apparatus, the computer device, the storage medium, and the computer program product above, a first image and a second image to be stitched are acquired; the two images are stitched a first time according to their gyroscope information to obtain a first stitched image; the gyroscope information of the second image is optimized according to a first matching feature point set, namely the set of feature points matched between the two images in the overlapping region of the first stitched image; the two images are stitched a second time according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second stitched image; and the overlapping region of the second stitched image is seam-blended based on the shooting angles corresponding to the two images to obtain the stitched image of the first image and the second image.
Because feature points are extracted and matched only within the overlapping region produced by the gyroscope-based coarse stitch, the time spent on feature extraction drops markedly. Refining the second image's gyroscope information with the matched feature points, re-stitching with the refined information, and seam-blending the overlap by shooting angle then yields the final stitched image; these steps can be repeated to stitch multiple images one by one. The overall effect is higher stitching efficiency together with lower computation and memory consumption.
Drawings
FIG. 1 is a schematic flow chart diagram of an image stitching method in one embodiment;
FIG. 2 is a schematic diagram of a first image in one embodiment;
FIG. 3 is a diagram of a second image in one embodiment;
FIG. 4 is a schematic illustration of an overlap region of a first stitched image in one embodiment;
FIG. 5 is a schematic diagram illustrating a capturing angle corresponding to each pixel point in an overlapping area of the second stitched image in one embodiment;
FIG. 6 is a block diagram showing the structure of an image stitching apparatus according to an embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In an embodiment, as shown in fig. 1, an image stitching method is provided, and this embodiment is exemplified by applying the method to a terminal or a server, and includes the following steps:
step S101, a first image and a second image to be spliced are obtained.
In this step, the first image and the second image may be images captured by a device such as a camera, and may be two adjacent shots in a sequence. They may be captured horizontally, vertically, or obliquely; no limitation is placed here. An example first image is shown in fig. 2 and an example second image in fig. 3. For instance, when the camera shoots horizontally from left to right (or right to left, or vertically from top to bottom; again not limited here), the first image may be the first shot and the second image the second shot; equally, the first image may be the third shot and the second image the fourth. Two adjacent shots share an overlapping image area, shown in fig. 4, i.e. the same subject appears in both images.
And S102, performing first splicing on the first image and the second image according to gyroscope information corresponding to the first image and the second image to obtain a first spliced image.
In this step, the gyroscope information corresponding to the first image and the second image means the gyroscope reading recorded by the shooting device (e.g. a camera) when each image was captured, and may include other sensor information. The first stitch places the two images in a common coordinate system (for example, by mapping the second image into the camera coordinate system of the first image); because an overlapping image area exists between the two images, this yields a first stitched image, i.e. a coarse stitch of the first image and the second image.
Specifically, according to gyroscope information corresponding to the first image and gyroscope information corresponding to the second image, the first image and the second image are subjected to first splicing to obtain a first spliced image.
And S103, optimizing gyroscope information of the second image according to the first matching feature point set.
The first matching feature point set is a feature point set matched with the first image and the second image in the overlapping region of the first spliced image.
Specifically, after the first stitch produces the first stitched image, the overlapping region of the first image and the second image can be located on it. A lightweight feature extraction algorithm such as ORB can then extract a first feature point set from the first image's side of the overlap and a second feature point set from the second image's side. Matching the two sets gives the first matching feature point set, from which the gyroscope information of the second image is re-estimated and optimized, yielding more accurate gyroscope information for the second image.
And step S104, performing second splicing on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second spliced image.
The second stitching may refer to placing the first image and the second image in the same coordinate system (for example, mapping the first image and the second image to a cylindrical coordinate system), and achieving stitching of the first image and the second image due to the overlapped image area between the first image and the second image, so as to obtain a second stitched image.
And S105, performing splicing processing on the overlapping area of the second spliced image based on the shooting angles corresponding to the first image and the second image to obtain a spliced image of the first image and the second image.
In this step, the shooting angle refers to the angle between an image point and the corresponding shooting device (e.g. a camera), measured against its principal axis. Seam blending operates per pixel of the overlapping region: for each pixel, the difference between its two shooting angles (one for each source image) is computed; where the difference is large (the two angles deviate clearly), the pixel is taken outright from the first image or from the second image, and where the difference is small (no clear deviation), the pixel values from the two images are fused according to assigned weights.
Specifically, after the first image and the second image are mapped to a cylindrical coordinate system to obtain the second stitched image, the result contains an overlapping image area and non-overlapping image areas. The non-overlapping areas need no per-pixel decision; for the overlapping area, seam blending based on the shooting angles of the two images determines how each pixel is filled, producing the stitched image of the first image and the second image.
Illustratively, the two images can be mapped to the cylindrical coordinate system (cylindrical surface) one after the other: first the first image, then the second. During mapping, if a cylinder pixel has not yet been filled by the earlier image, it is filled directly; if it was already filled (for example by a pixel of the first image), the shooting angles corresponding to the two images decide whether and how to replace it, optionally using alpha blending (transparency blending) weighted by the shooting angles.
Optionally, the above steps may be repeated to stitch multiple images one by one. Specifically, after the first image and the second image are stitched, a third image can be acquired; the stitched result of the first two images (or the second image alone) then serves as a new first image and the third image as a new second image, and the same procedure stitches them. It can be understood that repeating these steps stitches any number of images into one stitched image.
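The one-by-one stitching of multiple images described above amounts to a left fold over the image sequence. In the sketch below, `stitch_pair` is a hypothetical placeholder for the whole two-image pipeline (coarse gyroscope stitch, gyroscope refinement, cylindrical re-stitch, angle-based seam blending), not a function defined in the original text:

```python
def stitch_all(images, stitch_pair):
    # Fold a sequence of images into one panorama by repeatedly stitching
    # the running result with the next image, as the description suggests.
    result = images[0]
    for nxt in images[1:]:
        result = stitch_pair(result, nxt)
    return result
```

Because each step only extracts features in the current pair's overlap, the cost grows linearly with the number of images rather than with all-pairs matching.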
In the image stitching method above, a first image and a second image to be stitched are acquired; they are stitched a first time according to their gyroscope information to obtain a first stitched image; the gyroscope information of the second image is optimized according to the first matching feature point set; the two images are stitched a second time according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second stitched image; and the overlapping area of the second stitched image is seam-blended based on the shooting angles corresponding to the two images, giving the stitched image of the first image and the second image. Because feature points are extracted and matched only within the overlap produced by the gyroscope-based coarse stitch, the time required for feature extraction drops markedly; repeating the steps stitches multiple images one by one, improving stitching efficiency while reducing the computation and memory resources consumed.
In an embodiment, the performing, in step S105, the stitching processing on the overlapping area of the second stitched image based on the corresponding shooting angles of the first image and the second image specifically includes: calculating a first shooting angle set formed by each pixel point in the overlapping area of the second spliced image and the shooting center and the shooting main axis corresponding to the first image, and calculating a second shooting angle set formed by each pixel point in the overlapping area of the second spliced image and the shooting center and the shooting main axis corresponding to the second image; determining pixel distribution weights of pixel points in the overlapping area of the second spliced image according to the difference value of the first shooting angle and the second shooting angle corresponding to the pixel points in the overlapping area of the second spliced image; and performing seam splicing treatment on the overlapping area of the second spliced image according to the pixel distribution weight.
In this embodiment, the first capturing angle is an included angle formed by each pixel point in the overlapping region of the second stitched image and the capturing center and the capturing main axis corresponding to the first image, the second capturing angle is an included angle formed by each pixel point in the overlapping region of the second stitched image and the capturing center and the capturing main axis corresponding to the second image, as shown in fig. 5, fig. 5 includes a capturing center (camera center), an image plane (e.g., first image and second image) and a capturing main axis (primary axis), the first capturing angle and the second capturing angle may be included angles from each pixel point in the overlapping region of the second stitched image to the two camera centers and the main axis, for example, the first capturing angle is @ xCP in fig. 51The second shooting angle is ≈ xCP in fig. 52The difference between the first shooting angle and the second shooting angle may be an included angle difference d = ≈ xCP1-∠xCP2(ii) a The pixel distribution weight of each pixel point may be the pixel distribution weight of the first image pixel point and the pixel distribution weight of the second image pixel point corresponding to each pixel point.
Specifically, for each pixel point in the overlapping region of the second stitched image, a first included angle ∠xCP1 formed by the pixel point with the capturing center and the capturing main axis corresponding to the first image is calculated, a second included angle ∠xCP2 formed by the pixel point with the capturing center and the capturing main axis corresponding to the second image is calculated, and the included angle difference d = ∠xCP1 − ∠xCP2 is calculated. The pixel distribution weight of each pixel point in the overlapping region of the second stitched image is then determined from the included angle difference d by setting a threshold, for example 1°. If d < −1°, the pixel point is not replaced (i.e., the pixel point is determined as the pixel point corresponding to the first image, and the pixel point corresponding to the second image is not used; that is, the pixel distribution weight takes 100% of the pixel value of the first-image pixel point corresponding to the pixel point and 0% of the pixel value of the second-image pixel point corresponding to the pixel point). If d > 1°, the point is directly replaced (i.e., the pixel point is determined as the pixel point corresponding to the second image, and the pixel point corresponding to the first image is not used; that is, the pixel distribution weight takes 0% of the pixel value of the first-image pixel point corresponding to the pixel point and 100% of the pixel value of the second-image pixel point corresponding to the pixel point). If −1° <= d <= 1°, a partial pixel value of the first-image pixel point corresponding to the pixel point and a partial pixel value of the second-image pixel point corresponding to the pixel point are taken according to the included angle difference d; that is, the pixel point is replaced by the result of alpha blending, α*P1 + (1−α)*P2, where α = −0.5*d + 0.5, P1 represents the pixel value of the pixel point corresponding to the first image, and P2 represents the pixel value of the pixel point corresponding to the second image.
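The per-pixel weighting rule above (a 1° threshold band and the linear alpha blend inside it) can be sketched as follows for scalar pixel values; the function names are illustrative, not from the patent:

```python
def blend_weight(d_deg: float) -> float:
    """Weight alpha for the first image's pixel, given the angle
    difference d (degrees) between the first and second capturing
    angles. The +/-1 degree thresholds follow the example in the text."""
    if d_deg < -1.0:
        return 1.0               # keep the first image's pixel entirely
    if d_deg > 1.0:
        return 0.0               # replace with the second image's pixel
    return -0.5 * d_deg + 0.5    # linear alpha inside the seam band

def blend_pixel(p1: float, p2: float, d_deg: float) -> float:
    """alpha*P1 + (1-alpha)*P2, the embodiment's blending formula."""
    a = blend_weight(d_deg)
    return a * p1 + (1.0 - a) * p2
```

For color images the same weight would simply be applied per channel.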
According to the technical scheme of the embodiment, the selection of each pixel point in the overlapping area is determined or the pixel points with the fused pixel values are used for replacing the pixel points, so that the efficiency and the accuracy of image splicing are improved, and the calculated amount of the image splicing and the memory resource consumption are reduced.
In an embodiment, the method may further determine the pixel distribution weight of each pixel point in the overlapping region of the second stitched image by the following steps, specifically including: determining, for each pixel point in the overlapping area of the second stitched image, the difference threshold interval condition satisfied by the difference between the first shooting angle and the second shooting angle corresponding to the pixel point; and if the difference threshold interval condition satisfied by the difference is the first difference threshold interval condition, determining the pixel distribution weight of the pixel point as the pixel information of the pixel point of the first image corresponding to the pixel point.
In this embodiment, the difference threshold interval condition may include three difference threshold interval conditions, where the first difference threshold interval condition is an interval smaller than the first difference threshold; the first difference threshold may be-1 °; the pixel information may refer to pixel values.
Specifically, for each pixel point in the overlapping region of the second stitched image, which difference threshold interval condition the included angle difference d of each pixel point satisfies is determined, and if the difference threshold interval condition that the included angle difference d satisfies is the first difference threshold interval condition (e.g., d < -1 °), the pixel point is not replaced (i.e., the pixel point is determined as the pixel point corresponding to the first image, and the pixel point corresponding to the second image is not used, i.e., the pixel allocation weight is 100% of the pixel value of the pixel point of the first image corresponding to the pixel point, and 0% of the pixel value of the pixel point of the second image corresponding to the pixel point is taken).
According to the technical scheme, the difference threshold interval condition met by each pixel point in the overlapping region of the second spliced image is determined, so that the efficiency of determining the pixel distribution weight of each pixel point in the overlapping region of the second spliced image is improved, and the subsequent image splicing efficiency is improved.
In an embodiment, the method may further determine the pixel distribution weight of each pixel point in the overlapping region of the second stitched image by the following steps, specifically including: and if the difference threshold interval condition met by the difference is a second difference threshold interval condition, determining that the pixel distribution weight of the pixel point is the partial pixel information of the pixel point of the first image corresponding to the pixel point and the partial pixel information of the pixel point of the second image corresponding to the pixel point according to the difference.
In this embodiment, the second difference threshold interval condition is an interval greater than or equal to the first difference threshold and less than or equal to the second difference threshold, where the second difference threshold may be 1 °.
Specifically, if the difference threshold interval condition satisfied by the included angle difference d is the second difference threshold interval condition (e.g., −1° <= d <= 1°), the pixel distribution weight of the pixel point is determined as taking a partial pixel value of the pixel point of the first image corresponding to the pixel point and a partial pixel value of the pixel point of the second image corresponding to the pixel point according to the included angle difference d; that is, the pixel point is replaced by the result of alpha blending, α*P1 + (1−α)*P2, where α = −0.5*d + 0.5, P1 represents the pixel value of the pixel point corresponding to the first image, and P2 represents the pixel value of the pixel point corresponding to the second image.
According to the technical scheme, the accuracy of determining the pixel distribution weight of each pixel point in the overlapping region of the second spliced image is improved by determining the difference threshold interval condition met by each pixel point in the overlapping region of the second spliced image, so that the accuracy of image splicing is improved subsequently.
In an embodiment, the method may further determine the pixel distribution weight of each pixel point in the overlapping region of the second stitched image by the following steps, specifically including: and if the difference value threshold interval condition met by the difference value is a third difference value threshold interval condition, determining that the pixel distribution weight of the pixel point is the pixel information of the pixel point of the second image corresponding to the pixel point.
In this embodiment, the third difference threshold interval condition is an interval greater than the second difference threshold.
Specifically, if the difference threshold interval condition satisfied by the included angle difference d is the third difference threshold interval condition (e.g., d > 1°), the point is directly replaced (i.e., the pixel point is determined as the pixel point corresponding to the second image, and the pixel point corresponding to the first image is not used; that is, the pixel distribution weight takes 0% of the pixel value of the first-image pixel point corresponding to the pixel point and 100% of the pixel value of the second-image pixel point corresponding to the pixel point).
According to the technical scheme, the difference threshold interval condition met by each pixel point in the overlapping region of the second spliced image is determined, so that the efficiency of determining the pixel distribution weight of each pixel point in the overlapping region of the second spliced image is improved, and the subsequent image splicing efficiency is improved.
In an embodiment, the performing the first stitching on the first image and the second image according to the gyroscope information corresponding to the first image and the second image in step S102 specifically includes: and mapping the second image to the camera coordinate system of the first image according to the camera internal reference information and the gyroscope information corresponding to the first image and the second image.
In this embodiment, the camera internal reference information corresponding to the first image and the second image refers to the camera internal reference information corresponding to the first image and the camera internal reference information corresponding to the second image, for example, the camera internal reference information corresponding to the first image captured by a capturing device such as a camera and the camera internal reference information corresponding to the second image captured by the capturing device.
Specifically, the second image is mapped to the camera coordinate system of the first image according to the camera internal reference information and the gyroscope information corresponding to the first image and the second image, and the second image in the camera coordinate system of the first image is obtained, so that the overlapping region in the first image and the overlapping region in the second image can be obtained, where the overlapping region in the first image is the effective region shown in fig. 4.
According to the technical scheme of the embodiment, the second image is mapped to the camera coordinate system of the first image according to the camera internal reference information and the gyroscope information corresponding to the first image and the second image, so that the first image and the second image are spliced, a more accurate overlapping area of the first image and the second image is obtained, and the overlapping area is subjected to feature point extraction and feature point matching subsequently, wherein the overlapping area generally accounts for 30% -40% of the whole image.
In an embodiment, the method may further map the second image to the camera coordinate system of the first image by: calculating an affine transformation matrix of the second image relative to the first image according to the camera internal reference information and gyroscope information corresponding to the first image and the second image; the second image is mapped to the camera coordinate system of the first image according to an affine transformation matrix.
Specifically, an initial affine transformation matrix H of the second image with respect to the first image is calculated, using the first image as the reference, according to the camera internal reference information and the gyroscope information corresponding to the first image and the second image, and the second image and the corresponding overlapping region in the camera coordinate system of the first image are obtained by mapping the second image to the camera coordinate system of the first image through the initial affine transformation matrix H.
According to the technical scheme of the embodiment, the second image is mapped to the camera coordinate system of the first image through the affine transformation matrix, so that the efficiency and accuracy of mapping the second image to the camera coordinate system of the first image are improved, and the subsequent improvement of the efficiency and accuracy of image splicing is facilitated.
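Under a pure-rotation model (a common assumption when only gyroscope data relates two shots), the transformation matrix H described above can be sketched as below. The decomposition H = K1·R1·R2ᵀ·K2⁻¹ is an illustrative, standard formulation, not necessarily the patent's exact construction; note that such a matrix is in general a 3×3 planar homography, of which an affine matrix is a special case. All names are illustrative:

```python
import numpy as np

def gyro_homography(K1, K2, R1, R2):
    """Map the second image into the first image's camera coordinate
    system under a pure-rotation model:

        H = K1 @ (R1 @ R2.T) @ inv(K2)

    K1, K2: 3x3 camera intrinsic matrices (camera internal reference
    information); R1, R2: 3x3 rotation matrices integrated from the
    gyroscope information of each image."""
    R_rel = R1 @ R2.T                      # rotation of camera 2 relative to camera 1
    return K1 @ R_rel @ np.linalg.inv(K2)  # 3x3 planar transformation matrix H
```

With identical intrinsics and identical orientations, H reduces to the identity, i.e., the two images already share one coordinate system.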
In an embodiment, the optimizing, according to the first matching feature point set, the gyroscope information of the second image in step S103 specifically includes: performing inverse affine transformation on matching feature points corresponding to the second image in the first matching feature point set to obtain a second matching feature point set of the first image and the second image; and optimizing the gyroscope information of the second image according to the second matching feature point set.
Specifically, after the second image is mapped to the camera coordinate system of the first image by the initial affine transformation matrix H, feature point extraction is performed on the first image and the second image in the overlapping region of the first stitched image by an ORB feature extraction method, so as to obtain a first feature point set corresponding to the first image and a second feature point set corresponding to the second image. The first feature point set and the second feature point set are matched to obtain the first matched feature point set. Inverse affine transformation (i.e., transformation by the inverse matrix of the initial affine transformation matrix H) is then performed on the matched feature points in the first matched feature point set that correspond to the second image in the camera coordinate system of the first image, so as to obtain the second matched feature point set of the first image and the second image, and the gyroscope information of the second image is optimized according to the second matched feature point set.
According to the technical scheme, inverse affine transformation is carried out on the matching feature points corresponding to the second image in the first matching feature point set, so that the obtained more accurate second matching feature point set is used for optimizing the gyroscope information of the second image, the accuracy of the gyroscope information of the optimized second image is improved, and the subsequent improvement of the accuracy of image splicing is facilitated.
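The inverse transformation of the matched feature points can be sketched as follows, using homogeneous coordinates so the same code covers both affine matrices and general homographies; the function name is illustrative:

```python
import numpy as np

def unwarp_points(points, H):
    """Undo the initial transformation H for matched feature points.

    'points' is an (N, 2) array of the second image's matched feature
    coordinates expressed in the first image's camera coordinate
    system; applying inv(H) recovers their coordinates in the original
    second image, yielding the second matched feature point set."""
    H_inv = np.linalg.inv(H)
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts_h @ H_inv.T
    return mapped[:, :2] / mapped[:, 2:3]                   # divide out the scale
```

The recovered point pairs can then be fed to a rotation-refinement step that adjusts the second image's gyroscope information.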
In an embodiment, the second stitching, performed according to the gyroscope information of the first image and the optimized gyroscope information of the second image in step S104, of the first image and the second image specifically includes: and mapping the first image and the second image to a cylindrical coordinate system according to the camera internal reference information and gyroscope information of the first image, and the camera internal reference information and optimized gyroscope information of the second image.
Specifically, the first image is mapped to the cylindrical coordinate system according to the camera internal reference information and the gyroscope information of the first image, the second image is mapped to the cylindrical coordinate system according to the camera internal reference information and the optimized gyroscope information of the second image, and pixel points can be mapped to the cylindrical coordinate system one by one according to the sequence, so that the second splicing of the first image and the second image is realized.
According to the technical scheme, the first image and the second image are mapped to the cylindrical coordinate system, so that the second spliced image in the cylindrical coordinate system is obtained, and the accuracy of image splicing is improved.
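The per-pixel cylindrical mapping can be sketched as below. This is the standard cylindrical warp used for rotation-only panoramas, given here as an assumed formulation since the patent does not state the formula; coordinates are taken relative to the principal point:

```python
import math

def cylindrical_project(x, y, f):
    """Project an image-plane point (x, y), expressed relative to the
    principal point, onto a cylinder of radius f (the focal length in
    pixels). Applied pixel by pixel, this maps an image into the
    cylindrical coordinate system used for the second stitching."""
    theta = math.atan2(x, f)        # azimuth around the cylinder axis
    h = y / math.hypot(x, f)        # normalized height on the cylinder
    return f * theta, f * h         # cylindrical coordinates, scaled by f
```

Pixels can be mapped to the cylindrical coordinate system one by one in sequence, as described above, by evaluating this function for each (x, y), using each image's own rotation to orient it on the cylinder.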
In an embodiment, the acquiring the first image and the second image to be stitched in step S101 specifically includes: and keeping the exposure parameters and the white balance parameters of the shooting device, and shooting by the shooting device to obtain a first image and a second image.
Specifically, the exposure parameters and white balance parameters of the photographing apparatus are held (fixed), and the first image and the second image are photographed by the photographing apparatus.
According to the technical scheme of the embodiment, the exposure parameters and the white balance parameters of the shooting equipment are kept fixed through control, so that the acquired images (the first image and the second image) are free from large color difference, the color brightness of the images is basically consistent, the problem of image color and exposure difference is solved, color and exposure correction is not needed, and the efficiency and the accuracy of image splicing are improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are displayed sequentially as indicated by the arrows, the steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least a part of the steps in the flowcharts related to the embodiments described above may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and the execution order of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides an image splicing device for realizing the image splicing method. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the image stitching apparatus provided below can be referred to as limitations on the image stitching method in the foregoing, and details are not described here.
In one embodiment, as shown in fig. 6, an image stitching apparatus is provided, and the apparatus 600 may include:
the image obtaining module 601 is configured to obtain a first image and a second image to be stitched;
a first stitched image obtaining module 602, configured to perform first stitching on the first image and the second image according to gyroscope information corresponding to the first image and the second image, so as to obtain a first stitched image;
a gyroscope information optimizing module 603, configured to optimize gyroscope information of the second image according to the first matching feature point set; the first matched feature point set is a feature point set matched by the first image and the second image in an overlapping region of the first spliced image;
a second stitched image obtaining module 604, configured to perform second stitching on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image, to obtain a second stitched image;
a stitched image obtaining module 605, configured to perform stitching processing on an overlapping area of the second stitched image based on the shooting angles corresponding to the first image and the second image, so as to obtain a stitched image of the first image and the second image.
In an embodiment, the stitched image obtaining module 605 is further configured to calculate a first capturing angle set formed by each pixel point in the overlapping region of the second stitched image and the capturing center and the capturing main axis corresponding to the first image, and calculate a second capturing angle set formed by each pixel point in the overlapping region of the second stitched image and the capturing center and the capturing main axis corresponding to the second image; determining pixel distribution weights of pixel points in the overlapping area of the second spliced image according to the difference value of the first shooting angle and the second shooting angle corresponding to the pixel points in the overlapping area of the second spliced image; and performing splicing treatment on the overlapping area of the second spliced image according to the pixel distribution weight.
In an embodiment, the stitched image obtaining module 605 is further configured to determine, for each pixel point in the overlapping area of the second stitched image, a difference threshold interval condition that a difference between the first shooting angle and the second shooting angle that corresponds to the pixel point satisfies; if the difference value threshold interval condition met by the difference value is a first difference value threshold interval condition, determining that the pixel distribution weight of the pixel point is the pixel information of the pixel point of the first image corresponding to the pixel point; the first difference threshold interval condition is an interval smaller than a first difference threshold.
In an embodiment, the stitched image obtaining module 605 is further configured to determine that the pixel allocation weight of the pixel point is a part of the pixel information of the pixel point of the first image corresponding to the pixel point and a part of the pixel information of the pixel point of the second image corresponding to the pixel point according to the difference value if the difference threshold interval condition that the difference value satisfies is a second difference threshold interval condition; the second difference threshold interval condition is an interval which is greater than or equal to the first difference threshold and less than or equal to a second difference threshold.
In an embodiment, the stitched image obtaining module 605 is further configured to determine, if the difference threshold interval condition that the difference satisfies is a third difference threshold interval condition, that the pixel distribution weight of the pixel point is the pixel information of the pixel point of the second image corresponding to the pixel point; wherein the third difference threshold interval condition is an interval greater than the second difference threshold.
In an embodiment, the first stitched image obtaining module 602 is further configured to map the second image to the camera coordinate system of the first image according to camera internal reference information and gyroscope information corresponding to the first image and the second image.
In an embodiment, the first stitched image obtaining module 602 is further configured to calculate an affine transformation matrix of the second image with respect to the first image according to camera internal reference information and gyroscope information corresponding to the first image and the second image; mapping the second image to a camera coordinate system of the first image according to the affine transformation matrix.
In an embodiment, the gyroscope information optimization module 603 is further configured to perform inverse affine transformation on matching feature points corresponding to a second image in the first matching feature point set, so as to obtain a second matching feature point set of the first image and the second image; and optimizing gyroscope information of the second image according to the second matching feature point set.
In an embodiment, the second stitched image obtaining module 604 is further configured to map the first image and the second image to a cylindrical coordinate system according to the camera internal reference information and the gyroscope information of the first image, and the camera internal reference information and the optimized gyroscope information of the second image.
In an embodiment, the image obtaining module 601 is further configured to maintain exposure parameters and white balance parameters of a shooting device, and obtain the first image and the second image through shooting by the shooting device.
The respective modules in the image splicing apparatus described above may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 7. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer equipment also comprises an input/output interface, wherein the input/output interface is a connecting circuit for exchanging information between the processor and external equipment, and is connected with the processor through a bus, and the input/output interface is called an I/O interface for short. The computer program is executed by a processor to implement an image stitching method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, carries out the steps in the method embodiments described above.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include a Read-Only Memory (ROM), a magnetic tape, a floppy disk, a flash Memory, an optical Memory, a high-density embedded nonvolatile Memory, a resistive Random Access Memory (ReRAM), a Magnetic Random Access Memory (MRAM), a Ferroelectric Random Access Memory (FRAM), a Phase Change Memory (PCM), a graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), for example. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (14)

1. An image stitching method, characterized in that the method comprises:
acquiring a first image and a second image to be spliced;
performing first splicing on the first image and the second image according to gyroscope information corresponding to the first image and the second image to obtain a first spliced image;
optimizing gyroscope information of the second image according to the first matching feature point set; the first matched feature point set is a feature point set matched by the first image and the second image in an overlapping region of the first spliced image;
performing second splicing on the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image to obtain a second spliced image;
and performing seam splicing processing on the overlapping area of the second spliced image based on the shooting angles corresponding to the first image and the second image to obtain a spliced image of the first image and the second image.
2. The method according to claim 1, wherein the seam processing of the overlapping region of the second stitched image based on the shooting angles corresponding to the first image and the second image comprises:
calculating a first set of shooting angles formed by each pixel in the overlapping region of the second stitched image with the shooting center and shooting principal axis corresponding to the first image, and calculating a second set of shooting angles formed by each pixel in the overlapping region of the second stitched image with the shooting center and shooting principal axis corresponding to the second image;
determining a pixel distribution weight for each pixel in the overlapping region of the second stitched image according to the difference between the first shooting angle and the second shooting angle corresponding to that pixel; and
performing seam processing on the overlapping region of the second stitched image according to the pixel distribution weights.
3. The method according to claim 2, wherein determining a pixel distribution weight for each pixel in the overlapping region of the second stitched image according to the difference between the first shooting angle and the second shooting angle corresponding to that pixel comprises:
for each pixel in the overlapping region of the second stitched image, determining which difference-threshold interval condition is satisfied by the difference between the first shooting angle and the second shooting angle corresponding to the pixel; and
if the difference satisfies a first difference-threshold interval condition, setting the pixel distribution weight of the pixel such that the pixel takes the pixel information of the corresponding pixel of the first image;
wherein the first difference-threshold interval condition is the interval of differences smaller than a first difference threshold.
4. The method of claim 3, further comprising:
if the difference satisfies a second difference-threshold interval condition, setting the pixel distribution weight of the pixel such that the pixel combines, in proportions determined by the difference, partial pixel information of the corresponding pixel of the first image and partial pixel information of the corresponding pixel of the second image;
wherein the second difference-threshold interval condition is the interval of differences greater than or equal to the first difference threshold and less than or equal to a second difference threshold.
5. The method of claim 4, further comprising:
if the difference satisfies a third difference-threshold interval condition, setting the pixel distribution weight of the pixel such that the pixel takes the pixel information of the corresponding pixel of the second image;
wherein the third difference-threshold interval condition is the interval of differences greater than the second difference threshold.
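Claims 3 to 5 partition the shooting-angle difference into three intervals: below a first threshold the output pixel comes entirely from the first image, above a second threshold entirely from the second, and in between the two are mixed according to the difference. A small sketch of one plausible reading, assuming a linear ramp inside the middle interval (the claims do not fix the exact mixing function) and illustrative threshold values:

```python
import numpy as np

def first_image_weight(diff, t1, t2):
    """Weight of the first image's pixel as a function of the shooting-angle
    difference: 1 below t1 (claim 3), 0 above t2 (claim 5), and a linear ramp
    between them (one reading of claim 4's 'according to the difference')."""
    return np.clip((t2 - np.asarray(diff, dtype=float)) / (t2 - t1), 0.0, 1.0)

def blend_pixel(p1, p2, diff, t1, t2):
    """Distribute the output pixel between the two source pixels."""
    w = first_image_weight(diff, t1, t2)
    return w * p1 + (1 - w) * p2

# Illustrative thresholds (radians); at the midpoint the two images mix 50/50.
mixed = blend_pixel(10.0, 20.0, diff=0.2, t1=0.1, t2=0.3)
```

Because the weight varies smoothly across the overlap, the seam fades from one source image to the other instead of showing a hard boundary.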
6. The method of claim 1, wherein the first stitching of the first image and the second image according to the gyroscope information corresponding to the first image and the second image comprises:
mapping the second image into the camera coordinate system of the first image according to camera intrinsic parameter information and gyroscope information corresponding to the first image and the second image.
7. The method of claim 6, wherein mapping the second image into the camera coordinate system of the first image according to the camera intrinsic parameter information and gyroscope information corresponding to the first image and the second image comprises:
calculating an affine transformation matrix of the second image relative to the first image according to the camera intrinsic parameter information and the gyroscope information corresponding to the first image and the second image; and
mapping the second image into the camera coordinate system of the first image according to the affine transformation matrix.
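Under the standard pure-rotation camera model, the warp of claim 7 can be composed from the intrinsic matrices and the gyroscope-derived rotations as the 3x3 matrix K1 · R1ᵀ · R2 · inv(K2); reading the claims' "affine transformation matrix" as this homogeneous warp is our assumption, not something the patent states. A sketch:

```python
import numpy as np

def rot_z(a):
    """Rotation by angle a (radians) about the optical axis; a stand-in for a
    full gyroscope-derived camera-to-world rotation."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def inter_image_warp(K1, R1, K2, R2):
    """3x3 warp taking pixels of the second image into the first image's frame:
    pixel -> ray (inv K2) -> world (R2) -> camera 1 (R1^T) -> pixel (K1)."""
    return K1 @ R1.T @ R2 @ np.linalg.inv(K2)

def warp_point(H, x, y):
    """Apply the homogeneous warp to a single pixel coordinate."""
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]

# Illustrative intrinsics; with identical poses the warp reduces to identity.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
H = inter_image_warp(K, rot_z(0.0), K, rot_z(0.0))
```

With a nonzero relative rotation, `H` rotates every pixel of the second image about the shared principal point, which is exactly the alignment the gyroscope data predicts before feature-based refinement.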
8. The method of any one of claims 1 to 7, wherein optimizing the gyroscope information of the second image according to the first matched feature point set comprises:
performing an inverse affine transformation on the matched feature points corresponding to the second image in the first matched feature point set, to obtain a second matched feature point set of the first image and the second image; and
optimizing the gyroscope information of the second image according to the second matched feature point set.
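A sketch of the pull-back in claim 8: the matches were found in the coordinates of the first stitched image, so the second image's matched points are mapped back through the inverse of the warp that placed that image into the panorama. A 3x3 homogeneous warp `H` is assumed here purely for illustration.

```python
import numpy as np

def pull_back_points(H, pts):
    """Apply inv(H) to an (N, 2) array of pixel coordinates."""
    pts = np.asarray(pts, dtype=float)
    homo = np.column_stack([pts, np.ones(len(pts))])  # to homogeneous coordinates
    back = homo @ np.linalg.inv(H).T                  # row-vector convention
    return back[:, :2] / back[:, 2:3]                 # de-homogenize

# Illustrative warp: the second image was translated by (30, -12) into the
# panorama, so the inverse transform recovers the original coordinates.
H = np.array([[1.0, 0.0, 30.0],
              [0.0, 1.0, -12.0],
              [0.0, 0.0, 1.0]])
orig_pts = pull_back_points(H, [[130.0, 88.0], [30.0, -12.0]])
```

The recovered coordinates live in the second image's own frame, which is what a gyroscope-refinement step needs as input.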
9. The method of any one of claims 1 to 7, wherein the second stitching of the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image comprises:
mapping the first image and the second image onto a cylindrical coordinate system according to the camera intrinsic parameter information and gyroscope information of the first image and the camera intrinsic parameter information and optimized gyroscope information of the second image.
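One common form of the cylindrical mapping in claim 9 projects a pixel (x, y) to an azimuth around the cylinder and a height along it, using the focal length f and principal point (cx, cy). The patent does not spell out the formulas, so the following is a conventional sketch, not the claimed implementation:

```python
import numpy as np

def to_cylinder(x, y, f, cx, cy):
    """Map a pixel to cylindrical coordinates: azimuth around the cylinder
    and height along it, scaled back to pixel units."""
    theta = np.arctan2(x - cx, f)          # azimuth of the viewing ray
    h = (y - cy) / np.hypot(x - cx, f)     # height normalized by ray length
    return f * theta + cx, f * h + cy

# The principal point is a fixed point of the mapping.
u, v = to_cylinder(320.0, 240.0, f=800.0, cx=320.0, cy=240.0)
```

Projecting both images onto the same cylinder makes a rotation about the vertical axis a pure horizontal shift, which is why cylindrical coordinates suit the second, refined stitching pass.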
10. The method according to any one of claims 1 to 7, wherein acquiring the first image and the second image to be stitched comprises:
locking the exposure parameters and white balance parameters of the shooting device, and capturing the first image and the second image with the shooting device.
11. An image stitching apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to acquire a first image and a second image to be stitched;
a first stitched image obtaining module, configured to perform a first stitching of the first image and the second image according to gyroscope information corresponding to the first image and the second image, to obtain a first stitched image;
a gyroscope information optimization module, configured to optimize the gyroscope information of the second image according to a first matched feature point set, the first matched feature point set being a set of feature points of the first image and the second image that match in an overlapping region of the first stitched image;
a second stitched image obtaining module, configured to perform a second stitching of the first image and the second image according to the gyroscope information of the first image and the optimized gyroscope information of the second image, to obtain a second stitched image; and
a stitched image obtaining module, configured to perform seam processing on the overlapping region of the second stitched image based on the shooting angles corresponding to the first image and the second image, to obtain a stitched image of the first image and the second image.
12. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
14. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
CN202210516921.8A 2022-05-13 2022-05-13 Image splicing method, device, equipment and medium Active CN114626991B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210516921.8A CN114626991B (en) 2022-05-13 2022-05-13 Image splicing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210516921.8A CN114626991B (en) 2022-05-13 2022-05-13 Image splicing method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN114626991A true CN114626991A (en) 2022-06-14
CN114626991B CN114626991B (en) 2022-08-12

Family

ID=81907094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210516921.8A Active CN114626991B (en) 2022-05-13 2022-05-13 Image splicing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114626991B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110203A1 (en) * 2008-11-04 2010-05-06 Canon Kabushiki Kaisha Image-shake correction apparatus and imaging apparatus
WO2018171429A1 (en) * 2017-03-22 2018-09-27 Tencent Technology (Shenzhen) Co., Ltd. Image stitching method, device, terminal, and storage medium
CN108615223A (en) * 2018-05-08 2018-10-02 Nanjing Chibeixi Technology Co., Ltd. Buccal-side tooth-and-lip panorama stitching method based on a local optimization algorithm
CN112396639A (en) * 2019-08-19 2021-02-23 ArcSoft Corporation Limited Image alignment method
CN112866556A (en) * 2019-11-28 2021-05-28 Nanjing University of Science and Technology Image stabilization method and system based on gyroscope and feature point matching
CN113112518A (en) * 2021-04-19 2021-07-13 Shenzhen SmartMore Information Technology Co., Ltd. Feature extractor generation method and device based on stitched images, and computer equipment
CN113556464A (en) * 2021-05-24 2021-10-26 Vivo Mobile Communication Co., Ltd. Shooting method and device, and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU Wenshuang et al., "Optical Image Stitching Based on MEMS Gyroscope", Acta Photonica Sinica *
ZHAO Liying et al., "Research on a Fast Stitching Method for UAV Aerial Images", Journal of Changchun University of Science and Technology (Natural Science Edition) *

Also Published As

Publication number Publication date
CN114626991B (en) 2022-08-12

Similar Documents

Publication Publication Date Title
US10395341B2 (en) Panoramic image generation method and apparatus for user terminal
CN107967669A (en) Method, apparatus, computer equipment and the storage medium of picture processing
CN111192352B (en) Map rendering method, map rendering device, computer equipment and storage medium
CN113963072B (en) Binocular camera calibration method and device, computer equipment and storage medium
CN114742703A (en) Method, device and equipment for generating binocular stereoscopic panoramic image and storage medium
WO2024093763A1 (en) Panoramic image processing method and apparatus, computer device, medium and program product
CN112288878B (en) Augmented reality preview method and preview device, electronic equipment and storage medium
CN114626991B (en) Image splicing method, device, equipment and medium
CN108875611A (en) Video actions recognition methods and device
CN110288691B (en) Method, apparatus, electronic device and computer-readable storage medium for rendering image
CN114022518B (en) Method, device, equipment and medium for acquiring optical flow information of image
CN108986031A (en) Image processing method, device, computer equipment and storage medium
CN109360176A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN115272155A (en) Image synthesis method, image synthesis device, computer equipment and storage medium
CN115272470A (en) Camera positioning method and device, computer equipment and storage medium
CN114511448B (en) Method, device, equipment and medium for splicing images
CN114519753A (en) Image generation method, system, electronic device, storage medium and product
CN113538318A (en) Image processing method, image processing device, terminal device and readable storage medium
CN109816613A (en) Image completion method and device
CN116527908B (en) Motion field estimation method, motion field estimation device, computer device and storage medium
CN110443835B (en) Image registration method, device, equipment and storage medium
US11812153B2 (en) Systems and methods for fisheye camera calibration and bird's-eye-view image generation in a simulation environment
CN109756674B (en) Road facility updating method, device, computer equipment and storage medium
CN116295031B (en) Sag measurement method, sag measurement device, computer equipment and storage medium
CN116051723B (en) Bundling adjustment method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant