CN117036166A - Method for realizing image stitching based on spatial information - Google Patents

Method for realizing image stitching based on spatial information

Info

Publication number
CN117036166A
CN117036166A
Authority
CN
China
Prior art keywords
camera
image
color
stitching
color camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310951653.7A
Other languages
Chinese (zh)
Inventor
马川
许乙山
邹易峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaozhi Future Chengdu Technology Co ltd
Original Assignee
Xiaozhi Future Chengdu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaozhi Future Chengdu Technology Co ltd filed Critical Xiaozhi Future Chengdu Technology Co ltd
Priority to CN202310951653.7A priority Critical patent/CN117036166A/en
Publication of CN117036166A publication Critical patent/CN117036166A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing

Abstract

The application discloses a method for realizing image stitching based on spatial information, used for stitching multiple photos pairwise. The method comprises: obtaining the camera parameters of a first photo and a second photo to be stitched; establishing a camera conversion matrix through calibration; and using the conversion matrix to map all points on image $I_B^c$ onto image $I_A^c$, i.e., realizing the stitching of image $I_A^c$ and image $I_B^c$. Based on conversion between image spatial coordinates, the application establishes the correspondence of identical positions across different images; it can technically avoid existing feature-point extraction entirely, sidesteps the error problems inherent in probabilistic computation, achieves matching and alignment at the level of true pixel coordinates, and realizes complete stitching among multiple images.

Description

Method for realizing image stitching based on spatial information
Technical Field
The application relates to the technical field of image stitching, and in particular to a method for realizing image stitching based on spatial coordinate information.
Background
Image stitching is the technique of merging multiple independent images that share an overlapping region (typically RGB bitmaps) into one complete image by finding and matching the same features across them. Existing stitching techniques almost always take the image itself, or pixels within the image, as the research object, and pursue high-precision stitching by continually improving the reliability of feature points through optimized deep-learning neural networks. Such approaches can hardly escape algorithm thresholds: even with denoising, confidence measures, robustness, iteration and verification added, the color values of pixels are bounded, so the features of adjacent pixels inevitably overlap. In theory, methods that take the image or its pixels as the object can approach complete stitching ever more closely; in practice their accuracy almost never reaches truly complete stitching. The problem is worst when the images to be stitched offer few extractable features and their gray values differ only slightly, which further reduces the accuracy of feature extraction.
To obtain complete pictures with high stitching precision, the prior art contains a large number of mature stitching techniques. A representative example, found by searching the keyword "image stitching", is the image stitching method disclosed in granted publication CN104270615B, which establishes an optimal stitching seam as follows: extract a number of feature positions in the reference image; match feature points, confirming matches by coarse screening, robust estimation, fine screening and removal of mismatched points; adjust the camera position according to the relative positions of the matched feature points; construct a Laplacian pyramid for the two images; construct a Gaussian pyramid for the optimal stitching seam; perform weighted-average fusion on the overlapping areas of the two images; check whether the images are fused, and if so proceed to step seven, otherwise return to step two; apply interpolation expansion to the fused image and adjust illumination and color balance; finish the stitching and output a smooth image. This prior art combines a Gaussian-pyramid algorithm with robust estimation to obtain as well-matched a stitched image as possible, but robust estimation and feature extraction are inevitably probabilistic and remain subject to the feature content of the objects being stitched, so for images whose gray values change only gradually the stitching lacks accuracy and stability. The applicant therefore abandons this research direction: the image itself and its pixel arrangement are no longer the research object, image features are no longer extracted and used as the basis for stitching, and instead entirely new spatial coordinate information is used for one-to-one matching, solving the inaccuracy caused by image features that are hard to extract or too few valid extracted features.
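For illustration only, the weighted-average fusion that such prior-art pipelines apply to the overlap region is typically a linear feathering of the two exposures. The following is a minimal sketch under the assumption of two equally sized, horizontally overlapping float-valued strips; it is not code from the cited patent:

```python
import numpy as np

def feather_blend(strip1: np.ndarray, strip2: np.ndarray) -> np.ndarray:
    """Linear weighted-average fusion of two aligned overlap strips.

    strip1/strip2: float arrays of shape (H, W, 3) covering the same overlap
    region; the weight slides from 1 (strip1 side) to 0 (strip2 side).
    """
    w = np.linspace(1.0, 0.0, strip1.shape[1])[None, :, None]
    return strip1 * w + strip2 * (1.0 - w)
```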
Disclosure of Invention
To address the need in the prior art to stitch multiple scenes, the application provides a method for realizing image stitching based on spatial information, which achieves accurate stitching of multiple images and can overcome at least one of the following problems of the prior art:
1. low stitching quality caused by difficult feature extraction, or by errors in feature-point extraction precision, when an RGB bitmap is taken as the object of feature extraction;
2. estimation errors and comparison errors being introduced by feature extraction and comparison on the image, leading to feature-extraction mistakes;
3. feature points being difficult to extract when gray values change only gradually (linearly) between adjacent pixels, making stitching difficult.
To this end, the application adopts an approach completely different from existing image stitching technology. Existing stitching always takes RGB images as the research object and overlays the same point features found in different RGB images to achieve stitching; its quality therefore depends mainly on the extraction precision of the feature points and on the degree of matching between corresponding feature points across images, and since it cannot escape factors such as the extraction algorithm and the difficulty of the image content, its accuracy and stability fluctuate. To make the stitching accuracy high and independent of the quality and content of the images, the application adopts the following technical scheme:
the method for realizing image stitching based on space information is used for stitching a plurality of photos in pairs and is characterized by comprising the following steps of: comprises the following steps
Step100, obtaining the camera parameters of the first photo and the second photo to be stitched, which are respectively:

The first photo is taken by a camera A, where the color camera of camera A has intrinsic parameters $K_A^c$, the depth camera has intrinsic parameters $K_A^d$, the conversion matrix from the depth camera to the color camera is $T_A^{d2c}$, the image taken by the color camera of camera A is $I_A^c$, and the image taken by the depth camera is $I_A^d$;

The second photo is taken by a camera B, where the color camera of camera B has intrinsic parameters $K_B^c$, the depth camera has intrinsic parameters $K_B^d$, the conversion matrix from the depth camera to the color camera is $T_B^{d2c}$, the image taken by the color camera of camera B is $I_B^c$, and the image taken by the depth camera is $I_B^d$;
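For concreteness, the per-camera parameters of Step100 can be gathered in a small structure such as the sketch below. This is an illustrative assumption, not part of the claimed method: it presumes 3×3 pinhole intrinsic matrices and a 4×4 homogeneous depth-to-color transform, and all names are invented for the example.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class RGBDCamera:
    """One RGB-D unit as described in Step100 (illustrative field names)."""
    K_color: np.ndarray  # 3x3 intrinsics of the color camera (K^c)
    K_depth: np.ndarray  # 3x3 intrinsics of the depth camera (K^d)
    T_d2c: np.ndarray    # 4x4 transform from depth-camera to color-camera frame
    color: np.ndarray    # HxWx3 color image (I^c)
    depth: np.ndarray    # HxW depth image in meters (I^d)
```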
Step200, establishing a camera conversion matrix through calibration: using the AprilTag algorithm, the color camera of camera A and the color camera of camera B each identify the same marker point, yielding respectively the transformation matrix $T_A$ of the marker point relative to the spatial coordinate system of camera A's color camera and the transformation matrix $T_B$ of the marker point relative to the spatial coordinate system of camera B's color camera,

wherein $T_{BA} = T_A \, T_B^{-1}$ denotes the conversion matrix between the color camera of camera B and the color camera of camera A;
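Assuming the AprilTag detection returns each marker pose as a 4×4 homogeneous matrix, the conversion matrix of Step200 follows by composing one pose with the inverse of the other: a point $P$ in the marker frame satisfies $P_A = T_A P$ and $P_B = T_B P$, hence $P_A = T_A T_B^{-1} P_B$. A minimal sketch (the tag detection itself is omitted; names are illustrative):

```python
import numpy as np

def camera_conversion_matrix(T_A: np.ndarray, T_B: np.ndarray) -> np.ndarray:
    """Return T_BA, mapping points from camera B's color frame into camera A's.

    T_A / T_B: 4x4 poses of the same marker in A's and B's color frames.
    """
    return T_A @ np.linalg.inv(T_B)
```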
step300, image stitching, takingPoint on->Bonding ofCan calculate +.>At the position ofCoordinates of a spatial coordinate system of>
Then pass throughWill->Go to +.>Coordinates of a spatial coordinate system of (c)
Bonding ofIt can be mapped to +.>Points on
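Step300 is the standard back-project / transform / re-project chain. The sketch below assumes the depth image has already been registered to the color camera, so the depth of a color pixel can be read directly; the registration via $K_B^d$ and $T_B^{d2c}$ is omitted, and all names are illustrative:

```python
import numpy as np

def reproject_pixel(p_B, d, K_B_color, K_A_color, T_BA):
    """Map pixel p_B=(u, v) of B's color image, with depth d in meters,
    to pixel coordinates in A's color image (the Step300 chain)."""
    u, v = p_B
    # Back-project into B's color-camera frame: P_B = d * inv(K_B^c) [u v 1]^T
    P_B = d * np.linalg.inv(K_B_color) @ np.array([u, v, 1.0])
    # Convert into A's color-camera frame with the homogeneous matrix T_BA.
    P_A = (T_BA @ np.append(P_B, 1.0))[:3]
    # Project with A's color intrinsics and dehomogenize
    # (assumes the point lies in front of camera A, i.e. p[2] > 0).
    p = K_A_color @ P_A
    return p[0] / p[2], p[1] / p[2]
```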
Step400, repeating Step200 to Step300 until all points on image $I_B^c$ are mapped onto image $I_A^c$, i.e., realizing the stitching of image $I_A^c$ and image $I_B^c$.
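Step400 then simply iterates that mapping over every pixel carrying a depth measurement. A hedged sketch of the loop, reusing the RGBDCamera and reproject_pixel names from the sketches above (the output canvas is kept at the size of $I_A^c$; a full stitcher would enlarge the canvas and blend the seam):

```python
def stitch_B_onto_A(cam_A, cam_B, T_BA):
    """Splat every valid point of B's color image onto A's image plane."""
    out = cam_A.color.copy()                    # start from I_A^c
    H, W = cam_B.depth.shape
    for v in range(H):
        for u in range(W):
            d = cam_B.depth[v, u]
            if d <= 0:                          # no depth measured here
                continue
            x, y = reproject_pixel((u, v), d, cam_B.K_color,
                                   cam_A.K_color, T_BA)
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < out.shape[0] and 0 <= xi < out.shape[1]:
                out[yi, xi] = cam_B.color[v, u]  # B's pixel in A's frame
    return out
```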
The method for realizing image stitching based on spatial information is applied to wide-angle shooting: a single camera shooting from multiple angles obtains a plurality of photos with overlapping regions, and those photos are stitched pairwise by the method described above.
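One way to read the wide-angle application is as pairwise chaining: each new shot is registered to the mosaic built so far through its marker-derived conversion matrix. A loose sketch under that assumption (all names are placeholders; the exact representation of a "shot" depends on how the pair-stitching step is implemented):

```python
def stitch_sequence(shots, conversions, stitch_pair):
    """Fold an ordered list of overlapping shots into one mosaic.

    conversions[i] maps shots[i+1]'s color frame into the frame of the
    mosaic so far; stitch_pair merges the running mosaic with the next shot.
    """
    mosaic = shots[0]
    for shot, T in zip(shots[1:], conversions):
        mosaic = stitch_pair(mosaic, shot, T)
    return mosaic
```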
The beneficial effects are that:
the application realizes the same position correspondence between different images based on the conversion between the image space coordinates, can completely avoid the extraction of the existing characteristic points technically, realizes the matching and alignment of the real pixel coordinate level based on the error problem existing in probability calculation, and realizes the complete splicing between multiple images.
Drawings
In order to illustrate the embodiments of the application or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of the stitching principle of the application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the application clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the application. The components of the embodiments of the application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the claimed scope of the application but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
Example 1:
The method for realizing image stitching based on spatial information is used for stitching a plurality of photos pairwise and comprises the following steps:
Step100, obtaining the camera parameters of the first photo and the second photo to be stitched, which are respectively:
The first photo is taken by a camera A, where the color camera of camera A has intrinsic parameters $K_A^c$, the depth camera has intrinsic parameters $K_A^d$, the conversion matrix from the depth camera to the color camera is $T_A^{d2c}$, the image taken by the color camera of camera A is $I_A^c$, and the image taken by the depth camera is $I_A^d$;
The second photo is taken by a camera B, where the color camera of camera B has intrinsic parameters $K_B^c$, the depth camera has intrinsic parameters $K_B^d$, the conversion matrix from the depth camera to the color camera is $T_B^{d2c}$, the image taken by the color camera of camera B is $I_B^c$, and the image taken by the depth camera is $I_B^d$;
Step200, establishing a camera conversion matrix through calibration: using the AprilTag algorithm, the color camera of camera A and the color camera of camera B each identify the same marker point, yielding respectively the transformation matrix $T_A$ of the marker point relative to the spatial coordinate system of camera A's color camera and the transformation matrix $T_B$ of the marker point relative to the spatial coordinate system of camera B's color camera,
wherein $T_{BA} = T_A \, T_B^{-1}$ denotes the conversion matrix between the color camera of camera B and the color camera of camera A;
Step300, image stitching: take a point $p_B = (u, v)$ on $I_B^c$; combining it with the depth value $d$ at that point (obtained from $I_B^d$ via $K_B^d$ and $T_B^{d2c}$), its coordinates in the spatial coordinate system of camera B's color camera can be calculated as $P_B = d \, (K_B^c)^{-1} [u \;\; v \;\; 1]^{T}$;
then, through $T_{BA}$, $P_B$ is converted to the coordinates $P_A = T_{BA} \, P_B$ in the spatial coordinate system of camera A's color camera;
combining with $K_A^c$, it can then be mapped to the point $p_A$ on $I_A^c$ by the projection $\lambda \, [u_A \;\; v_A \;\; 1]^{T} = K_A^c \, P_A$;
Step400, repeating Step200 to Step300 until all points on image $I_B^c$ are mapped onto image $I_A^c$, i.e., realizing the stitching of image $I_A^c$ and image $I_B^c$. It should be noted that the first photo and the second photo may be two photos taken by the same camera A at different positions and/or at different angles.
The above description covers only preferred embodiments of the application and is not intended to limit the application; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the application shall be included within its scope of protection.

Claims (2)

1. A method for realizing image stitching based on spatial information, used for stitching a plurality of photos pairwise, characterized by comprising the following steps:
Step100, obtaining the camera parameters of the first photo and the second photo to be stitched, which are respectively:
The first photo is taken by a camera A, where the color camera of camera A has intrinsic parameters $K_A^c$, the depth camera has intrinsic parameters $K_A^d$, the conversion matrix from the depth camera to the color camera is $T_A^{d2c}$, the image taken by the color camera of camera A is $I_A^c$, and the image taken by the depth camera is $I_A^d$;
The second photo is taken by a camera B, where the color camera of camera B has intrinsic parameters $K_B^c$, the depth camera has intrinsic parameters $K_B^d$, the conversion matrix from the depth camera to the color camera is $T_B^{d2c}$, the image taken by the color camera of camera B is $I_B^c$, and the image taken by the depth camera is $I_B^d$;
Step200, establishing a camera conversion matrix through calibration: using the AprilTag algorithm, the color camera of camera A and the color camera of camera B each identify the same marker point, yielding respectively the transformation matrix $T_A$ of the marker point relative to the spatial coordinate system of camera A's color camera and the transformation matrix $T_B$ of the marker point relative to the spatial coordinate system of camera B's color camera,
wherein $T_{BA} = T_A \, T_B^{-1}$ denotes the conversion matrix between the color camera of camera B and the color camera of camera A;
Step300, image stitching: take a point $p_B = (u, v)$ on $I_B^c$; combining it with the depth value $d$ at that point (obtained from $I_B^d$ via $K_B^d$ and $T_B^{d2c}$), its coordinates in the spatial coordinate system of camera B's color camera can be calculated as $P_B = d \, (K_B^c)^{-1} [u \;\; v \;\; 1]^{T}$;
then, through $T_{BA}$, $P_B$ is converted to the coordinates $P_A = T_{BA} \, P_B$ in the spatial coordinate system of camera A's color camera;
combining with $K_A^c$, it can then be mapped to the point $p_A$ on $I_A^c$ by the projection $\lambda \, [u_A \;\; v_A \;\; 1]^{T} = K_A^c \, P_A$;
Step400, repeating Step200 to Step300 until all points on image $I_B^c$ are mapped onto image $I_A^c$, i.e., realizing the stitching of image $I_A^c$ and image $I_B^c$.
2. Application of the method for realizing image stitching based on spatial information in wide-angle shooting, characterized in that: a single camera shooting from multiple angles obtains a plurality of photos with overlapping regions, and the photos with overlapping regions are stitched pairwise by the method for realizing image stitching based on spatial information according to claim 1.
CN202310951653.7A 2023-07-31 2023-07-31 Method for realizing image stitching based on spatial information Pending CN117036166A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310951653.7A CN117036166A (en) 2023-07-31 2023-07-31 Method for realizing image stitching based on spatial information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310951653.7A CN117036166A (en) 2023-07-31 2023-07-31 Method for realizing image stitching based on spatial information

Publications (1)

Publication Number Publication Date
CN117036166A 2023-11-10

Family

ID=88629097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310951653.7A Pending CN117036166A (en) 2023-07-31 2023-07-31 Method for realizing image stitching based on spatial information

Country Status (1)

Country Link
CN (1) CN117036166A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination