WO2023169281A1 - Image registration method, device, storage medium and electronic device - Google Patents

Image registration method, device, storage medium and electronic device

Info

Publication number
WO2023169281A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
optical flow
registered
control point
reference image
Prior art date
Application number
PCT/CN2023/079053
Other languages
English (en)
French (fr)
Inventor
曲超
苏坦
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司 filed Critical 影石创新科技股份有限公司
Publication of WO2023169281A1 publication Critical patent/WO2023169281A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches

Definitions

  • the present application relates to the field of image processing technology, and in particular, to an image registration method, device, storage medium and electronic equipment.
  • Image registration is a hot and difficult topic in the field of image processing research. Its purpose is to compare and fuse images of the same object acquired under different conditions (different times, lighting, shooting angles, etc.). Specifically, for two images to be registered, a spatial transformation is obtained through a series of operations, and one image is mapped onto the other, so that points at the same position in the two images correspond one to one.
  • Image registration technology is widely used in target detection, model reconstruction, motion estimation, feature matching, tumor detection, lesion localization, angiography, geological exploration, aerial reconnaissance and other fields.
  • Image registration is an important step in image processing. If the results of image registration are inaccurate, operations such as image stitching after image registration will not be effective. Therefore, it is necessary to improve the accuracy of image registration.
  • Embodiments of the present application provide an image registration method, device, storage medium and electronic equipment, which can improve the accuracy of image registration.
  • the embodiment of the present application provides an image registration method, including:
  • the image to be registered is registered to the reference image.
  • An embodiment of the present application also provides an image registration device, including:
  • the acquisition module is used to acquire the reference image and the image to be registered
  • a determination module used to determine matching control point pairs in the reference image and the image to be registered based on the optical flow method
  • the mapping module is used to obtain the first mapping relationship between the reference image and the image to be registered based on the control point pair using the thin plate spline interpolation method;
  • the registration module is used to register the image to be registered to the reference image based on the first mapping relationship.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • a computer program is stored on the storage medium.
  • the computer program is executed by a processor to implement the steps in any image registration method provided by the embodiments of the present application.
  • Embodiments of the present application also provide an electronic device.
  • the electronic device includes a processor, a memory, and a computer program stored in the memory and executable on the processor.
  • the processor executes the computer program to implement any of the methods provided by the embodiments of the present application. Steps in an image registration method.
  • In the embodiment of this application, the reference image and the image to be registered are first obtained; then the matching control point pairs in the reference image and the image to be registered are determined according to the optical flow method; next, based on the control point pairs, the thin plate spline interpolation method is used to obtain the first mapping relationship between the reference image and the image to be registered; finally, based on the first mapping relationship, the image to be registered is registered to the reference image.
  • The embodiment of the present application combines the optical flow method and the thin plate spline interpolation method. Through the optical flow method, uniform and widely distributed control points can be obtained; through the thin plate spline interpolation method, a smooth mapping can be obtained based on these control points, thereby reducing image deformation while achieving registration and improving the accuracy of image registration.
  • Figure 1 is a schematic flowchart of a first image registration method provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of a scene provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of an optical flow control point provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of the first image stitching process provided by the embodiment of the present application.
  • Figure 5 is a schematic diagram of the second image stitching process provided by the embodiment of the present application.
  • FIG. 6 is a second schematic flowchart of the image registration method provided by the embodiment of the present application.
  • Figure 7 is a first structural schematic diagram of an image registration device provided by an embodiment of the present application.
  • FIG. 8 is a second structural schematic diagram of an image registration device provided by an embodiment of the present application.
  • FIG. 9 is a first structural schematic diagram of an electronic device provided by an embodiment of the present application.
  • Figure 10 is a second structural schematic diagram of an electronic device provided by an embodiment of the present application.
  • Image registration is to map one image to another image by finding a spatial transformation between two images, so that points corresponding to the same position in space in the two images correspond one to one, thereby achieving the purpose of information fusion.
  • the registration of multiple images can also be achieved.
  • every two adjacent images can be taken as a group for registration.
  • the registration of consecutive multiple images can be achieved.
  • registration methods that can be used include registration methods based on image grayscale, registration methods based on image features, optical flow methods, etc.
  • the registration method based on image grayscale uses the grayscale information of the entire image to establish a similarity measure between two images to register the images.
  • This method requires that the grayscale distributions of the reference image and the image to be registered be correlated to a certain degree. It can only accommodate translation and small rotation transformations, requires a large amount of computation, and is inefficient. It is suitable for images with little detail and sparse texture, and is mainly used in the field of medical image registration.
  • The registration method based on image features uses stable features that are less affected by image transformation, brightness change, noise, etc., such as the edges, corners, and centers of closed regions of objects in the image, to register the image, so it is more widely used.
  • However, existing image registration methods based on image features use little feature information; for example, only corner features or only contour features are used. The information in the image is heavily compressed and only a small part of it is exploited, so this approach is sensitive to errors in feature extraction and feature matching, and the quality of image registration is not high. Moreover, this approach places high demands on the distribution of control points, and registration is difficult in areas where control points are sparse.
  • Optical flow is a concept in motion detection of objects in the field of view. It is used to describe the movement of the observed target, surface or edge caused by the movement of the observer.
  • The optical flow method is very useful in pattern recognition, computer vision and other image processing fields. It can be used for motion detection, object segmentation, calculation of time to collision and object expansion, motion compensation coding, or three-dimensional measurement from object surfaces and edges, and so on.
  • the optical flow method cannot guarantee that the optical flow calculation of all pixels is correct. If there are occluded areas in the image, it is more difficult to derive correct optical flow from these occluded areas. If the wrong optical flow is used to map the image, it will easily cause image distortion, making the mapped image not smooth enough and the registration effect will be poor.
  • the optical flow method is only applicable to the alignment of overlapping areas of two images, and it is difficult to transform non-overlapping parts as well.
  • In image registration based on the optical flow method, the non-overlapping areas are generally not changed; instead, the two images are gradually stretched and aligned based on the optical flow in the overlapping areas.
  • this method of only stretching and aligning the overlapping area will result in unnatural image transition, and the final registration effect will be poor.
  • Image registration is an important part of image processing. If the results of image registration are not ideal, operations such as image stitching after image registration will not be effective.
  • embodiments of the present application provide an image registration method.
  • The image registration method provided by this application combines the optical flow method and the thin plate spline interpolation method to stretch overlapping and non-overlapping areas simultaneously and adjust the overall relative position between images, making the image transition more natural and achieving a better registration effect.
  • The thin plate spline interpolation (TPS) method is a 2D interpolation method that determines a deformation mapping from the corresponding control point sets in two related images. The deformation function is the smoothest surface (the one with the least bending) that passes through all given points.
  • The name "thin plate" comes from the fact that thin plate splines approximate the behavior of a thin sheet of metal forced through the same control points.
  • Thin plate spline mapping determines the key coefficients of the transformation from the source image to the target image; substituting the coordinates of any point in the source image into the resulting formula yields the coordinates of the corresponding point in the target image, thereby aligning the two images.
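  • As a concrete illustration of these "key coefficients", the standard TPS linear system can be solved directly. The sketch below (plain NumPy, not the patent's implementation) fits the coefficients from matched control points and maps arbitrary source coordinates through the spline:

```python
import numpy as np

def tps_fit(src_pts, dst_pts):
    """Fit a 2-D thin plate spline mapping src_pts -> dst_pts.

    src_pts, dst_pts: (n, 2) arrays of matched control points.
    Returns the (n + 3, 2) coefficient matrix of the standard TPS system
    [K P; P^T 0] [w; a] = [dst; 0], with kernel U(r) = r^2 log r.
    """
    n = len(src_pts)
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(d + 1e-12), 0.0)   # radial kernel
    P = np.hstack([np.ones((n, 1)), src_pts])            # affine part [1 x y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst_pts
    return np.linalg.solve(A, b)

def tps_apply(coef, src_pts, query):
    """Map query points (m, 2) through the fitted spline."""
    d = np.linalg.norm(query[:, None, :] - src_pts[None, :, :], axis=-1)
    U = np.where(d > 0, d**2 * np.log(d + 1e-12), 0.0)
    P = np.hstack([np.ones((len(query), 1)), query])
    return U @ coef[:len(src_pts)] + P @ coef[len(src_pts):]
```

By construction the spline passes exactly through the given control points, which is the interpolation property the patent relies on.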
  • the execution subject of the image registration method provided by the embodiment of the present application may be the image registration device provided by the embodiment of the present application, or an electronic device integrating the image registration device.
  • the image registration device can be implemented in hardware or software.
  • the electronic device may be a computer device, which may be a terminal device such as a smartphone, a tablet, a personal computer, or a server. The following is a detailed analysis and description.
  • FIG. 1 is a schematic flowchart of a first image registration method provided by an embodiment of the present application.
  • the image registration method may include:
  • The reference image and the image to be registered in the embodiment of the present application can be collected by remote sensing image acquisition devices such as infrared cameras, infrared thermal imaging cameras, and high-resolution visible light cameras. The at least two collected images can be shot continuously, or at short intervals, of the same shooting scene.
  • multiple images may be acquired, and the reference image and the image to be registered are determined from the multiple images.
  • the reference image and the image to be registered may be any two images selected by the device from a set of cached images cached in the background for synthesizing the panoramic image during the process of capturing a panoramic image.
  • The reference image and the image to be registered may be two images of the same scene captured by an image acquisition device at different angles. That is, the reference image and the image to be registered both include images of the same part of the scene and images of different parts of the scene; their image contents overlap but are not exactly the same. Therefore, there are overlapping areas and non-overlapping areas in the reference image and the image to be registered.
  • Figure 2 is a schematic diagram of a scene provided by an embodiment of the present application.
  • the image inside the rectangular frame is the image of the same part of the same scene in the two images
  • the image outside the rectangular frame is the image of different parts of the same scene in the two images.
  • the two images in Figure 2 can be used as the reference image and the image to be registered respectively.
  • When the left image is the reference image and the right image is the image to be registered, the right image is aligned toward the left image; when the left image is the image to be registered and the right image is the reference image, the left image is aligned toward the right image.
  • Optical flow is the instantaneous speed of pixel movement of a spatially moving object on the observation imaging plane.
  • The optical flow method uses the changes of pixels in the time domain across an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, thereby calculating the motion information of objects between adjacent frames.
  • optical flow is caused by the movement of the objects themselves in the scene, the movement of the camera, or a combination of both.
  • Optical flow expresses the changes in the image. Since it contains information about the target's movement, it can be used by the observer to determine the movement of the target.
  • the optical flow field is a two-dimensional vector field, which reflects the changing trend of the grayscale of each point on the image. It can be regarded as the instantaneous velocity field generated by the movement of pixels with grayscale on the image plane.
  • the information it contains is the instantaneous motion velocity vector information of each pixel.
  • the instantaneous change rate of grayscale at a specific coordinate point on the two-dimensional image plane is usually defined as the optical flow vector.
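  • The "instantaneous change rate of grayscale" definition above is usually derived from the brightness constancy assumption; a standard sketch of that derivation (not spelled out in the patent) is:

```latex
% Brightness constancy: a moving pixel keeps its intensity
I(x + \Delta x,\; y + \Delta y,\; t + \Delta t) = I(x, y, t)

% A first-order Taylor expansion yields the optical flow constraint
\frac{\partial I}{\partial x}\, u + \frac{\partial I}{\partial y}\, v + \frac{\partial I}{\partial t} = 0,
\qquad (u, v) = \left(\frac{\Delta x}{\Delta t},\; \frac{\Delta y}{\Delta t}\right)
```

One equation in two unknowns per pixel, which is why optical flow algorithms add smoothness or matching assumptions to recover the full vector field.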
  • the optical flow method is used to calculate the optical flow fields of the reference image and the image to be registered, thereby determining the relative motion relationship between the reference image and the image to be registered.
  • this embodiment of the present application calculates bidirectional optical flow for the reference image and the image to be registered.
  • the optical flow calculation methods that can be used include DIS (Dense Inverse Search-based method) optical flow algorithm, RAFT (Recurrent All-Pairs Field Transforms for Optical Flow) optical flow algorithm, etc.
  • the DIS optical flow algorithm has better real-time performance, while the RAFT optical flow algorithm has higher accuracy.
  • One step of calculating the bidirectional optical flow in this application includes using the reference image as a reference to perform optical flow calculation on the image to be registered, so as to obtain the first optical flow field of the image to be registered.
  • the first optical flow field includes the first optical flow vector (u1, v1) of each pixel in the image to be registered.
  • Another step in calculating the bidirectional optical flow includes performing optical flow calculation on the reference image using the image to be registered as a reference to obtain the second optical flow field of the reference image.
  • the second optical flow field includes the second optical flow vector (u2, v2) of each pixel in the reference image.
  • In this way, the first optical flow vectors of all pixels in the image to be registered and the second optical flow vectors of all pixels in the reference image are determined.
  • The pixels in the reference image and the image to be registered can be sampled at equal intervals based on the first optical flow field and the second optical flow field; at each sampling step, a first optical flow control point is determined in the image to be registered and a second optical flow control point is determined in the reference image.
  • the first optical flow control point and the second optical flow control point obtained by corresponding sampling form a matching control point pair.
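  • The equal-interval sampling described above could be sketched as follows; the grid step and data layout are illustrative assumptions, not values from the patent:

```python
import numpy as np

def sample_control_pairs(flow_mov, step=16):
    """Sample matching control point pairs at equal intervals.

    flow_mov: (H, W, 2) first optical flow field of the image to be
    registered, where flow_mov[y, x] = (u1, v1) points toward the
    reference image. Returns (pts_mov, pts_ref): (n, 2) arrays of (x, y)
    coordinates such that pts_ref = pts_mov + flow at the sampled pixels.
    """
    H, W = flow_mov.shape[:2]
    # Regular grid, offset by half a step to stay away from the borders.
    ys, xs = np.mgrid[step // 2:H:step, step // 2:W:step]
    pts_mov = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    uv = flow_mov[pts_mov[:, 1].astype(int), pts_mov[:, 0].astype(int)]
    pts_ref = pts_mov + uv   # matched control points in the reference image
    return pts_mov, pts_ref
```

Sampling on a regular grid is what yields the uniform, widely distributed control points the method relies on.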
  • control point pairs determined from the reference image and the image to be registered may not be accurate, and there may be mismatches.
  • the target control point pair is obtained based on the first optical flow field of the image to be registered and the second optical flow field of the reference image. The target control point pair is used as the actual control point pair for subsequent generation of mapping relationships. Mismatched control point pairs are filtered out and no longer used.
  • the matched control point pairs in S130 may specifically be the target control point pairs obtained after eliminating mismatched control point pairs. Based on the target control point pair, the thin plate spline interpolation method is used to obtain the first mapping relationship between the reference image and the image to be registered.
  • The first optical flow field of the image to be registered and the second optical flow field of the reference image are used to filter the control point pairs and eliminate mismatched control point pairs.
  • Specifically, the first optical flow vector (u1, v1) of the first optical flow control point located in the image to be registered and the second optical flow vector (u2, v2) of the second optical flow control point located in the reference image can be obtained; whether the control point pair is a mismatched pair is judged according to (u1, v1) and (u2, v2), thereby deciding whether to eliminate the pair.
  • if the first optical flow vector (u1, v1) and the second optical flow vector (u2, v2) do not meet the preset conditions, the control point pair is determined to be a mismatched control point pair, and the mismatched control point pair is eliminated.
  • If the first optical flow vector (u1, v1) and the second optical flow vector (u2, v2) meet the preset conditions, it is determined that the control point pair is not a mismatched pair; the pair is determined as a target control point pair and retained. Therefore, before mapping the image to be registered, its control points are initially screened to ensure their accuracy, thereby ensuring the accuracy of image registration.
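  • One way to realize this screening is a forward-backward consistency check: for a correctly matched pair, following (u1, v1) forward and (u2, v2) back should land near the starting point, so the sum (u1, v1) + (u2, v2) should be short. A NumPy sketch (the threshold value is an illustrative assumption, not from the patent):

```python
import numpy as np

def consistent_pairs(pts_mov, flow_mov, flow_ref, max_err=1.0):
    """Keep control point pairs whose forward and backward flows agree.

    pts_mov: (n, 2) control point (x, y) coordinates in the image to be
    registered; flow_mov / flow_ref: (H, W, 2) first / second flow fields.
    Returns the retained points in both images and the boolean keep mask.
    """
    x = pts_mov[:, 0].astype(int)
    y = pts_mov[:, 1].astype(int)
    f1 = flow_mov[y, x]                      # (u1, v1) at each control point
    pts_ref = pts_mov + f1                   # matched point in the reference
    xr = np.clip(pts_ref[:, 0].round().astype(int), 0, flow_ref.shape[1] - 1)
    yr = np.clip(pts_ref[:, 1].round().astype(int), 0, flow_ref.shape[0] - 1)
    f2 = flow_ref[yr, xr]                    # (u2, v2) at the matched point
    err = np.linalg.norm(f1 + f2, axis=1)    # round-trip residual length
    keep = err < max_err
    return pts_mov[keep], pts_ref[keep], keep
```

Pairs in occluded or mismatched regions typically fail this round-trip test and are discarded before the spline is fitted.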
  • the first mapping relationship obtained in S130 may be a global mapping relationship. Based on the optical flow method and combined with the thin plate spline interpolation method, this application can extend the mapping of overlapping areas to non-overlapping areas and achieve global alignment of the image to be registered and the reference image.
  • The first optical flow control points located in the overlapping area of the image to be registered are selected from all target control points, and the thin plate spline interpolation method is used to interpolate them across the entire image to be registered, obtaining the global mapping relationship between the reference image and the image to be registered.
  • Before using the thin plate spline interpolation method to interpolate the first optical flow control points in the overlapping area across the entire image to be registered, all first optical flow control points may first be filtered to further improve the accuracy of image registration.
  • the thin plate spline interpolation method can be used to determine the abnormal control points among all the first optical flow control points, and then eliminate the abnormal control points from all the first optical flow control points.
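  • Extending the overlap-area control points to the whole image can be sketched with SciPy's RBFInterpolator, whose thin_plate_spline kernel implements TPS. This is a sketch, not the patent's implementation; interpolating the displacement field (rather than absolute coordinates) is an assumption about how the mapping is represented:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def dense_tps_map(pts_mov, pts_ref, shape):
    """Extend sparse control-point displacements to every pixel via TPS.

    pts_mov, pts_ref: (n, 2) matched (x, y) control points (overlap area).
    shape: (H, W) of the image to be registered. Returns map_x, map_y:
    for each pixel, where to sample in the reference frame; the smooth
    spline extrapolates into the non-overlapping areas as well.
    """
    interp = RBFInterpolator(pts_mov, pts_ref - pts_mov,
                             kernel='thin_plate_spline')
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    mapped = grid + interp(grid)           # smooth displacement everywhere
    return mapped[:, 0].reshape(H, W), mapped[:, 1].reshape(H, W)
```

Because TPS extrapolates smoothly, pixels outside the overlap receive a consistent mapping instead of being left untransformed, which is the global-alignment behaviour described above.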
  • Figure 3 is a schematic diagram of an optical flow control point provided by an embodiment of the present application. As shown in Figure 3, the sampling, matching and filtering of optical flow control points can be achieved in the overlapping areas of the images to be registered.
  • The number of control points retained for interpolation can be set according to requirements: the more optical flow control points left after filtering, the greater the amount of computation required for thin plate spline interpolation and the longer the calculation time.
  • the judgment criteria for abnormal control points can be set manually. In order to shorten the calculation time and speed up the registration efficiency, when determining the abnormal control points, the judgment criteria can be set more strictly to eliminate more optical flow control points. But on the other hand, the more optical flow control points there are, the more accurate the generated first mapping relationship will be. Therefore, in order to improve the accuracy of image registration, the judgment criteria can also be set loosely to leave more optical flow control points. Specifically, users can adjust the judgment criteria for abnormal control points as needed to achieve a balance between speed and accuracy of image registration.
  • The image registration method of this application can align all areas of the reference image and the image to be registered, and uses the thin plate spline interpolation method to obtain a smooth mapping, avoiding the deformation and distortion of non-overlapping areas that registering only the overlapping areas would cause.
  • the pixel points in the image to be registered can be mapped to obtain a registration image aligned with the reference image.
  • the registered image is consistent with the image of the same part in the reference image, and the relative positions and grayscale trends between pixels are consistent, and can be used for subsequent image splicing, image fusion and other processing.
  • the registration image and the reference image can be spliced under the same spatial coordinate system, overlapping the images of the same part, and splicing the images of different parts to obtain a spliced image.
  • the image to be registered can also be directly mapped to the spatial coordinate system where the reference image is located based on the first mapping relationship, thereby realizing registration and splicing of the image to be registered and the reference image.
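  • Applying the first mapping relationship to resample the image can be sketched as a per-pixel gather (nearest-neighbour in plain NumPy; a real implementation would typically use cv2.remap with bilinear interpolation):

```python
import numpy as np

def warp_nearest(image, map_x, map_y):
    """Resample `image` with a per-pixel mapping (nearest neighbour).

    out[y, x] = image[map_y[y, x], map_x[y, x]]: the maps name, for each
    pixel of the output/registered image, the source coordinate to read.
    Coordinates falling outside the image are clipped to the border.
    """
    H, W = image.shape[:2]
    xs = np.clip(np.asarray(map_x).round().astype(int), 0, W - 1)
    ys = np.clip(np.asarray(map_y).round().astype(int), 0, H - 1)
    return image[ys, xs]
```

The same gather works for grayscale or multi-channel images, since indexing with 2-D coordinate arrays preserves any trailing channel axis.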
  • the first mapping relationship may also be a local mapping relationship.
  • Figure 4 is a schematic diagram of the first image stitching process provided by an embodiment of the present application.
  • the alignment and splicing can be initially completed through feature points or other methods.
  • Based on the preliminary splicing posture data Rs, the images are stitched into a low-resolution panorama. Using the image registration method provided by this application, the local mapping relationship between every two images is then obtained by combining the optical flow method and the thin plate spline interpolation method, and local alignment of all images is achieved at low resolution.
  • FIG. 5 is a schematic diagram of the second image stitching process provided by an embodiment of the present application.
  • Uniform and widely distributed control points are obtained in each image based on the optical flow method (as shown in Figure 5), and then, through the determined control points, thin plate spline interpolation is used to obtain the local mapping relationship (local map) for registration.
  • The obtained local mapping relationship can be combined with the preliminary splicing posture data Rs to obtain the global mapping relationship (global map) corresponding to each high-definition image. Based on the global mapping relationships, the multiple high-definition images are globally mapped to obtain a high-resolution panorama.
  • FIG. 6 is a schematic flowchart of the second image registration method provided by an embodiment of the present application.
  • the image registration method may include:
  • The reference image and the image to be registered in the embodiment of the present application can be collected by remote sensing image acquisition devices such as infrared cameras, infrared thermal imaging cameras, and high-resolution visible light cameras. The at least two collected images can be shot continuously, or at short intervals, of the same shooting scene.
  • multiple images may be acquired, and the reference image and the image to be registered are determined from the multiple images.
  • the reference image and the image to be registered may be any two images selected by the device from a set of cached images cached in the background for synthesizing the panoramic image during the process of capturing a panoramic image.
  • The reference image and the image to be registered may be two images of the same scene captured by an image acquisition device at different angles. That is, the reference image and the image to be registered both include images of the same part of the scene and images of different parts of the scene; their image contents overlap but are not exactly the same. Therefore, there are overlapping areas and non-overlapping areas in the reference image and the image to be registered.
  • Figure 2 is a schematic diagram of a scene provided by an embodiment of the present application.
  • the image inside the rectangular frame is the image of the same part of the same scene in the two images
  • the image outside the rectangular frame is the image of different parts of the same scene in the two images.
  • the two images in Figure 2 can be used as the reference image and the image to be registered respectively.
  • When the left image is the reference image and the right image is the image to be registered, the right image is aligned toward the left image; when the left image is the image to be registered and the right image is the reference image, the left image is aligned toward the right image.
  • the first optical flow field includes the first optical flow vector of each pixel in the image to be registered.
  • the second optical flow field includes the second optical flow vector of each pixel in the reference image.
  • the optical flow method is used to calculate the optical flow fields of the reference image and the image to be registered, thereby determining the relative motion relationship between the reference image and the image to be registered.
  • this embodiment of the present application calculates bidirectional optical flow for the reference image and the image to be registered.
  • the optical flow calculation methods that can be used include DIS (Dense Inverse Search-based method) optical flow algorithm, RAFT (Recurrent All-Pairs Field Transforms for Optical Flow) optical flow algorithm, etc.
  • the DIS optical flow algorithm has better real-time performance, while the RAFT optical flow algorithm has higher accuracy.
  • One step of calculating the bidirectional optical flow in this application includes using the reference image as a reference to perform optical flow calculation on the image to be registered, so as to obtain the first optical flow field of the image to be registered.
  • the first optical flow field includes the first optical flow vector (u1, v1) of each pixel in the image to be registered.
  • Another step in calculating the bidirectional optical flow includes performing optical flow calculation on the reference image using the image to be registered as a reference to obtain the second optical flow field of the reference image.
  • the second optical flow field includes the second optical flow vector (u2, v2) of each pixel in the reference image.
  • In this way, the first optical flow vectors of all pixels in the image to be registered and the second optical flow vectors of all pixels in the reference image are determined.
  • Each pair of control points includes a first optical flow control point located in the image to be registered and a second optical flow control point located in the reference image.
  • The pixels in the reference image and the image to be registered can be sampled at equal intervals based on the first optical flow field and the second optical flow field; at each sampling step, a first optical flow control point is determined in the image to be registered and a second optical flow control point is determined in the reference image.
  • the first optical flow control point and the second optical flow control point obtained by corresponding sampling form a control point pair.
  • control point pairs determined from the reference image and the image to be registered may not be accurate, and there may be mismatches.
  • the target control point pair is obtained based on the first optical flow field of the image to be registered and the second optical flow field of the reference image. The target control point pair is used as the actual control point pair for subsequent generation of mapping relationships. Mismatched control point pairs are filtered out and no longer used.
  • The first optical flow field of the image to be registered and the second optical flow field of the reference image are used to filter the control point pairs and eliminate mismatched control point pairs.
  • Specifically, the first optical flow vector (u1, v1) of the first optical flow control point located in the image to be registered and the second optical flow vector (u2, v2) of the second optical flow control point located in the reference image can be obtained; whether the control point pair is a mismatched pair is judged according to (u1, v1) and (u2, v2), thereby deciding whether to eliminate the pair.
  • The step of determining whether the first optical flow vector and the second optical flow vector meet the preset condition may include:
  • obtaining the first length of the first optical flow vector; obtaining the first vector sum of the first optical flow vector and the second optical flow vector, and obtaining the second length of that vector sum; and determining from the first length and the second length whether the first optical flow vector and the second optical flow vector satisfy the preset condition.
  • If the first length is less than the first preset threshold and the second length is less than the second preset threshold, it is determined that the first optical flow vector and the second optical flow vector satisfy the preset condition.
  • Alternatively, the step of determining whether the two vectors meet the preset condition may include: obtaining the first length of the first optical flow vector; generating a second mapping relationship from the first optical flow vector; mapping the second optical flow vector according to the second mapping relationship to obtain its mapping vector; obtaining the second vector sum of the first optical flow vector and the mapping vector, and obtaining the third length of that sum; and determining from the first length and the third length whether the preset condition is satisfied.
  • If the first length is less than the first preset threshold and the third length is less than the third preset threshold, it is determined that the first optical flow vector and the second optical flow vector satisfy the preset condition.
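The first screening variant above amounts to a forward-backward consistency test: a correct match should have a bounded forward flow, and the backward flow at the matched point should nearly cancel it. A minimal sketch (the threshold values here are illustrative defaults, not the patent's):

```python
import numpy as np

def keep_pair(f1, f2, t1=20.0, t2=4.0):
    """Screen one control point pair by its two optical flow vectors.

    f1 : first optical flow vector (u1, v1) at the first optical flow
         control point in the image to be registered.
    f2 : second optical flow vector (u2, v2) at the matched second
         optical flow control point in the reference image.
    Keeps the pair only if the first length |f1| < t1 and the second
    length |f1 + f2| < t2 (for a correct match f2 is roughly -f1, so
    the vector sum should be short).
    """
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    first_length = np.linalg.norm(f1)
    second_length = np.linalg.norm(f1 + f2)
    return bool(first_length < t1 and second_length < t2)
```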
  • In either case, the vector length of the first optical flow vector (the first length) should be less than the first preset threshold.
  • The first optical flow vector may be divided into a horizontal optical flow vector and a vertical optical flow vector,
  • and the first preset threshold may correspondingly include a horizontal preset threshold and a vertical preset threshold.
  • The condition that the first length of the first optical flow vector is less than the first preset threshold can then be replaced by: the vector length of the horizontal optical flow vector in the horizontal direction is less than the horizontal preset threshold, and/or the vector length of the vertical optical flow vector in the vertical direction is less than the vertical preset threshold.
  • The first preset threshold may be a predetermined a priori value.
  • For example, the horizontal preset threshold and the vertical preset threshold can be set according to the camera's shooting posture.
  • The horizontal preset threshold can be understood as limiting the solution space of the optical flow in the horizontal direction,
  • and the vertical preset threshold as limiting the solution space of the optical flow in the vertical direction.
  • When the camera swings left and right while shooting, the horizontal preset threshold can be set larger. Since the shooting height does not change as the camera swings left and right, the vertical optical flow component should not be large,
  • so the vertical preset threshold can be set smaller, limiting the solution space of the optical flow in the vertical direction and eliminating optical flow vectors that are too long vertically.
  • Likewise, when the camera swings up and down while shooting, the vertical preset threshold can be set larger. Since there is only slight movement in the horizontal direction, the horizontal optical flow component should not be large,
  • so the horizontal preset threshold can be set smaller, limiting the solution space of the optical flow in the horizontal direction and eliminating optical flow vectors that are too long horizontally.
  • The second preset threshold is greater than the third preset threshold. That is, the second preset threshold, applied to the length of the vector sum when the second optical flow vector is not mapped, is greater than the third preset threshold, applied to the length of the vector sum when the second optical flow vector is mapped.
  • For example, the third preset threshold can be set to 1 and the second preset threshold to 4.
  • For a control point pair whose first optical flow vector and second optical flow vector do not meet the preset condition, the pair is determined to be a mismatched control point pair and is eliminated.
  • For a control point pair whose first optical flow vector and second optical flow vector satisfy the preset condition, the pair is determined to be a target control point pair and is retained.
  • By combining thin plate spline interpolation with the optical flow method, this application can extend the mapping of the overlapping region to the non-overlapping region and achieve global alignment of the image to be registered with the reference image.
  • The first optical flow control points located in the overlapping region of the image to be registered are obtained from all target control point pairs. Then, based on these control points obtained by the optical flow method, thin plate spline interpolation is applied to them to register the image to be registered with the reference image.
  • Before thin plate spline interpolation is used to interpolate the first optical flow control points of the overlapping region over the entire image to be registered to obtain the global mapping relationship between the reference image and the image to be registered, all first optical flow control points may first be screened to further improve the accuracy of image registration.
  • Specifically, thin plate spline interpolation can be used to determine the abnormal control points among all first optical flow control points.
  • ωi is the weight corresponding to the i-th control point;
  • α1, α2, α3 are weights calculated from the control points;
  • p′i is the position of the i-th control point.
  • The weight ω of a non-abnormal control point satisfies a normal distribution with mean 0 and standard deviation σ, so the probability of {|ω/σ| > t} is 2(1 − Φ(t)).
  • t is a constant; for example, t can be set to 3.
  • Optionally, t can also be set to other values as needed.
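The abnormal-point rule above can be sketched directly: given the TPS weights ω of all first optical flow control points, estimate σ and flag points with |ω/σ| > t (here t = 3, as in the text; the function name is illustrative):

```python
import numpy as np

def flag_abnormal(weights, t=3.0):
    """Flag abnormal control points from their TPS weights.

    Assumes the weights of non-abnormal control points follow a normal
    distribution with mean 0 and standard deviation sigma; a point with
    |w / sigma| > t is treated as abnormal and will be eliminated.
    """
    w = np.asarray(weights, dtype=float)
    sigma = w.std()
    if sigma == 0.0:
        return np.zeros(w.shape, dtype=bool)
    return np.abs(w / sigma) > t
```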
  • After screening, the abnormal control points are eliminated from all first optical flow control points.
  • Thin plate spline interpolation is then used to interpolate the first optical flow control points of the overlapping region over the entire image to be registered, obtaining the global mapping relationship between the reference image and the image to be registered.
  • Among all target control point pairs, each first optical flow control point located in the overlapping region of the image to be registered is substituted into Equation 3 above to obtain its corresponding weight.
  • Together with the solved first weight α1, second weight α2, and third weight α3, these weights are compared against the screening criterion, and the abnormal control points are eliminated in S211, yielding the set of first optical flow control points with abnormal points removed.
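The TPS machinery referenced above (the mapping of Equation 1 and the linear system of Equation 3) can be sketched in a few lines: solving the system gives the screening quantities ω, and refitting on the retained control points and evaluating at every pixel gives the dense global mapping. This is a minimal NumPy illustration, not the patent's implementation:

```python
import numpy as np

def _rbf(d):
    # phi(r) = r^2 * log(r), with phi(0) = 0.
    return np.where(d > 0.0,
                    d * d * np.log(np.maximum(d, np.finfo(float).tiny)),
                    0.0)

def tps_fit(points, values):
    """Solve Equation 3: [[K, P], [P^T, 0]] [w; a] = [f; 0].

    points : (n, 2) control point positions p'_i.
    values : (n, 2) values at the control points (here, the optical
             flow targets), f = (g_1, ..., g_n)^T.
    Returns RBF weights w (n, 2) and affine weights a (3, 2).
    """
    n = len(points)
    K = _rbf(np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), points])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, values.shape[1]))
    b[:n] = values
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def tps_eval(points, w, a, x):
    """Evaluate Equation 1, g(x) = a1 + a2*x + a3*y + sum_i w_i*phi_i(x)."""
    U = _rbf(np.linalg.norm(x[:, None, :] - points[None, :, :], axis=-1))
    return U @ w + np.hstack([np.ones((len(x), 1)), x]) @ a
```

Because the TPS includes the affine terms α1, α2, α3, evaluating `tps_eval` over the whole pixel grid extends the mapping smoothly beyond the overlapping region that contains the control points.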
  • After thin plate spline interpolation is applied, the image registration method of this application can align all regions of the reference image and the image to be registered; the smooth mapping obtained by thin plate spline interpolation prevents the registration of the overlapping region from causing deformation and distortion of the non-overlapping region.
  • Based on the first mapping relationship, the pixels in the image to be registered can be mapped to obtain a registration image aligned with the reference image.
  • In the registration image, the portions depicting the same part as the reference image are consistent with it: the relative positions and grayscale trends of the pixels agree, so the result can be used for subsequent image stitching, image fusion, and other processing.
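The per-pixel mapping step can be sketched as a bilinear resampling. This is a minimal illustration; it assumes the mapping is given in backward form, i.e. for every output pixel the (x, y) position to sample in the image to be registered, which is the usual form for interpolation-based warping:

```python
import numpy as np

def warp_image(src, map_xy):
    """Resample `src` through a dense backward map with bilinear
    interpolation, producing an image aligned with the reference.

    src    : (H, W) grayscale image to be registered.
    map_xy : (H, W, 2) sampling positions (x, y) in `src` for every
             output pixel.
    """
    h, w = src.shape
    x = np.clip(map_xy[..., 0], 0, w - 1)
    y = np.clip(map_xy[..., 1], 0, h - 1)
    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding pixels.
    top = src[y0, x0] * (1 - fx) + src[y0, x1] * fx
    bot = src[y1, x0] * (1 - fx) + src[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```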
  • For example, the registration image and the reference image can be spliced in the same spatial coordinate system, overlapping the portions showing the same part and joining the portions showing different parts, to obtain a spliced image.
  • The image registration method provided by the embodiment of the present application first acquires the reference image and the image to be registered; then determines the matching control point pairs in the reference image and the image to be registered according to the optical flow method; then, based on the control point pairs, uses thin plate spline interpolation to obtain the first mapping relationship between the reference image and the image to be registered; and finally registers the image to be registered to the reference image based on the first mapping relationship.
  • The embodiment of the present application combines the optical flow method and thin plate spline interpolation: the optical flow method yields uniform and widely distributed control points, and thin plate spline interpolation yields a smooth mapping on the basis of those control points, thereby reducing image deformation while achieving registration and improving the accuracy of image registration.
  • Based on the above image registration method, an embodiment of the present application also provides an image registration device.
  • The terms used below have the same meanings as in the image registration method described above.
  • FIG. 7 is a first structural schematic diagram of the image registration device 300 provided by an embodiment of the present application.
  • the image registration device 300 includes an acquisition module 301, a determination module 302, a mapping module 303 and a registration module 304:
  • The acquisition module 301 is used to acquire the reference image and the image to be registered;
  • the determination module 302 is used to determine the matching control point pairs in the reference image and the image to be registered according to the optical flow method;
  • the mapping module 303 is used to obtain, based on the control point pairs, the first mapping relationship between the reference image and the image to be registered using thin plate spline interpolation;
  • the registration module 304 is used to register the image to be registered to the reference image based on the first mapping relationship.
  • In some embodiments, the determination module 302 can be used to:
  • sample, in the overlapping region, the pixels in the reference image and the image to be registered at equal intervals based on the first optical flow field and the second optical flow field, obtaining the matching control point pairs in the reference image and the image to be registered,
  • wherein each control point pair includes a first optical flow control point located in the image to be registered and a second optical flow control point located in the reference image.
  • Before sampling, the first optical flow field of the image to be registered and the second optical flow field of the reference image are respectively calculated.
  • When calculating the optical flow fields, the determination module 302 can be used to:
  • with the reference image as reference, perform optical flow calculation on the image to be registered to obtain the first optical flow field of the image to be registered, where the first optical flow field includes the first optical flow vector of each pixel in the image to be registered; and, with the image to be registered as reference, perform optical flow calculation on the reference image to obtain the second optical flow field of the reference image, where the second optical flow field includes the second optical flow vector of each pixel in the reference image.
  • FIG. 8 is a second structural schematic diagram of the image registration device 300 provided by an embodiment of the present application.
  • The control point pairs include mismatched control point pairs and target control point pairs.
  • The image registration device 300 further includes a first elimination module 305. After the matching control point pairs in the reference image and the image to be registered are obtained, the first elimination module 305 can be used to: obtain the target control point pairs based on the first optical flow field of the image to be registered and the second optical flow field of the reference image.
  • When using thin plate spline interpolation to obtain the first mapping relationship between the reference image and the image to be registered based on the control point pairs, the mapping module 303 can be used to:
  • based on the target control point pairs, use thin plate spline interpolation to obtain the first mapping relationship between the reference image and the image to be registered.
  • When obtaining the target control point pairs based on the first optical flow field of the image to be registered and the second optical flow field of the reference image, the first elimination module 305 can be used to:
  • determine control point pairs whose first and second optical flow vectors do not satisfy the preset condition as mismatched control point pairs and eliminate them;
  • determine control point pairs whose first and second optical flow vectors satisfy the preset condition as target control point pairs and retain them.
  • When determining a control point pair as a target control point pair and retaining it, the first elimination module 305 can be used to:
  • if the first length is less than the first preset threshold and the second length is less than the second preset threshold, determine that the first and second optical flow vectors meet the preset condition, determine the control point pair as a target control point pair, and retain it.
  • Alternatively, when determining a control point pair as a target control point pair and retaining it, the first elimination module 305 can be used to:
  • perform mapping transformation on the second optical flow vector to obtain a mapping vector of the second optical flow vector; and,
  • if the first length is less than the first preset threshold and the third length is less than the third preset threshold, determine the control point pair as a target control point pair and retain it.
  • In some embodiments, the first mapping relationship is a global mapping relationship.
  • When obtaining the global mapping relationship, the mapping module 303 can be used to:
  • use thin plate spline interpolation to interpolate the first optical flow control points of the overlapping region over the entire image to be registered, obtaining the global mapping relationship between the reference image and the image to be registered.
  • The image registration device 300 further includes a second elimination module 306.
  • Before the interpolation, the second elimination module 306 can be used to: determine the abnormal control points among all first optical flow control points using thin plate spline interpolation and eliminate them.
  • When registering the image to be registered to the reference image based on the first mapping relationship, the registration module 304 can be used to:
  • map the pixels in the image to be registered to obtain a registration image aligned with the reference image.
  • The image registration device 300 further includes a splicing module 307.
  • After the registration image is obtained, the splicing module 307 can be used to:
  • splice the registration image and the reference image in the same spatial coordinate system to obtain a spliced image.
  • In the image registration device 300 provided by the embodiment of the present application, the acquisition module 301 first obtains the reference image and the image to be registered; the determination module 302 then determines the matching control point pairs in the reference image and the image to be registered according to the optical flow method; the mapping module 303 then uses thin plate spline interpolation to obtain the first mapping relationship between the reference image and the image to be registered based on the control point pairs; and the registration module 304 registers the image to be registered to the reference image based on the first mapping relationship.
  • The embodiment of the present application combines the optical flow method and thin plate spline interpolation: the optical flow method yields uniform and widely distributed control points, and thin plate spline interpolation yields a smooth mapping on the basis of those control points, thereby reducing image deformation while achieving registration and improving the accuracy of image registration.
  • An embodiment of the present application also provides an electronic device 400.
  • the electronic device 400 includes a processor 401 and a memory.
  • the processor 401 is electrically connected to the memory.
  • The processor 401 is the control center of the electronic device 400; it uses various interfaces and lines to connect the parts of the entire electronic device, and performs the various functions of the electronic device 400 and processes data by running or loading the computer programs stored in the memory 402 and invoking the data stored in the memory 402, thereby monitoring the electronic device 400 as a whole.
  • the memory 402 can be used to store software programs and modules.
  • the processor 401 executes various functional applications and data processing by running the computer programs and modules stored in the memory 402 .
  • The memory 402 may mainly include a program storage area and a data storage area. The program storage area may store the operating system and the computer programs required for at least one function (such as a sound playback function, an image playback function, etc.); the data storage area may store data created based on the use of the electronic device, etc.
  • The memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
  • In this embodiment, the memory 402 of the electronic device 400 stores a computer program executable on the processor 401, and the processor 401 executes the computer program stored in the memory 402 to implement the following functions:
  • acquire the reference image and the image to be registered; determine the matching control point pairs in the reference image and the image to be registered according to the optical flow method; based on the control point pairs, use thin plate spline interpolation to obtain the first mapping relationship between the reference image and the image to be registered; and,
  • based on the first mapping relationship, register the image to be registered to the reference image.
  • the electronic device 400 may also include: a display 403 , a radio frequency circuit 404 , an audio circuit 405 and a power supply 406 .
  • the display 403, the radio frequency circuit 404, the audio circuit 405 and the power supply 406 are electrically connected to the processor 401 respectively.
  • The display 403 can be used to display information input by the user or provided to the user, as well as various graphical user interfaces, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display 403 may include a display panel.
  • the display panel may be configured in the form of a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED).
  • The radio frequency circuit 404 can be used to send and receive radio frequency signals, so as to establish wireless communication with network equipment or other electronic equipment and exchange signals with them.
  • the audio circuit 405 can be used to provide an audio interface between the user and the electronic device through speakers and microphones.
  • the power supply 406 can be used to power various components of the electronic device 400 .
  • the power supply 406 can be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management can be implemented through the power management system.
  • the electronic device 400 may also include a camera, a Bluetooth module, etc., which will not be described again here.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • The computer program is executed by a processor to implement the image registration method in any of the above embodiments, for example: acquire the reference image and the image to be registered; determine the matching control point pairs in the reference image and the image to be registered according to the optical flow method; based on the control point pairs, use thin plate spline interpolation to obtain the first mapping relationship between the reference image and the image to be registered; and, based on the first mapping relationship, register the image to be registered to the reference image.
  • the computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory (Read Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • A person of ordinary skill in the art can understand that all or part of the process of implementing the image registration method in the embodiments of the present application can be accomplished by a computer program controlling the relevant hardware.
  • The computer program can be stored in a computer-readable storage medium, such as the memory of an electronic device, and executed by at least one processor in the electronic device; the execution process may include the process of the embodiments of the image registration method.
  • The computer-readable storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, etc.
  • For the image registration device according to the embodiments of the present application, its functional modules can be integrated into one processing chip, each module can exist physically alone, or two or more modules can be integrated into one module.
  • The above integrated modules can be implemented in the form of hardware or software functional modules. If an integrated module is implemented as a software functional module and is sold or used as an independent product, it can also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
  • A module can be thought of as a software object executing on the computing system.
  • The different components, modules, engines, and services described herein can be regarded as objects implemented on the computing system.
  • The device and method described herein are preferably implemented in software, but can of course also be implemented in hardware, all of which fall within the scope of protection of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide an image registration method, device, storage medium, and electronic device. The embodiments first acquire a reference image and an image to be registered; then determine the matching control point pairs in the reference image and the image to be registered according to the optical flow method; then, based on the control point pairs, use thin plate spline interpolation to obtain a first mapping relationship between the reference image and the image to be registered; and finally register the image to be registered to the reference image based on the first mapping relationship. The embodiments combine the optical flow method with thin plate spline interpolation: the optical flow method yields uniformly and widely distributed control points, and thin plate spline interpolation yields a smooth mapping on the basis of those control points, thereby reducing image deformation while achieving registration and improving the accuracy of image registration.

Description

Image registration method, device, storage medium and electronic device

Technical Field

The present application relates to the field of image processing technology, and in particular to an image registration method, device, storage medium and electronic device.

Background

Image registration and its related technologies are a hot and difficult topic in image processing research. Their purpose is to compare and fuse images of the same object acquired under different conditions (different times, illumination, shooting angles, etc.). Specifically, for two images to be registered, a spatial transformation is obtained through a series of operations that maps one image onto the other, so that points corresponding to the same spatial position in the two images correspond one to one. Image registration technology is widely applied in object detection, model reconstruction, motion estimation, feature matching, tumor detection, lesion localization, angiography, geological exploration, aerial reconnaissance, and other fields.

Image registration is an important step in image processing. If the registration result is inaccurate, subsequent operations such as image stitching cannot be carried out effectively. It is therefore necessary to improve the accuracy of image registration.

It should be noted that the information disclosed in this Background section is intended only to enhance understanding of the background of the present application, and may therefore contain information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary of the Invention

Embodiments of the present application provide an image registration method, device, storage medium, and electronic device that can improve the accuracy of image registration.

An embodiment of the present application provides an image registration method, including:

acquiring a reference image and an image to be registered;

determining matching control point pairs in the reference image and the image to be registered according to the optical flow method;

based on the control point pairs, using thin plate spline interpolation to obtain a first mapping relationship between the reference image and the image to be registered;

based on the first mapping relationship, registering the image to be registered to the reference image.

An embodiment of the present application also provides an image registration device, including:

an acquisition module for acquiring a reference image and an image to be registered;

a determination module for determining matching control point pairs in the reference image and the image to be registered according to the optical flow method;

a mapping module for obtaining, based on the control point pairs, a first mapping relationship between the reference image and the image to be registered using thin plate spline interpolation;

a registration module for registering the image to be registered to the reference image based on the first mapping relationship.

An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; the computer program is executed by a processor to implement the steps of any of the image registration methods provided by the embodiments of the present application.

An embodiment of the present application also provides an electronic device including a processor, a memory, and a computer program stored in the memory and executable on the processor; the processor executes the computer program to implement the steps of any of the image registration methods provided by the embodiments of the present application.

In the embodiments of the present application, a reference image and an image to be registered are first acquired; the matching control point pairs in the reference image and the image to be registered are then determined according to the optical flow method; based on the control point pairs, thin plate spline interpolation is used to obtain a first mapping relationship between the reference image and the image to be registered; and the image to be registered is registered to the reference image based on the first mapping relationship. The embodiments combine the optical flow method with thin plate spline interpolation: the optical flow method yields uniformly and widely distributed control points, and thin plate spline interpolation yields a smooth mapping on the basis of those control points, thereby reducing image deformation while achieving registration and improving the accuracy of image registration.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a first schematic flowchart of the image registration method provided by an embodiment of the present application.

FIG. 2 is a schematic diagram of a scene provided by an embodiment of the present application.

FIG. 3 is a schematic diagram of optical flow control points provided by an embodiment of the present application.

FIG. 4 is a schematic diagram of a first image stitching process provided by an embodiment of the present application.

FIG. 5 is a schematic diagram of a second image stitching process provided by an embodiment of the present application.

FIG. 6 is a second schematic flowchart of the image registration method provided by an embodiment of the present application.

FIG. 7 is a first schematic structural diagram of the image registration device provided by an embodiment of the present application.

FIG. 8 is a second schematic structural diagram of the image registration device provided by an embodiment of the present application.

FIG. 9 is a first schematic structural diagram of the electronic device provided by an embodiment of the present application.

FIG. 10 is a second schematic structural diagram of the electronic device provided by an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the present application.

The terms "first", "second", "third", etc. (if present) in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that objects so described are interchangeable where appropriate. Furthermore, the terms "including" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process or method comprising a series of steps, or a device, terminal, or system comprising a series of modules or units, is not necessarily limited to the steps or modules and units clearly listed, and may also include steps, modules, or units not clearly listed, or other steps, modules, or units inherent to the process, method, device, terminal, or system.
Image registration maps one of two images onto the other by finding a spatial transformation, so that points corresponding to the same spatial position in the two images correspond one to one, thereby achieving information fusion.

Registration of multiple images can likewise be achieved on the basis of two-image registration. During registration, every two adjacent images can be taken as a group; by registering each pair of adjacent images, registration of a continuous sequence of images is achieved.

Image registration can be performed with grayscale-based methods, feature-based methods, the optical flow method, and so on.

Grayscale-based registration methods use the grayscale information of the entire image to establish a similarity measure between the two images. These methods require a certain correlation between the grayscale distributions of the reference image and the image to be registered, can only accommodate translation and small rotations, are computationally expensive and inefficient, and are suited to images with little detail and sparse texture; they are mainly applied in medical image registration.

Feature-based registration methods register images by extracting stable features that are little affected by image transformation, brightness change, noise, and the like, such as object edges, corner points, and centers of closed regions, and are therefore more widely applied. However, existing feature-based methods use limited feature information, for example only corner features or only contour features, so the information in the image is largely compressed and only a small part of it is used. Such methods are sensitive to errors in feature extraction and matching, so the quality of registration is not high. Moreover, they place high demands on the distribution of control points: regions with sparse control points are difficult to register.

Optical flow (Optical flow) is a concept in the detection of object motion in the visual field, describing the motion of an observed target, surface, or edge caused by motion relative to the observer. The optical flow method is very useful in pattern recognition, computer vision, and other image processing fields, and can be used for motion detection, object segmentation, computation of time-to-collision and object expansion, motion-compensated coding, stereo measurement via object surfaces and edges, and so on. However, the optical flow method cannot guarantee that the optical flow of all pixels is computed correctly. If there are occluded regions in the image, correct optical flow is even harder to obtain there. And if wrong optical flow is used to map the image, the image is easily distorted, making the mapped image insufficiently smooth and the registration result poor.

In addition, the optical flow method is only applicable to aligning the overlapping region of two images; it is difficult to also transform the non-overlapping parts. In optical-flow-based image registration, the non-overlapping region is generally left unchanged, and only the overlapping region of the two images is gradually stretched into alignment according to the optical flow. However, when the initial misalignment of the two images is large, or the overlapping region has an irregular shape, this approach of stretching only the overlapping region produces unnatural image transitions and a poor final registration result.

As an important step in image processing, if the image registration result is unsatisfactory, subsequent operations such as image stitching cannot be carried out effectively.

To solve the above problems, embodiments of the present application provide an image registration method. The image registration method provided by the present application combines the optical flow method with thin plate spline interpolation, can stretch the overlapping and non-overlapping regions simultaneously, and adjusts the overall relative position between the images, making image transitions more natural and achieving a better registration result.

Thin plate spline (TPS) interpolation is a 2D interpolation method that determines the mapping of a deformation function from corresponding sets of control points in two related images. The deformation function seeks the smooth surface of minimal bending passing through all given points. The name "thin plate" indicates that the thin plate spline approximates the behavior of a thin metal sheet forced through the same control points. The thin plate spline mapping determines the key coefficients of the mapping transformation from the source image to the target image; substituting the coordinates of any point of the source image into the formula yields the coordinates of the corresponding point in the target image, thereby aligning the two images.

The image registration method provided by the embodiments of the present application may be executed by the image registration device provided by the embodiments, or by an electronic device integrating that device. The image registration device may be implemented in hardware or software. The electronic device may be a computer device, which may be a terminal device such as a smartphone, tablet, or personal computer, or may be a server. A detailed analysis and description follow.
Please refer to FIG. 1, a first schematic flowchart of the image registration method provided by an embodiment of the present application. The image registration method may include:

S110: Acquire a reference image and an image to be registered.

In the embodiments of the present application, the reference image and the image to be registered may be collected by a remote-sensing image acquisition device such as an infrared camera, an infrared thermal imager, or a high-resolution visible-light camera; the at least two collected images may be obtained by continuous shooting or short-interval shooting of the same scene.

In one embodiment, multiple images may be acquired, and the reference image and the image to be registered determined from among them. For example, the reference image and the image to be registered may be any two images selected, during the shooting of a panoramic image, from a group of cached images buffered by the device in the background for synthesizing that panorama.

In the embodiments of the present application, the reference image and the image to be registered may be two images of the same scene captured by the image acquisition device at different angles; that is, the two images contain images of the same part of the same scene as well as images of different parts of that scene. The image contents of the reference image and the image to be registered overlap but are not identical, so there are both overlapping and non-overlapping regions between them.

Please refer to FIG. 2, a schematic diagram of a scene provided by an embodiment of the present application. In the two images of FIG. 2, the content inside the rectangular boxes shows the same part of the same scene, while the content outside the boxes shows different parts of the same scene. The two images in FIG. 2 may respectively serve as the reference image and the image to be registered. When the left image is the reference image and the right image is the image to be registered, the right image is registered toward the left image; when the left image is the image to be registered and the right image is the reference image, the left image is registered toward the right image.

S120: Determine matching control point pairs in the reference image and the image to be registered according to the optical flow method.

Optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on the observed imaging plane. The optical flow method uses the temporal variation of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, thereby computing the motion information of objects between adjacent frames.

In general, optical flow arises from the movement of objects in the scene, the motion of the camera, or both. Optical flow expresses the change of the image; since it contains information about target motion, it can be used by an observer to determine the motion of a target.

On the image plane, object motion is often reflected in the different grayscale distributions of different images in the sequence; thus the motion field in space, transferred onto the image, is expressed as the optical flow field. The optical flow field is a two-dimensional vector field reflecting the grayscale change trend at every point of the image; it can be viewed as the instantaneous velocity field produced by grayscale-carrying pixels moving on the image plane. The information it contains is the instantaneous motion velocity vector of each pixel. The instantaneous grayscale change rate at a given coordinate on the two-dimensional image plane is usually defined as the optical flow vector.

In one embodiment, the optical flow method is used to compute the optical flow fields of the reference image and the image to be registered, thereby determining the relative motion between them. As a condition for the subsequent preliminary screening of optical flow control points, the embodiments of the present application compute bidirectional optical flow for the reference image and the image to be registered.

Optical flow computation methods that may be used include the DIS (Dense Inverse Search-based method) optical flow algorithm and the RAFT (Recurrent All-Pairs Field Transforms for Optical Flow) optical flow algorithm. The DIS algorithm has better real-time performance, while the RAFT algorithm is more accurate.

One step of computing the bidirectional optical flow is: with the reference image as reference, perform optical flow computation on the image to be registered, obtaining the first optical flow field of the image to be registered. The first optical flow field contains the first optical flow vector (u1, v1) of each pixel of the image to be registered. The other step is: with the image to be registered as reference, perform optical flow computation on the reference image, obtaining the second optical flow field of the reference image. The second optical flow field contains the second optical flow vector (u2, v2) of each pixel of the reference image.

Once the first optical flow field of the image to be registered and the second optical flow field of the reference image are obtained, the first optical flow vectors of all pixels in the image to be registered and the second optical flow vectors of all pixels in the reference image are determined.
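To make the meaning of the two flow fields concrete, the toy sketch below estimates a dense flow field by exhaustive block matching on small images. This is only an illustration of what the first and second optical flow fields contain; the patent itself names DIS and RAFT, which are far more sophisticated:

```python
import numpy as np

def block_match_flow(img_a, img_b, patch=3, search=4):
    """Toy dense optical flow by exhaustive block matching.

    Returns an (H, W, 2) field of (u, v) displacements mapping each
    pixel of img_a to its best-matching position in img_b.
    """
    h, w = img_a.shape
    r = patch // 2
    pa = np.pad(img_a, r, mode='edge')
    pb = np.pad(img_b, r + search, mode='edge')
    flow = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            block = pa[y:y + patch, x:x + patch]
            best_cost, best_uv = np.inf, (0, 0)
            # Exhaustively test every displacement in the search window.
            for v in range(-search, search + 1):
                for u in range(-search, search + 1):
                    cand = pb[y + search + v:y + search + v + patch,
                              x + search + u:x + search + u + patch]
                    cost = float(np.sum((block - cand) ** 2))
                    if cost < best_cost:
                        best_cost, best_uv = cost, (u, v)
            flow[y, x] = best_uv
    return flow
```

Running it once toward the reference image gives an analogue of the first optical flow field, and once with the roles of the two images swapped gives an analogue of the second.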
Thus, in the overlapping region of the reference image and the image to be registered, the pixels of the two images can be sampled at equal intervals based on the first and second optical flow fields; each sampling determines one first optical flow control point in the image to be registered and one second optical flow control point in the reference image. The correspondingly sampled first and second optical flow control points form a matching control point pair.
In some cases, the control point pairs determined from the reference image and the image to be registered may be inaccurate and contain mismatches. To obtain an accurate mapping relationship and ensure the accuracy of registration, in one embodiment, target control point pairs are obtained based on the first optical flow field of the image to be registered and the second optical flow field of the reference image. The target control point pairs serve as the actual control point pairs for subsequently generating the mapping relationship; mismatched control point pairs are screened out and no longer used.

S130: Based on the control point pairs, use thin plate spline interpolation to obtain a first mapping relationship between the reference image and the image to be registered.

In the embodiments of the present application, the matching control point pairs in S130 may specifically be the target control point pairs obtained after eliminating mismatched pairs. Based on the target control point pairs, thin plate spline interpolation is used to obtain the first mapping relationship between the reference image and the image to be registered.

Therefore, after the control point pairs in the reference image and the image to be registered are obtained, and before the first mapping relationship is obtained via thin plate spline interpolation, the control point pairs can be screened based on the first optical flow field of the image to be registered and the second optical flow field of the reference image, eliminating the mismatched pairs.

In one embodiment, for each control point pair, the first optical flow vector (u1, v1) of the first optical flow control point in the image to be registered and the second optical flow vector (u2, v2) of the second optical flow control point in the reference image can be obtained; whether the pair is a mismatched control point pair is judged from (u1, v1) and (u2, v2), which decides whether the pair is to be eliminated.

If the first optical flow vector (u1, v1) and the second optical flow vector (u2, v2) do not satisfy the preset condition, the pair is determined to be a mismatched control point pair and is eliminated. If they do satisfy the preset condition, the pair is judged not to be a mismatch, is determined to be a target control point pair, and is retained. Thus, before the image to be registered is mapped, its control points are preliminarily screened, ensuring the accuracy of the control points and hence of the registration.

The first mapping relationship obtained in S130 may be a global mapping relationship. By combining thin plate spline interpolation with the optical flow method, the present application can extend the mapping of the overlapping region to the non-overlapping region, achieving global alignment of the image to be registered with the reference image.

Specifically, the first optical flow control points located in the overlapping region of the image to be registered are obtained from all target control point pairs; thin plate spline interpolation is then used to interpolate these first optical flow control points over the entire image to be registered, yielding the global mapping relationship between the reference image and the image to be registered.

In one embodiment, before thin plate spline interpolation is used to interpolate the first optical flow control points of the overlapping region over the entire image to be registered to obtain the global mapping relationship, all first optical flow control points may first be screened to further improve the accuracy of image registration. Specifically, thin plate spline interpolation can be used to determine the abnormal control points among all first optical flow control points, which are then eliminated from all first optical flow control points.

Please refer to FIG. 3, a schematic diagram of optical flow control points provided by an embodiment of the present application. As shown in FIG. 3, the sampling, matching, and screening of optical flow control points can be carried out in the overlapping region of the image to be registered.

It should be noted that when the present application performs thin plate spline interpolation, the number of control points generated by interpolation can be set as required. The more optical flow control points remain after screening, the greater the computation required for thin plate spline interpolation and the longer the computation time.

The criterion for judging abnormal control points can be set manually. To shorten computation time and speed up registration, the criterion can be made stricter so that more optical flow control points are eliminated. On the other hand, the more optical flow control points there are, the more accurate the generated first mapping relationship. Therefore, to improve registration accuracy, the criterion can also be relaxed to retain more optical flow control points. Users can adjust the criterion for abnormal control points as needed to balance registration speed against accuracy.

After thin plate spline interpolation is applied, the image registration method of the present application can align all regions of the reference image and the image to be registered; the smooth mapping obtained by thin plate spline interpolation prevents the registration of the overlapping region from causing deformation and distortion of the non-overlapping region.
S140: Register the image to be registered to the reference image based on the first mapping relationship.

When registering the image to be registered to the reference image based on the first mapping relationship obtained in S130, the pixels in the image to be registered can be mapped to obtain a registration image aligned with the reference image. In the registration image, the portions depicting the same part as the reference image are consistent with it in the relative positions and grayscale trends of pixels, and the result can be used for subsequent image stitching, image fusion, and other processing.

For example, based on the already aligned registration image and reference image, the two can be stitched in the same spatial coordinate system, overlapping the portions showing the same part and joining the portions showing different parts, to obtain a stitched image.

In one embodiment, the image to be registered can also be mapped directly into the spatial coordinate system of the reference image based on the first mapping relationship, achieving registration and stitching of the image to be registered with the reference image in one step.

In one embodiment, the first mapping relationship may also be a local mapping relationship. Please refer to FIG. 4, a schematic diagram of a first image stitching process provided by an embodiment of the present application.

For the case where multiple high-definition images need to be stitched into a pano (panorama), alignment and stitching can first be roughly completed through feature points or other methods to obtain preliminary stitching pose data Rs. Then, based on a low-resolution panorama stitched at low resolution, the image registration method provided by the present application can be used to obtain a local mapping relationship between every two images by combining the optical flow method with thin plate spline interpolation, achieving local alignment of all images at low resolution.

Please continue to refer to FIG. 5, a schematic diagram of a second image stitching process provided by an embodiment of the present application. When local alignment of all images is achieved at low resolution, uniform and widely distributed control points are obtained in each image based on the optical flow method (as shown in FIG. 5); then, from the determined control points, thin plate spline interpolation is used to obtain the local mapping relationship (local map) used in registration. For the relevant steps, refer to the foregoing description, which will not be repeated here.

After the local mapping relationships are obtained in the low-resolution panorama, they can be combined with the preliminary stitching pose data Rs to obtain the global mapping relationship (global map) corresponding to each high-definition image; based on the global mapping relationships, global mapping of the multiple high-definition images is performed, yielding a high-resolution panorama.

The method described in the preceding embodiment is further illustrated by example below.
Please refer to FIG. 6, a second schematic flowchart of the image registration method provided by an embodiment of the present application. The image registration method may include:

S201: Acquire a reference image and an image to be registered.

In the embodiments of the present application, the reference image and the image to be registered may be collected by a remote-sensing image acquisition device such as an infrared camera, an infrared thermal imager, or a high-resolution visible-light camera; the at least two collected images may be obtained by continuous shooting or short-interval shooting of the same scene.

In one embodiment, multiple images may be acquired, and the reference image and the image to be registered determined from among them. For example, the reference image and the image to be registered may be any two images selected, during the shooting of a panoramic image, from a group of cached images buffered by the device in the background for synthesizing that panorama.

In the embodiments of the present application, the reference image and the image to be registered may be two images of the same scene captured by the image acquisition device at different angles; that is, the two images contain images of the same part of the same scene as well as images of different parts of that scene. The image contents of the two images overlap but are not identical, so there are both overlapping and non-overlapping regions between them.

Please refer to FIG. 2, a schematic diagram of a scene provided by an embodiment of the present application. In the two images of FIG. 2, the content inside the rectangular boxes shows the same part of the same scene, while the content outside the boxes shows different parts of the same scene. The two images in FIG. 2 may respectively serve as the reference image and the image to be registered. When the left image is the reference image and the right image is the image to be registered, the right image is registered toward the left image; when the left image is the image to be registered and the right image is the reference image, the left image is registered toward the right image.

S202: With the reference image as reference, perform optical flow calculation on the image to be registered to obtain the first optical flow field of the image to be registered.

The first optical flow field includes the first optical flow vector of each pixel in the image to be registered.

S203: With the image to be registered as reference, perform optical flow calculation on the reference image to obtain the second optical flow field of the reference image.

The second optical flow field includes the second optical flow vector of each pixel in the reference image.

In one embodiment, the optical flow method is used to compute the optical flow fields of the reference image and the image to be registered, thereby determining the relative motion between them. As a condition for the subsequent preliminary screening of optical flow control points, the embodiments of the present application compute bidirectional optical flow for the reference image and the image to be registered.

Optical flow computation methods that may be used include the DIS (Dense Inverse Search-based method) optical flow algorithm and the RAFT (Recurrent All-Pairs Field Transforms for Optical Flow) optical flow algorithm. The DIS algorithm has better real-time performance, while the RAFT algorithm is more accurate.

One step of computing the bidirectional optical flow is: with the reference image as reference, perform optical flow computation on the image to be registered, obtaining the first optical flow field, which contains the first optical flow vector (u1, v1) of each pixel of the image to be registered. The other step is: with the image to be registered as reference, perform optical flow computation on the reference image, obtaining the second optical flow field, which contains the second optical flow vector (u2, v2) of each pixel of the reference image.

Once the first optical flow field of the image to be registered and the second optical flow field of the reference image are obtained, the first optical flow vectors of all pixels in the image to be registered and the second optical flow vectors of all pixels in the reference image are determined.

S204: In the overlapping region of the reference image and the image to be registered, sample the pixels in the two images at equal intervals based on the first and second optical flow fields to obtain matching control point pairs in the reference image and the image to be registered.

Each control point pair includes a first optical flow control point located in the image to be registered and a second optical flow control point located in the reference image.

In the overlapping region of the reference image and the image to be registered, the pixels of the two images can be sampled at equal intervals based on the first and second optical flow fields; each sampling determines one first optical flow control point in the image to be registered and one second optical flow control point in the reference image. The correspondingly sampled first and second optical flow control points form a control point pair.

S205: For each control point pair, obtain the first optical flow vector of the first optical flow control point and the second optical flow vector of the second optical flow control point.

In some cases, the control point pairs determined from the reference image and the image to be registered may be inaccurate and contain mismatches. To obtain an accurate mapping relationship and ensure the accuracy of registration, in one embodiment, target control point pairs are obtained based on the first optical flow field of the image to be registered and the second optical flow field of the reference image. The target control point pairs serve as the actual control point pairs for subsequently generating the mapping relationship; mismatched control point pairs are screened out and no longer used.

Therefore, after the control point pairs are obtained, and before the first mapping relationship is obtained via thin plate spline interpolation, the control point pairs can be screened based on the first optical flow field of the image to be registered and the second optical flow field of the reference image, eliminating the mismatched pairs.

S206: Determine whether the first optical flow vector and the second optical flow vector satisfy the preset condition. If not, go to S207; if so, go to S208.
在一实施例中,对于每一对控制点对,可以获取该控制点对中位于待配准图像中的第一光流控制点的第一光流矢量(u1,v1)以及位于基准图像中的第二光流控制点的第二光流矢量(u2,v2),根据(u1,v1)和(u2,v2)判断该控制点对是否为误匹配控制点对,从而决定是否要对该控制点对进行剔除。
在一实施例中,S206、判断第一光流矢量和第二光流矢量是否满足预设条件的步骤可以包括:
获取第一光流矢量的第一长度;获取第一光流矢量与第二光流矢量的第一矢量和,并获取第一矢量和的第二长度;根据第一长度和第二长度判断第一光流矢量和第二光流矢量是否满足预设条件。
其中,若第一长度小于第一预设阈值且第二长度小于第二预设阈值,则判定第一光流矢量和第二光流矢量满足预设条件。
在一实施例中,S206、判断第一光流矢量和第二光流矢量是否满足预设条件的步骤可以包括:
获取第一光流矢量的第一长度;根据第一光流矢量生成第二映射关系;根据第二映射关系,对第二光流矢量进行映射变换,得到第二光流矢量的映射矢量;获取第一光流矢量与映射矢量的第二矢量和,并获取第二矢量和的第三长度;根据第一长度和第三长度判断第一光流矢量和第二光流矢量是否满足预设条件。
其中,若第一长度小于第一预设阈值且第三长度小于第三预设阈值,则判定第一光流矢量和第二光流矢量满足预设条件。
其中,第一光流矢量的矢量长度(第一长度)应小于第一预设阈值。
在一实施例中,第一光流矢量可分为水平光流矢量和垂直光流矢量,第一预设阈值可以包括水平预设阈值和垂直预设阈值。上述第一光流矢量的第一长度小于第一预设阈值的条件,也可替换为:水平光流矢量在水平方向上的矢量长度小于水平预设阈值,和/或垂直光流矢量在垂直方向上的矢量长度小于垂直预设阈值。
在一实施例中,第一预设阈值可以是事先确定的先验值。例如,可以根据相机拍摄姿态设定水平预设阈值和垂直预设阈值。水平预设阈值可以理解为在水平方向上限制光流的解空间,垂直预设阈值可以理解为在垂直方向上限制光流的解空间。
例如,相机拍摄姿态为左右摆动拍摄,则水平预设阈值可以设置得较大,而由于相机左右摆动拍摄,拍摄高度未变,因而垂直方向上的垂直光流矢量不应太大,垂直预设阈值可以设置得较小,从而在垂直方向上限制光流的解空间,剔除掉在垂直方向上长度过大的光流矢量。
上下摆动拍摄同理。当相机拍摄姿态为上下摆动拍摄时,垂直预设阈值可以设置得较大,而由于相机上下摆动拍摄,水平方向上仅存在微小移动,因而水平方向上的水平光流矢量不应太大,水平预设阈值可以设置得较小,从而在水平方向上限制光流的解空间,剔除掉在水平方向上长度过大的光流矢量。
在一实施例中,第二预设阈值大于第三预设阈值。即,不对第二光流矢量进行映射变换时,矢量之和的长度对应的第二预设阈值,要大于对第二光流矢量进行映射变换时矢量之和的长度对应的第三预设阈值。例如,第三预设阈值可以设置为1,第二预设阈值可以设置为4。
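上述两种预设条件可以抽象为如下判断函数(Python 草图;阈值数值仅为示例,第二种条件中的映射矢量此处由调用方传入,具体应以实际的第二映射关系为准):

```python
import numpy as np

def satisfies_condition_a(f1, f2, t1=40.0, t2=4.0):
    """条件一:第一长度 |f1| < t1,且第一矢量和长度 |f1 + f2| < t2。"""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return bool(np.linalg.norm(f1) < t1 and np.linalg.norm(f1 + f2) < t2)

def satisfies_condition_b(f1, f2_mapped, t1=40.0, t3=1.0):
    """条件二:第一长度 |f1| < t1,且第二矢量和长度 |f1 + f2'| < t3。
    f2_mapped 为第二光流矢量经第二映射关系变换后的映射矢量。"""
    f1, m = np.asarray(f1, float), np.asarray(f2_mapped, float)
    return bool(np.linalg.norm(f1) < t1 and np.linalg.norm(f1 + m) < t3)
```

满足条件的控制点对保留为目标控制点对,否则作为误匹配控制点对剔除。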
S207、将控制点对确定为误匹配控制点对,并剔除误匹配控制点对。
对于第一光流矢量和第二光流矢量不满足预设条件的控制点对,将该控制点对确定为误匹配控制点对,并剔除误匹配控制点对。
S208、将控制点对确定为目标控制点对,并保留目标控制点对。
对于第一光流矢量和第二光流矢量满足预设条件的控制点对,将该控制点对确定为目标控制点对,并保留目标控制点对。
S209、获取所有目标控制点对中位于待配准图像的重叠区域的第一光流控制点。
本申请在光流法的基础上结合薄板样条插值法,可以将重叠区域的映射扩展至非重叠区域,实现待配准图像与基准图像的全域对齐。
首先,获取所有目标控制点对中位于待配准图像的重叠区域的第一光流控制点。然后,基于光流法获取的第一光流控制点,采取薄板样条插值法,通过对第一光流控制点进行处理,对待配准图像和基准图像进行图像配准。
S210、使用薄板样条插值法,确定所有第一光流控制点中的异常控制点。
在一实施例中,在使用薄板样条插值法,将重叠区域的第一光流控制点插值至整幅待配准图像,得到基准图像和待配准图像的全局映射关系之前,可以先对所有的第一光流控制点进行筛选,以进一步提高图像配准的准确度。具体的,可以使用薄板样条插值法,确定所有第一光流控制点中的异常控制点。
为便于说明本申请使用薄板样条插值法筛选异常控制点以及插值、得到全局映射关系的过程,以下先对薄板样条插值法的原理进行介绍:
根据薄板样条插值(Thin Plate Spline,TPS)理论,平面内每个点的映射可以用各控制点与它们相应的权重来表示:

$$g(\mathbf{x}) = \alpha_1 + \alpha_2 x + \alpha_3 y + \sum_{i=1}^{n} \omega_i \varphi_i(\mathbf{x}) \tag{式1}$$

其中:$g(\mathbf{x})$为位置$\mathbf{x}=(x,y)$处的映射,$\omega_i$为第$i$个控制点对应的权重,$\alpha_1$、$\alpha_2$、$\alpha_3$为由控制点计算出的权重,$\varphi_i(\mathbf{x})$为点$\mathbf{x}=(x,y)$与第$i$个控制点之间的径向基函数(Radial Basis Function,RBF):

$$\varphi_i(\mathbf{x}) = \lVert \mathbf{x}-\mathbf{p}'_i \rVert^2 \log \lVert \mathbf{x}-\mathbf{p}'_i \rVert \tag{式2}$$

其中:$\mathbf{p}'_i$为第$i$个控制点的位置。

上述未知权重$\omega_i$、$\alpha_1$、$\alpha_2$、$\alpha_3$可由以下方程解得:

$$\begin{bmatrix} K & P \\ P^{\mathsf{T}} & 0 \end{bmatrix} \begin{bmatrix} \boldsymbol{\omega} \\ \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} \mathbf{f} \\ \mathbf{0} \end{bmatrix} \tag{式3}$$

其中:$K$为由第$j$个控制点与第$i$个控制点之间的RBF值$K_{ji}=\varphi_i(\mathbf{p}'_j)$组成的$n\times n$矩阵,$P$为由$n$个控制点的位置组成的$n\times 3$矩阵(第$j$行为$(1, x_j, y_j)$),$\mathbf{f}=(g_1,\ldots,g_n)^{\mathsf{T}}$为控制点的值(即光流)组成的向量。
上述非异常控制点的权重ω满足均值为0、标准差为σ的正态分布,则{|ω/σ|>t}的概率为2(1-Φ(t)),其中Φ(t)为标准正态分布的累积分布函数。例如,在确定所有第一光流控制点中的异常控制点时,具体可以为:若发生{|ωi/σ|>t},则可确定点i为异常点的概率大于0.5,进而将点i确定为异常控制点,并将其剔除。其中,t为一个常数,例如,t可以设置为3。可选的,根据需要,t也可以设置为其它数值。
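式3的求解以及按 |ω/σ|>t 剔除异常控制点的过程,可用如下 Python 草图表示(对光流的单个分量演示;σ 此处直接用权重的样本标准差估计,属于示例性假设):

```python
import numpy as np

def tps_weights(points, values):
    """求解式3的线性方程组,返回权重 ω 与 α=(α1, α2, α3)。"""
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = np.where(d > 0, d ** 2 * np.log(d + (d == 0)), 0.0)  # 式2:r^2·log r
    P = np.hstack([np.ones((n, 1)), pts])                    # 第 j 行为 (1, x_j, y_j)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.concatenate([np.asarray(values, float), np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def outlier_mask(omega, t=3.0):
    """按 |ω/σ| > t 标记异常控制点(σ 用权重的样本标准差近似)。"""
    omega = np.asarray(omega, float)
    return np.abs(omega) > t * (omega.std() + 1e-12)

# 纯仿射的"光流"数据:TPS 解应满足 ω≈0,α 即该仿射函数的系数
pts = [(0, 0), (2, 0), (0, 2), (2, 2), (1, 3)]
vals = [2 + 0.5 * x - y for x, y in pts]
omega, alpha = tps_weights(pts, vals)
```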
S211、从所有第一光流控制点中剔除异常控制点。
对于第一光流控制点中的异常控制点,将异常控制点从所有第一光流控制点中剔除。
S212、使用薄板样条插值法,将重叠区域的第一光流控制点插值至整幅待配准图像,得到基准图像和待配准图像的全局映射关系。
剔除掉异常控制点后,使用薄板样条插值法,将重叠区域的第一光流控制点插值至整幅待配准图像,得到基准图像和待配准图像的全局映射关系。
具体的,对于所有目标控制点对中位于待配准图像的重叠区域的第一光流控制点,将各第一光流控制点代入上述式3中,得到各第一光流控制点对应的权重ωi以及第一权重α1、第二权重α2、第三权重α3的数值。然后,根据上述非异常控制点满足的正态分布,在S211中剔除不满足该正态分布的异常控制点,得到剔除异常控制点后的第一光流控制点。将剔除异常控制点后的第一光流控制点及对应的权重代入上述式1,得到待配准图像相对基准图像的全局映射关系。
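将重叠区域的控制点按式1插值并外推到整幅图像(含非重叠区域)的过程可以如下示意(自包含的 Python 草图,对光流的单个分量演示;真实实现需对 u、v 两个分量各求解一次):

```python
import numpy as np

def tps_fit_evaluate(ctrl_pts, ctrl_vals, query_pts):
    """用控制点求解式3,再按式1在任意查询点(含非重叠区域)处取值。"""
    p = np.asarray(ctrl_pts, float)
    q = np.asarray(query_pts, float)
    n = len(p)

    def rbf(a, b):                       # 式2:φ(r) = r^2·log r
        r = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.where(r > 0, r ** 2 * np.log(r + (r == 0)), 0.0)

    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = rbf(p, p)
    A[:n, n:] = np.hstack([np.ones((n, 1)), p])
    A[n:, :n] = A[:n, n:].T
    b = np.concatenate([np.asarray(ctrl_vals, float), np.zeros(3)])
    sol = np.linalg.solve(A, b)
    omega, alpha = sol[:n], sol[n:]
    return rbf(q, p) @ omega + alpha[0] + alpha[1] * q[:, 0] + alpha[2] * q[:, 1]

# 重叠区域内的控制点给出仿射分布的光流,插值应把它平滑外推到非重叠区域
ctrl = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 6)]
vals = [1 + 2 * x + y for x, y in ctrl]
out = tps_fit_evaluate(ctrl, vals, [(10.0, 10.0), (1.0, 1.0)])
```

类似的功能也可由 scipy.interpolate.RBFInterpolator(kernel 取 'thin_plate_spline')提供,此处用 numpy 自行实现以保持自包含。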
本申请的图像配准方法在使用了薄板样条插值法后,能够对齐基准图像和待配准图像的全部区域,利用薄板样条插值法得到平滑的映射,避免重叠区域的配准引发非重叠区域变形失真的情况出现。
S213、基于全局映射关系,对待配准图像中的像素点进行映射,得到与基准图像对齐的配准图像。
基于全局映射关系,将待配准图像向基准图像进行配准时,可以对待配准图像中的像素点进行映射,得到与基准图像对齐的配准图像。该配准图像与基准图像中相同部位的图像,像素点之间的相对位置、灰度趋势等保持一致,可用于后续的图像拼接、图像融合等处理。
S214、在同一空间坐标系下将配准图像和基准图像进行拼接,得到拼接图像。
例如,基于已经对齐的配准图像和基准图像,可以在同一空间坐标系下将配准图像和基准图像进行拼接,将其中相同部位的图像实现重叠,不同部位的图像实现拼接,得到拼接图像。
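基于全局映射关系的像素映射与拼接可用如下草图示意(最近邻取样的简化 Python 实现;实际工程中常用 cv2.remap 等带双线性插值的实现,此处的函数与填充策略均为示例性假设):

```python
import numpy as np

def warp_with_map(img, map_x, map_y):
    """按全局映射取样:输出 (y, x) 处取 img[map_y[y,x], map_x[y,x]](最近邻)。
    返回配准图像与有效像素掩码(映射出界的像素无效)。"""
    h, w = img.shape
    xs, ys = np.rint(map_x).astype(int), np.rint(map_y).astype(int)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    out = np.zeros_like(img, dtype=float)
    out[valid] = img[ys[valid], xs[valid]]
    return out, valid

def stitch(ref, ref_valid, warped, warped_valid):
    """同一坐标系下拼接:重叠处取基准图像,其余由配准图像填充。"""
    return np.where(ref_valid, ref, np.where(warped_valid, warped, 0.0))

img = np.arange(16, dtype=float).reshape(4, 4)
ys, xs = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
warped, valid = warp_with_map(img, xs + 1, ys)   # 示例全局映射:向右取 1 像素

ref = np.full((4, 4), -1.0)                      # 基准图像(示意)
ref_valid = np.zeros((4, 4), bool); ref_valid[:, :2] = True
pano = stitch(ref, ref_valid, warped, valid)
```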
由上述可知,本申请实施例所提供的图像配准方法首先获取基准图像和待配准图像;然后根据光流法确定出基准图像和待配准图像中匹配的控制点对;进而基于控制点对,使用薄板样条插值法得到基准图像和待配准图像的第一映射关系;从而基于第一映射关系,将待配准图像向基准图像进行配准。本申请实施例结合了光流法与薄板样条插值法,通过光流法能够得到均匀且广泛分布的控制点,通过薄板样条插值法,在前述控制点的基础上得到平滑的映射,从而,在实现图像配准的同时减少图像的形变,提高图像配准的准确度。
应当注意,尽管在附图中以特定顺序描述了本申请中方法的各个步骤,但是,这并非要求或者暗示必须按照该特定顺序来执行这些步骤,或是必须执行全部所示的步骤才能实现期望的结果。附加的或备选的,可以省略某些步骤,将多个步骤合并为一个步骤执行,以及/或者将一个步骤分解为多个步骤执行等,因此实际执行的顺序有可能根据实际情况改变。
为便于更好的实施本申请实施例提供的图像配准方法,本申请实施例还提供一种基于上述图像配准方法的装置。其中名词的含义与上述图像配准方法中相同,具体实现细节可以参考方法实施例中的说明。
请参阅图7,图7为本申请实施例提供的图像配准装置300的第一种结构示意图。该图像配准装置300包括获取模块301、确定模块302、映射模块303和配准模块304:
获取模块301,用于获取基准图像和待配准图像;
确定模块302,用于根据光流法确定出基准图像和待配准图像中匹配的控制点对;
映射模块303,用于基于控制点对,使用薄板样条插值法得到基准图像和待配准图像的第一映射关系;
配准模块304,用于基于第一映射关系,将待配准图像向基准图像进行配准。
在一实施例中,基准图像与待配准图像存在重叠区域,在根据光流法确定出基准图像和待配准图像中匹配的控制点对时,确定模块302可以用于:
分别计算待配准图像的第一光流场以及基准图像的第二光流场;
在基准图像与待配准图像的重叠区域,基于第一光流场和第二光流场对基准图像和待配准图像中的像素点进行等间隔采样,得到基准图像和待配准图像中匹配的控制点对,其中,每一对控制点对包括一个位于待配准图像中的第一光流控制点以及一个位于基准图像中的第二光流控制点。
在一实施例中,在分别计算待配准图像的第一光流场以及基准图像的第二光流场时,确定模块302可以用于:
以基准图像为参考,对待配准图像进行光流计算,得到待配准图像的第一光流场,第一光流场包括待配准图像中各像素点的第一光流矢量;
以待配准图像为参考,对基准图像进行光流计算,得到基准图像的第二光流场,第二光流场包括基准图像中各像素点的第二光流矢量。
请参阅图8,图8为本申请实施例提供的图像配准装置300的第二种结构示意图。在一实施例中,控制点对中包括误匹配控制点对和目标控制点对,图像配准装置300还包括第一剔除模块305。在得到基准图像和待配准图像中匹配的控制点对之后,第一剔除模块305可以用于:
基于待配准图像的第一光流场以及基准图像的第二光流场,得到目标控制点对;
在一实施例中,在基于控制点对,使用薄板样条插值法得到基准图像和待配准图像的第一映射关系时,映射模块303可以用于:
基于目标控制点对,使用薄板样条插值法得到基准图像和待配准图像的第一映射关系。
在一实施例中,在基于待配准图像的第一光流场以及基准图像的第二光流场,得到目标控制点对时,第一剔除模块305可以用于:
对于每一对控制点对,获取第一光流控制点的第一光流矢量以及第二光流控制点的第二光流矢量;
若第一光流矢量和第二光流矢量不满足预设条件,则将控制点对确定为误匹配控制点对,并剔除误匹配控制点对;
若第一光流矢量和第二光流矢量满足预设条件,则将控制点对确定为目标控制点对,并保留目标控制点对。
在一实施例中,若第一光流矢量和第二光流矢量满足预设条件,则将控制点对确定为目标控制点对,并保留目标控制点对时,第一剔除模块305可以用于:
获取第一光流矢量的第一长度;
获取第一光流矢量与第二光流矢量的第一矢量和,并获取第一矢量和的第二长度;
若第一长度小于第一预设阈值且第二长度小于第二预设阈值,则判定第一光流矢量和第二光流矢量满足预设条件,将控制点对确定为目标控制点对,并保留目标控制点对。
在一实施例中,若第一光流矢量和第二光流矢量满足预设条件,则将控制点对确定为目标控制点对,并保留目标控制点对时,第一剔除模块305可以用于:
获取第一光流矢量的第一长度;
根据第一光流矢量生成第二映射关系;
根据第二映射关系,对第二光流矢量进行映射变换,得到第二光流矢量的映射矢量;
获取第一光流矢量与映射矢量的第二矢量和,并获取第二矢量和的第三长度;
若第一长度小于第一预设阈值且第三长度小于第三预设阈值,则判定第一光流矢量和第二光流矢量满足预设条件,将控制点对确定为目标控制点对,并保留目标控制点对。
在一实施例中,第一映射关系为全局映射关系,在基于目标控制点对,使用薄板样条插值法得到基准图像和待配准图像的第一映射关系时,映射模块303可以用于:
获取所有目标控制点对中位于待配准图像的重叠区域的第一光流控制点;
使用薄板样条插值法,将重叠区域的第一光流控制点插值至整幅待配准图像,得到基准图像和待配准图像的全局映射关系。
请继续参阅图8,在一实施例中,图像配准装置300还包括第二剔除模块306。在使用薄板样条插值法,将重叠区域的第一光流控制点插值至整幅待配准图像,得到基准图像和待配准图像的全局映射关系之前,第二剔除模块306可以用于:
使用薄板样条插值法,确定所有第一光流控制点中的异常控制点;
从所有第一光流控制点中剔除异常控制点。
在一实施例中,在基于第一映射关系,将待配准图像向基准图像进行配准时,配准模块304可以用于:
基于第一映射关系,对待配准图像中的像素点进行映射,得到与基准图像对齐的配准图像。
请继续参阅图8,在一实施例中,图像配准装置300还包括拼接模块307。在得到与基准图像对齐的配准图像之后,拼接模块307可以用于:
在同一空间坐标系下将配准图像和基准图像进行拼接,得到拼接图像。
由上述可知,本申请实施例所提供的图像配准装置300首先获取模块301获取基准图像和待配准图像;然后确定模块302根据光流法确定出基准图像和待配准图像中匹配的控制点对;进而映射模块303基于控制点对,使用薄板样条插值法得到基准图像和待配准图像的第一映射关系;从而配准模块304基于第一映射关系,将待配准图像向基准图像进行配准。本申请实施例结合了光流法与薄板样条插值法,通过光流法能够得到均匀且广泛分布的控制点,通过薄板样条插值法,在前述控制点的基础上得到平滑的映射,从而,在实现图像配准的同时减少图像的形变,提高图像配准的准确度。
本申请实施例还提供一种电子设备400。请参阅图9,电子设备400包括处理器401以及存储器402。其中,处理器401与存储器402电性连接。
该处理器401是电子设备400的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或加载存储在存储器402内的计算机程序,以及调用存储在存储器402内的数据,执行电子设备400的各种功能并处理数据,从而对电子设备400进行整体监控。
该存储器402可用于存储软件程序以及模块,处理器401通过运行存储在存储器402的计算机程序以及模块,从而执行各种功能应用以及数据处理。存储器402可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的计算机程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据电子设备的使用所创建的数据等。此外,存储器402可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。相应地,存储器402还可以包括存储器控制器,以提供处理器401对存储器402的访问。
在本申请实施例中,电子设备400中的处理器401会按照如下的步骤,将可在处理器401上执行的计算机程序存储在存储器402中,并由处理器401执行存储在存储器402中的计算机程序,从而实现各种功能,如下:
获取基准图像和待配准图像;
根据光流法确定出基准图像和待配准图像中匹配的控制点对;
基于控制点对,使用薄板样条插值法得到基准图像和待配准图像的第一映射关系;
基于第一映射关系,将待配准图像向基准图像进行配准。
请一并参阅图10,在某些实施方式中,电子设备400还可以包括:显示器403、射频电路404、音频电路405以及电源406。其中,显示器403、射频电路404、音频电路405以及电源406分别与处理器401电性连接。
该显示器403可以用于显示由用户输入的信息或提供给用户的信息以及各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频及其任意组合来构成。显示器403可以包括显示面板,在某些实施方式中,可以采用液晶显示器(Liquid Crystal Display,LCD)、或者有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板。
该射频电路404可以用于收发射频信号,以通过无线通信与网络设备或其他电子设备建立无线通讯,与网络设备或其他电子设备之间收发信号。
该音频电路405可以用于通过扬声器、传声器提供用户与电子设备之间的音频接口。
该电源406可以用于给电子设备400的各个部件供电。在一些实施例中,电源406可以通过电源管理系统与处理器401逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管图中未示出,电子设备400还可以包括摄像头、蓝牙模块等,在此不再赘述。
本申请实施例还提供一种计算机可读的存储介质,该计算机可读的存储介质存储有计算机程序,该计算机程序被处理器执行,以实现上述任一实施例中的图像配准方法,比如:获取基准图像和待配准图像;根据光流法确定出基准图像和待配准图像中匹配的控制点对;基于控制点对,使用薄板样条插值法得到基准图像和待配准图像的第一映射关系;基于第一映射关系,将待配准图像向基准图像进行配准。
在本申请实施例中,计算机可读的存储介质可以是磁碟、光盘、只读存储器(Read Only Memory,ROM)、或者随机存取记忆体(Random Access Memory,RAM)等。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
需要说明的是,对本申请实施例的图像配准方法而言,本领域普通技术人员可以理解,实现本申请实施例的图像配准方法的全部或部分流程,可以通过计算机程序来控制相关的硬件来完成,该计算机程序可存储于一计算机可读的存储介质中,如存储在电子设备的存储器中,并被该电子设备内的至少一个处理器执行,在执行过程中可包括如图像配准方法的实施例的流程。其中,该计算机可读的存储介质可为磁碟、光盘、只读存储器、随机存取记忆体等。
对本申请实施例的图像配准装置而言,其各功能模块可以集成在一个处理芯片中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。该集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读的存储介质中,该计算机可读的存储介质譬如为只读存储器,磁盘或光盘等。
本文所使用的术语“模块”可看作在该运算系统上执行的软件对象。本文所述的不同组件、模块、引擎及服务可看作在该运算系统上的实施对象。而本文所述的装置及方法优选地以软件的方式进行实施,当然也可在硬件上进行实施,均在本申请保护范围之内。
以上对本申请实施例所提供的一种图像配准方法、装置、存储介质及电子设备进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上,本说明书内容不应理解为对本申请的限制。

Claims (15)

  1. 一种图像配准方法,其特征在于,包括:
    获取基准图像和待配准图像;
    根据光流法确定出所述基准图像和所述待配准图像中匹配的控制点对;
    基于所述控制点对,使用薄板样条插值法得到所述基准图像和所述待配准图像的第一映射关系;
    基于所述第一映射关系,将所述待配准图像向所述基准图像进行配准。
  2. 根据权利要求1所述的图像配准方法,其特征在于,所述基准图像与所述待配准图像存在重叠区域,所述根据光流法确定出所述基准图像和所述待配准图像中匹配的控制点对包括:
    分别计算所述待配准图像的第一光流场以及所述基准图像的第二光流场;
    在所述基准图像与所述待配准图像的重叠区域,基于所述第一光流场和所述第二光流场对所述基准图像和所述待配准图像中的像素点进行等间隔采样,得到所述基准图像和所述待配准图像中匹配的控制点对,其中,每一对所述控制点对包括一个位于所述待配准图像中的第一光流控制点以及一个位于所述基准图像中的第二光流控制点。
  3. 根据权利要求2所述的图像配准方法,其特征在于,所述分别计算所述待配准图像的第一光流场以及所述基准图像的第二光流场包括:
    以所述基准图像为参考,对所述待配准图像进行光流计算,得到所述待配准图像的第一光流场,所述第一光流场包括所述待配准图像中各像素点的第一光流矢量;
    以所述待配准图像为参考,对所述基准图像进行光流计算,得到所述基准图像的第二光流场,所述第二光流场包括所述基准图像中各像素点的第二光流矢量。
  4. 根据权利要求2所述的图像配准方法,其特征在于,所述控制点对中包括误匹配控制点对和目标控制点对,所述得到所述基准图像和所述待配准图像中匹配的控制点对之后,还包括:
    基于所述待配准图像的第一光流场以及所述基准图像的第二光流场,得到目标控制点对。
  5. 根据权利要求4所述的图像配准方法,其特征在于,所述基于所述控制点对,使用薄板样条插值法得到所述基准图像和所述待配准图像的第一映射关系包括:
    基于所述目标控制点对,使用薄板样条插值法得到所述基准图像和所述待配准图像的第一映射关系。
  6. 根据权利要求5所述的图像配准方法,其特征在于,所述基于所述待配准图像的第一光流场以及所述基准图像的第二光流场,得到目标控制点对包括:
    对于每一对所述控制点对,获取所述第一光流控制点的第一光流矢量以及所述第二光流控制点的第二光流矢量;
    若所述第一光流矢量和所述第二光流矢量不满足预设条件,则将所述控制点对确定为误匹配控制点对,并剔除所述误匹配控制点对;
    若所述第一光流矢量和所述第二光流矢量满足预设条件,则将所述控制点对确定为目标控制点对,并保留所述目标控制点对。
  7. 根据权利要求6所述的图像配准方法,其特征在于,所述若所述第一光流矢量和所述第二光流矢量满足预设条件,则将所述控制点对确定为目标控制点对,并保留所述目标控制点对包括:
    获取所述第一光流矢量的第一长度;
    获取所述第一光流矢量与所述第二光流矢量的第一矢量和,并获取所述第一矢量和的第二长度;
    若所述第一长度小于第一预设阈值且所述第二长度小于第二预设阈值,则判定所述第一光流矢量和所述第二光流矢量满足预设条件,将所述控制点对确定为目标控制点对,并保留所述目标控制点对。
  8. 根据权利要求6所述的图像配准方法,其特征在于,所述若所述第一光流矢量和所述第二光流矢量满足预设条件,则将所述控制点对确定为目标控制点对,并保留所述目标控制点对包括:
    获取所述第一光流矢量的第一长度;
    根据所述第一光流矢量生成第二映射关系;
    根据所述第二映射关系,对所述第二光流矢量进行映射变换,得到所述第二光流矢量的映射矢量;
    获取所述第一光流矢量与所述映射矢量的第二矢量和,并获取所述第二矢量和的第三长度;
    若所述第一长度小于第一预设阈值且所述第三长度小于第三预设阈值,则判定所述第一光流矢量和所述第二光流矢量满足预设条件,将所述控制点对确定为目标控制点对,并保留所述目标控制点对。
  9. 根据权利要求4所述的图像配准方法,其特征在于,所述第一映射关系为全局映射关系,所述基于所述目标控制点对,使用薄板样条插值法得到所述基准图像和所述待配准图像的第一映射关系包括:
    获取所有目标控制点对中位于所述待配准图像的重叠区域的第一光流控制点;
    使用薄板样条插值法,将所述重叠区域的第一光流控制点插值至整幅待配准图像,得到所述基准图像和所述待配准图像的全局映射关系。
  10. 根据权利要求9所述的图像配准方法,其特征在于,所述使用薄板样条插值法,将所述重叠区域的第一光流控制点插值至整幅待配准图像,得到所述基准图像和所述待配准图像的全局映射关系之前,还包括:
    使用薄板样条插值法,确定所有第一光流控制点中的异常控制点;
    从所有第一光流控制点中剔除所述异常控制点。
  11. 根据权利要求1所述的图像配准方法,其特征在于,所述基于所述第一映射关系,将所述待配准图像向所述基准图像进行配准包括:
    基于所述第一映射关系,对所述待配准图像中的像素点进行映射,得到与所述基准图像对齐的配准图像。
  12. 根据权利要求11所述的图像配准方法,其特征在于,所述得到与所述基准图像对齐的配准图像之后,还包括:
    在同一空间坐标系下将所述配准图像和所述基准图像进行拼接,得到拼接图像。
  13. 一种图像配准装置,其特征在于,包括:
    获取模块,用于获取基准图像和待配准图像;
    确定模块,用于根据光流法确定出所述基准图像和所述待配准图像中匹配的控制点对;
    映射模块,用于基于所述控制点对,使用薄板样条插值法得到所述基准图像和所述待配准图像的第一映射关系;
    配准模块,用于基于所述第一映射关系,将所述待配准图像向所述基准图像进行配准。
  14. 一种计算机可读的存储介质,其特征在于,所述存储介质上存储有计算机程序,所述计算机程序被处理器执行,以实现如权利要求1至12任一项所述的图像配准方法。
  15. 一种电子设备,其特征在于,所述电子设备包括处理器、存储器以及存储于所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序,以实现如权利要求1至12任一项所述的图像配准方法。
PCT/CN2023/079053 2022-03-09 2023-03-01 图像配准方法、装置、存储介质及电子设备 WO2023169281A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210226741.6A CN114742866A (zh) 2022-03-09 2022-03-09 图像配准方法、装置、存储介质及电子设备
CN202210226741.6 2022-03-09
