WO2022267287A1 - Image registration method and related apparatus, device, and storage medium - Google Patents


Info

Publication number
WO2022267287A1
WO2022267287A1 (PCT/CN2021/127346)
Authority
WO
WIPO (PCT)
Prior art keywords
image
matching point
target
registered
target image
Prior art date
Application number
PCT/CN2021/127346
Other languages
English (en)
French (fr)
Inventor
王求元 (Wang Qiuyuan)
Original Assignee
浙江商汤科技开发有限公司 (Zhejiang SenseTime Technology Development Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江商汤科技开发有限公司 (Zhejiang SenseTime Technology Development Co., Ltd.)
Publication of WO2022267287A1 publication Critical patent/WO2022267287A1/zh



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06T3/14
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/60 Rotation of a whole image or part thereof

Definitions

  • The present disclosure relates to the technical field of artificial intelligence, and in particular to an image registration method and related apparatus, device, and storage medium.
  • Augmented Reality (AR)
  • Virtual Reality (VR)
  • Image registration is a focus of research in computer vision fields such as AR and VR.
  • With image registration technology, the transformation parameters between the target image and the image to be registered (captured by a camera) can be obtained, so that the position of the target image in the image to be registered can subsequently be determined from these parameters.
  • Existing image registration techniques obtain fairly accurate registration parameters when the target image occupies a large proportion of the image to be registered, but often fail to register accurately when that proportion is small.
  • Embodiments of the present disclosure provide an image registration method and a related device, device, and storage medium.
  • An embodiment of the present disclosure provides an image registration method, including: acquiring a target image and an image to be registered; extracting several first feature points of the target image and several second feature points of the image to be registered; selecting at least one group of first matching point pairs based on the degree of matching between the first feature points and the second feature points, wherein each group of first matching point pairs includes a first feature point and a second feature point; and obtaining, based on the direction information of the first matching point pairs, the final transformation parameters between the target image and the image to be registered.
  • In this way, the rotation angle of the image to be registered relative to the target image can be obtained, and this rotation angle information can then be used to obtain the final transformation parameters between the target image and the image to be registered, finally achieving image registration.
  • Moreover, fewer feature points can be used for image registration, so the registration is not affected by the proportion of the target image in the image to be registered: even if the target image occupies a relatively small proportion of the image to be registered, accurate image registration can still be achieved, which improves the accuracy of image registration.
  • In some embodiments, the aforementioned extraction of several first feature points of the target image includes: scaling the target image to obtain at least one scaled image with different resolutions; and extracting at least one first feature point from each image among the target image and the at least one scaled image to obtain the several first feature points. And/or, the degree of matching between the first feature points and the second feature points is obtained based on the distance between the feature representations of the first feature points and the second feature points.
  • the registration accuracy of the image registration method provided by the embodiments of the present disclosure for different target image scales can be further improved.
  • In some embodiments, the aforementioned scaling of the target image to obtain at least one scaled image with different resolutions includes: determining a preset scale between the target image and the image to be registered; generating at least one derived scale based on the preset scale, wherein each derived scale is different and smaller than the preset scale; and scaling the target image based on each derived scale to obtain a corresponding scaled image.
  • By using at least one derived scale smaller than the preset scale, at least one small-scale target image can be obtained, thereby improving the accuracy of image registration at small scales in subsequent registration.
  • In some embodiments, the aforementioned determination of the preset scale between the target image and the image to be registered includes: obtaining the preset scale as a ratio between the target image and the image to be registered.
  • In this way, a series of scales can be obtained based on the preset scale, and the target image can be scaled according to these scales, which can improve the registration accuracy of the image registration method provided by the embodiments of the present disclosure for different target image scales.
  • In some embodiments, obtaining the final transformation parameters between the target image and the image to be registered based on the direction information of the first matching point pairs includes: obtaining, based on the direction information of the first matching point pairs, the first candidate transformation parameters corresponding to the first matching point pairs, and using a first candidate transformation parameter that meets a preset requirement as the final transformation parameter.
  • In some embodiments, obtaining the first candidate transformation parameters corresponding to the first matching point pairs and using a first candidate transformation parameter that meets the preset requirement as the final transformation parameter includes: selecting one group of the first matching point pairs as the target matching point pair; obtaining, based on the direction information of the target matching point pair, the first candidate transformation parameter corresponding to the target matching point pair; judging whether the first candidate transformation parameter corresponding to the target matching point pair meets the preset requirement; and, in response to the first candidate transformation parameter corresponding to the target matching point pair meeting the preset requirement, using that first candidate transformation parameter as the final transformation parameter.
  • the final transformation parameters between the target image and the image to be registered can be obtained to achieve image registration.
  • In some embodiments, groups of the at least one group of first matching point pairs are selected as the target matching point pair in descending order of matching degree. And/or, after judging whether the first candidate transformation parameter corresponding to the target matching point pair meets the preset requirement, the method further includes: in response to the first candidate transformation parameter corresponding to the target matching point pair not meeting the preset requirement, selecting a new group of first matching point pairs as the target matching point pair, and re-executing the step of obtaining the first candidate transformation parameter based on the direction information of the target matching point pair and its subsequent steps; and, in response to no first candidate transformation parameter satisfying the preset requirement being found within a preset time, determining that the final transformation parameter cannot be obtained.
  • In this way, the terminal can proceed to other steps instead of stalling, avoiding problems such as delays and unresponsiveness.
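The retry-with-timeout flow described above can be sketched as a simple loop. This is an illustrative reconstruction, not the patent's implementation; `estimate_transform` and `meets_requirement` are hypothetical stand-ins for the direction-based candidate estimation and the preset-requirement check:

```python
import time

def find_final_transform(matched_pairs, estimate_transform, meets_requirement,
                         time_budget_s=1.0):
    """Try candidate transforms pair by pair, best match first.

    `matched_pairs` is assumed sorted by matching degree (best first).
    Returns the first candidate that meets the requirement, or None if
    the preset time budget runs out without success.
    """
    deadline = time.monotonic() + time_budget_s
    for pair in matched_pairs:
        candidate = estimate_transform(pair)   # direction-based candidate
        if meets_requirement(candidate):       # e.g. similarity check
            return candidate                   # accept the first that passes
        if time.monotonic() > deadline:        # give up within the preset time
            break
    return None
```

With stub functions, `find_final_transform([1, 2, 3], lambda p: p * 10, lambda c: c >= 20)` walks the pairs in order and returns the first passing candidate, `20`.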
  • In some embodiments, obtaining the first candidate transformation parameters corresponding to the first matching point pairs based on their direction information includes: extracting, from the target image, a first image area containing the first matching point, and extracting, from the image to be registered, a second image area containing the second matching point, wherein the first matching point and the second matching point are respectively the first feature point and the second feature point in the first matching point pair; determining a first deflection angle of the first image area and a second deflection angle of the second image area; and obtaining the first candidate transformation parameters based on the first deflection angle and the second deflection angle.
  • these first deflection angles and second deflection angles can be used to obtain the first candidate transformation parameters to achieve subsequent image registration .
  • In some embodiments, the above-mentioned obtaining of the first candidate transformation parameters based on the first deflection angle and the second deflection angle includes: obtaining the first candidate transformation parameters based on the scale corresponding to the first matching point pair, the first deflection angle, and the second deflection angle, wherein the scale corresponding to the first matching point pair is the scale between the images where the first matching point pair is located.
  • the first candidate transformation parameters can be obtained by using the direction information of the target matching point pair and the coordinate information of the first feature point and the second feature point in the target matching point pair, so as to realize subsequent image registration.
  • In some embodiments, obtaining the first candidate transformation parameter based on the scale corresponding to the first matching point pair, the first deflection angle, and the second deflection angle includes: obtaining the angle difference between the first deflection angle and the second deflection angle; and obtaining the first candidate transformation parameter based on the angle difference and the scale corresponding to the first matching point pair.
  • the first candidate transformation parameter can be obtained to implement subsequent image registration.
  • In some embodiments, the center of the above-mentioned first image area is the center of the target image. And/or, the first deflection angle is the directional angle between a preset direction and the line connecting the centroid of the first image area to the center of the first image area; the second deflection angle is the directional angle between the preset direction and the line connecting the centroid of the second image area to the center of the second image area.
  • the first deflection angle and the second deflection angle can be obtained.
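One common way to realise such a centroid-based deflection angle is the intensity-centroid orientation used by detectors like ORB. The sketch below assumes a grayscale patch and takes the +x axis as the "preset direction"; both choices are illustrative assumptions, not details from the text:

```python
import numpy as np

def deflection_angle(patch):
    """Angle (radians) between the +x axis and the line from the patch's
    geometric centre to its intensity centroid.

    Mirrors the centroid-to-centre construction described above; the
    grayscale patch and +x preset direction are illustrative choices.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    total = patch.sum()
    cx = (xs * patch).sum() / total   # intensity centroid, x
    cy = (ys * patch).sum() / total   # intensity centroid, y
    # vector from the geometric centre to the intensity centroid
    dx, dy = cx - (w - 1) / 2.0, cy - (h - 1) / 2.0
    return np.arctan2(dy, dx)
```

For a patch whose mass lies to the right of centre the angle is 0; mass below centre (larger row index) gives pi/2, matching the usual image-coordinate convention.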
  • In some embodiments, before extracting the several first feature points of the target image and the several second feature points of the image to be registered, the method further includes: selecting several groups of second matching point pairs in the target image and the image to be registered; synthesizing the position information of the several groups of second matching point pairs to obtain a second candidate transformation parameter; when the second candidate transformation parameter satisfies the preset requirement, using the second candidate transformation parameter as the final transformation parameter; and when the second candidate transformation parameter does not meet the preset requirement, performing the extraction of the several first feature points of the target image and the several second feature points of the image to be registered and its subsequent steps.
  • With the above method, the feature points and feature representations of the image can first be used to perform image registration; when accurate registration cannot be achieved that way (for example, because the target image occupies only a small proportion of the image to be registered), the direction information of the feature points is used for image registration instead, reducing registration failures and improving the accuracy of image registration.
  • In some embodiments, the above-mentioned preset requirement includes: the similarity between the target area corresponding to the candidate transformation parameter and the target image meeting a preset similarity requirement, where the target area corresponding to the candidate transformation parameter is the region corresponding to the target image that is determined in the image to be registered using that transformation parameter.
  • In this way, the accuracy of the first candidate transformation parameters can be assessed, and the first candidate transformation parameters that meet the requirement can then be selected as the final transformation parameters.
  • In some embodiments, before extracting the first feature points of the target image and the second feature points of the image to be registered, the method further includes: in response to the shape of the target image being different from the shape of the image to be registered, expanding the target image into an image with the same shape as the image to be registered.
  • The above-mentioned expansion of the target image makes it possible to complete image registration even when the target image has an arbitrary shape, which is beneficial to improving the robustness of image registration.
  • An embodiment of the present disclosure provides an image registration device, including: an image acquisition part, a feature extraction part, a feature matching part, and a determination part.
  • the image acquisition part is configured to acquire the target image and the image to be registered;
  • the feature extraction part is configured to extract several first feature points of the target image and several second feature points of the image to be registered;
  • the feature matching part is configured to select at least one group of first matching point pairs based on the degree of matching between the first feature points and the second feature points, wherein each group of first matching point pairs includes a first feature point and a second feature point;
  • the determination part is configured to obtain, based on the direction information of the first matching point pairs, the final transformation parameters between the target image and the image to be registered.
  • an embodiment of the present disclosure provides an electronic device, including a memory and a processor coupled to each other, and the processor is configured to execute program instructions stored in the memory, so as to implement the image registration method in the first aspect above.
  • an embodiment of the present disclosure provides a computer-readable storage medium on which program instructions are stored, and when the program instructions are executed by a processor, the image registration method in the above-mentioned first aspect is implemented.
  • An embodiment of the present disclosure provides a computer program, including computer-readable code; when the computer-readable code is run in an electronic device, a processor in the electronic device executes it to implement the image registration method in the above first aspect.
  • In this way, the rotation angle of the image to be registered relative to the target image can be obtained, and this rotation angle information can then be used to obtain the final transformation parameters between the target image and the image to be registered, finally achieving image registration.
  • Moreover, fewer feature points can be used for image registration, so the registration is not affected by the proportion of the target image in the image to be registered: even if the target image occupies a relatively small proportion of the image to be registered, accurate image registration can still be achieved, which improves the accuracy of image registration.
  • FIG. 1 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure;
  • FIG. 3 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of an embodiment of a deflection angle acquisition method;
  • FIG. 6 is a schematic diagram of an embodiment of expanding a target image;
  • FIG. 7 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure;
  • FIG. 8 is a schematic framework diagram of an optional image registration device provided by an embodiment of the present disclosure;
  • FIG. 9 is a schematic framework diagram of an optional electronic device provided by an embodiment of the present disclosure;
  • FIG. 10 is a schematic framework diagram of an optional computer-readable storage medium provided by an embodiment of the present disclosure.
  • "System" and "network" are often used interchangeably herein.
  • The term "and/or" herein merely describes an association between objects and indicates that three relationships are possible; for example, A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • The character "/" herein generally indicates an "or" relationship between the associated objects.
  • "Multiple" herein means two or more.
  • FIG. 1 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure. The method may include the following steps:
  • Step S11 Obtain the target image and the image to be registered.
  • the image to be registered may be an image captured by a camera.
  • For example, the images to be registered can be images captured by electronic devices such as mobile phones, tablet computers, and smart glasses; or, in video surveillance scenarios, the images to be registered can be images captured by surveillance cameras. This is not limited here; other scenarios can be deduced in the same way, and examples are not given one by one here.
  • the target image may be included in the image to be registered. When the target image is included in the image to be registered, registration of the target image and the image to be registered can be realized.
  • the target image may be an image on a plane, for example, on a flat ground or a flat wall.
  • the target image may be acquired in advance, that is, the target image may be predetermined before executing the image registration method provided by the embodiment of the present disclosure.
  • The target image can be set according to the actual application. For example, if the position of building A in the image to be registered needs to be determined, an image of building A can be obtained in advance; or, if the position of person B in the image to be registered needs to be determined, an image of person B can be obtained in advance. Other situations can be deduced in the same way, so examples are not given one by one here.
  • The target image may also be determined from acquired images. For example, the interior of a building can be photographed in advance to obtain a certain number of images of the interior, and a specific one of these images can then be selected as the target image; if an image includes a painting, the painting can be used as the target image.
  • Step S12 Extract several first feature points of the target image and several second feature points of the image to be registered.
  • a feature extraction operation can be performed on the target image and the image to be registered to obtain feature information about the target image and the image to be registered.
  • some feature extraction algorithms may be used for feature extraction to obtain feature points in the image, and the number of feature points is not limited.
  • Feature extraction algorithms are, for example, FAST (features from accelerated segment test) algorithm, SIFT (Scale-invariant feature transform) algorithm, ORB (Oriented FAST and Rotated BRIEF) algorithm, etc.
  • the feature extraction algorithm is an ORB algorithm.
  • a feature representation corresponding to each feature point is also obtained, and the feature representation is, for example, a feature vector.
  • each feature point has a corresponding feature representation.
  • Feature extraction is performed on the target image, and the obtained feature points are defined as the first feature points.
  • Feature extraction is performed on the image to be registered, and the obtained feature points are defined as second feature points.
  • In some embodiments, the feature extraction algorithm is the FAST algorithm. In this case, the extracted feature points can be sorted by the size of their response values, and the top Y feature points can be selected as the first feature points.
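The response-based selection just described can be sketched in a few lines. The `(x, y, response)` tuple layout for a keypoint is an assumption for illustration, not a structure defined in the text:

```python
def top_responses(keypoints, y):
    """Keep the Y feature points with the largest response values.

    `keypoints` is assumed to be a list of (x, y, response) tuples,
    e.g. corners from a FAST-style detector.
    """
    # sort by response, strongest first, then truncate to Y entries
    ranked = sorted(keypoints, key=lambda kp: kp[2], reverse=True)
    return ranked[:y]
```

For example, `top_responses([(0, 0, 1.0), (1, 1, 5.0), (2, 2, 3.0)], 2)` keeps the two strongest corners in descending response order.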
  • the feature points obtained by feature extraction through the feature extraction algorithm mentioned in the above embodiments may be considered to be located on the same plane as the target image.
  • Step S13 Select at least one set of first matching point pairs based on the matching degree between the first feature point and the second feature point, wherein each set of first matching point pairs includes the first feature point and the second feature point.
  • the matching degree between the first feature point and the second feature point may be the matching degree between the feature representation of the first feature point and the feature representation of the second feature point.
  • the matching degree between each first feature point and each second feature point may be calculated, so as to obtain the matching degree between each first feature point and each second feature point.
  • In some embodiments, the matching degree between a first feature point and a second feature point is obtained based on the distance between their feature representations. Therefore, the matching degree between feature points can be obtained by calculating the distance between their feature representations. For example, the distance between the feature representations of two feature points (one first feature point and one second feature point) measures the matching degree: the smaller the distance, the better the match, and the pair with the smallest distance can be considered the best match.
  • the feature representations are feature vectors, and the distance between feature representations is the distance between feature vectors.
  • the distance between feature vectors is, for example, Euclidean distance, cosine similarity, normalized Euclidean distance, etc., which are not limited here.
  • At least one set of first matching point pairs is selected.
  • Each set of first matching point pairs includes first feature points and second feature points.
  • If there are N first feature points and M second feature points, N×M corresponding distances can be obtained, i.e., N×M candidate first matching point pairs.
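A minimal sketch of this distance-based matching, assuming real-valued feature vectors compared with Euclidean distance (the text also permits other distances, such as cosine similarity or normalized Euclidean distance):

```python
import numpy as np

def match_by_distance(desc_a, desc_b, max_pairs=None):
    """Pair each first-image descriptor with its nearest second-image
    descriptor by Euclidean distance, best matches first.

    desc_a: (N, D) feature representations of the first feature points.
    desc_b: (M, D) feature representations of the second feature points.
    Returns a list of (i, j, distance) tuples sorted by ascending
    distance, i.e. descending matching degree.
    """
    # all N*M pairwise distances, as described in the text
    diffs = desc_a[:, None, :] - desc_b[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)            # shape (N, M)
    nearest = dists.argmin(axis=1)                    # best j for each i
    matches = [(i, int(j), float(dists[i, j])) for i, j in enumerate(nearest)]
    matches.sort(key=lambda m: m[2])                  # smallest distance first
    return matches[:max_pairs] if max_pairs else matches
```

Sorting the pairs by distance also yields the high-to-low matching-degree order used later when selecting the target matching point pair.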
  • Step S14 Obtain the final transformation parameters between the target image and the image to be registered based on the direction information of the first matching point pair.
  • the direction information of the first matching point pair can be calculated.
  • The direction information of a first matching point pair can be obtained from the directions of the first feature point and the second feature point in that pair.
  • the direction information of the first matching point pair may be a difference between the direction of the first feature point and the direction of the second feature point.
  • For example, the direction information of the first matching point pair can be the difference between the corner orientation angle of the first feature point and the corner orientation angle of the second feature point.
  • The rotation angle of the image to be registered relative to the target image, represented by the direction information of the first matching point pair, can thus be used to perform image registration, and the final transformation parameters between the target image and the image to be registered can finally be obtained.
  • the final transformation parameter is, for example, a homography matrix corresponding to the target image and the image to be registered.
  • The direction information of the first matching point pair and the coordinate information of the first feature point and the second feature point in the pair can be used to obtain the final transformation parameters between the target image and the image to be registered.
  • In this way, the rotation angle of the image to be registered relative to the target image can be obtained, and this rotation angle information can then be used to obtain the final transformation parameters between the target image and the image to be registered, finally achieving image registration.
  • Moreover, fewer feature points can be used for image registration, so the registration is not affected by the proportion of the target image in the image to be registered: even if the target image occupies a relatively small proportion of the image to be registered, accurate image registration can still be achieved, which improves the accuracy of image registration.
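As an illustration of how one matching pair's direction and scale information can yield a candidate transformation, the sketch below builds a 2D similarity transform. The exact parameterisation is not specified in the text, so this is a hedged reconstruction: rotate by the deflection-angle difference, scale by the pair's scale, and translate so the first matching point lands exactly on the second:

```python
import numpy as np

def candidate_transform(p1, p2, angle1, angle2, scale):
    """Build a 3x3 similarity transform from a single matching point pair.

    p1, p2: (x, y) of the matched first/second feature points.
    angle1, angle2: the two deflection angles; their difference is the
    rotation of the image to be registered relative to the target image.
    scale: the scale between the images where the pair is located.
    All of this is an illustrative reconstruction of the 'first candidate
    transformation parameter', not the patent's exact formula.
    """
    theta = angle2 - angle1                       # rotation between images
    c, s = np.cos(theta), np.sin(theta)
    r = scale * np.array([[c, -s], [s, c]])       # scaled rotation block
    t = np.asarray(p2, float) - r @ np.asarray(p1, float)  # p1 -> p2
    h = np.eye(3)
    h[:2, :2], h[:2, 2] = r, t
    return h
```

Applying the matrix to the first matching point in homogeneous coordinates maps it onto the second matching point, which is the defining constraint of this construction.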
  • The above-mentioned obtaining of the final transformation parameters between the target image and the image to be registered based on the direction information of the first matching point pairs can be realized by the following scheme: based on the direction information of the first matching point pairs, obtaining the first candidate transformation parameters corresponding to the first matching point pairs, and using a first candidate transformation parameter that meets a preset requirement as the final transformation parameter.
  • Multiple transformation parameters between the target image and the image to be registered can be obtained from the multiple groups of first matching point pairs; these transformation parameters can be defined as the first candidate transformation parameters.
  • It may be determined whether to use a first candidate transformation parameter as the final transformation parameter by judging whether it meets the preset requirement; when the preset requirement is met, that first candidate transformation parameter is used as the final transformation parameter. In this way, by screening the first candidate transformation parameters, more accurate final transformation parameters can be obtained.
  • the preset requirement includes: the similarity between the target region corresponding to the corresponding candidate transformation parameter and the target image meets the preset similarity requirement.
  • the target region corresponding to the corresponding candidate transformation parameter is the region corresponding to the target image determined in the image to be registered by using the corresponding candidate transformation parameter.
  • the region corresponding to the target image determined in the image to be registered may be determined by determining points corresponding to edge points of the target image in the image to be registered.
  • the target image is a quadrilateral, and its edge points can be the points corresponding to the four corners.
  • Using the corresponding candidate transformation parameter, the points corresponding to the edge points of the target image can be determined in the image to be registered, thereby determining the region corresponding to the target image.
  • Alternatively, among the second feature points obtained from the image to be registered, the point that best matches the first feature point at an edge point of the target image can be determined, so as to obtain the point corresponding to that edge point in the image to be registered.
  • For example, if an edge point of the target image is A and the point that best matches A in the image to be registered is B, then B is the point corresponding to edge point A in the image to be registered.
  • Each obtained first candidate transformation parameter can be used to determine a corresponding transformed region, yielding transformed regions for the multiple first candidate transformation parameters. The similarity between each of these regions and the target image is then compared, and the first candidate transformation parameter whose similarity meets the preset similarity requirement is selected as the final transformation parameter.
  • To calculate the similarity, image matching algorithms can be used, such as the Mean Absolute Differences (MAD) algorithm, the Sum of Absolute Differences (SAD) algorithm, the Sum of Squared Differences (SSD) algorithm, the Mean Square Differences (MSD) algorithm, the Normalized Cross Correlation (NCC) algorithm, the Sequential Similarity Detection Algorithm (SSDA), the Hadamard-transform-based Sum of Absolute Transformed Differences (SATD) algorithm, etc.; no limitation is imposed here.
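For example, the NCC score mentioned above can be computed as follows for two equal-sized regions (a minimal sketch; the threshold for accepting a candidate is an application-specific choice not given in the text):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized image regions.

    Returns a value in [-1, 1]; a score near 1 means the region determined
    by a candidate transform closely resembles the target image, so the
    candidate could be accepted when ncc(...) exceeds a preset threshold.
    """
    a = a.astype(float).ravel() - a.mean()   # zero-mean vectors
    b = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

Identical regions score 1.0 and contrast-inverted regions score -1.0, which is why NCC is robust to uniform brightness and contrast changes between the target image and the warped region.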
  • In this way, the accuracy of the first candidate transformation parameters can be assessed, and the first candidate transformation parameters that meet the requirement can then be selected as the final transformation parameters.
  • Whether the first candidate transformation parameters meet the preset requirement may be judged one by one: first judge whether one first candidate transformation parameter meets the requirement; if it does, the other first candidate transformation parameters are no longer judged and it is directly used as the final transformation parameter; if it does not, another first candidate transformation parameter is judged, and so on.
  • In this way, the calculation speed of the image registration method provided by the embodiments of the present disclosure can be improved, since the calculation can be stopped once a first candidate transformation parameter that meets the preset requirements is obtained.
  • FIG. 2 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure.
  • This embodiment is an extension of "extracting several first feature points of the target image" mentioned in the above steps, and may include the following steps:
  • Step S121: Scale the target image to obtain at least one scaled image with different resolutions.
  • Scaling the target image may be enlarging or reducing the target image. Enlarging the target image is, for example, an upsampling operation, and reducing the target image is, for example, a downsampling operation. Scaling the target image to obtain at least one scaled image with different resolutions can be used to establish an image pyramid of the target image. By obtaining at least one scaled image with different resolutions, the registration accuracy of the image registration method provided by the embodiments of the present disclosure at different scales can be improved.
  • In order to improve the accuracy of image registration at small scales, the target image may be reduced to obtain several reduced target images with smaller resolutions.
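A minimal sketch of this reduction step, using simple 2×2 average pooling as the downsampling operation (the actual interpolation method is not specified above; `downsample` and `build_pyramid` are illustrative names):

```python
import numpy as np

def downsample(img):
    """Halve the resolution by 2x2 average pooling (a simple downsampling)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(target, levels=3):
    """Return the target image plus reduced versions with smaller resolutions."""
    images = [target.astype(float)]
    for _ in range(levels - 1):
        images.append(downsample(images[-1]))
    return images
```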
  • FIG. 3 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure. "Scaling the target image to obtain at least one scaled image with different resolutions" may include the following steps S1211 to S1213.
  • Step S1211 Determine the preset scale between the target image and the image to be registered.
  • The scaling scale may be determined in advance, that is, a preset scale, with which the target image is then scaled. This may involve determining the preset scale between the target image and the image to be registered.
  • the preset scale can be obtained based on the size of the image to be registered, the size of the target image, and the preset proportion of the target image in the image to be registered.
  • the preset proportion can be understood as the proportion of the registered image in the image to be registered.
  • the preset proportion is, for example, 15%, 18%, etc., which can be set according to needs, and there is no limitation here.
  • Both the size of the image to be registered and the size of the target image can be expressed as resolutions; for example, the size of the image to be registered is 1080*2160, the size of the target image is 256*256, and so on.
  • the preset scale can be calculated according to the following formula (1):
  • s0 = √( (wc × hc × a0%) / (wr × hr) )   (1)
  • where s0 is the preset scale, wc × hc is the size of the image to be registered, wr × hr is the size of the target image, and a0% is the predetermined preset proportion.
  • A series of scales can later be obtained based on the preset scale, and the target image can be scaled according to these scales, which can improve the registration accuracy of the image registration method provided by the embodiments of the present disclosure at different scales.
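One plausible reading of formula (1), assuming the preset scale is chosen so that the scaled target occupies the preset proportion of the image to be registered by area (the derivation step in the comment is an assumption, not stated verbatim above):

```python
import math

def preset_scale(wc, hc, wr, hr, a0):
    """Preset scale s0 such that the scaled target occupies proportion a0 of
    the image to be registered: (s0*wr) * (s0*hr) = a0 * wc * hc."""
    return math.sqrt((a0 * wc * hc) / (wr * hr))
```

For example, with a 1080*2160 image to be registered, a 256*256 target image, and a preset proportion of 15%, this yields a scale of about 2.31.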
  • Step S1212 Generate at least one derived scale based on the preset scale, wherein each derived scale is different and smaller than the preset scale.
  • At least one derived scale can be generated based on the preset scale.
  • In some embodiments, the derived scale can be larger than the preset scale or smaller than the preset scale.
  • In some embodiments, each derived scale is different from the others and smaller than the preset scale.
  • In this case, the derived scale may be a reduced version of the preset scale.
  • In other embodiments, each derived scale is different from the others and may be greater than the preset scale.
  • n−1 derived scales can be generated, that is, n scales (including the preset scale) can be obtained in total: s0, s1, s2, …, s(n−1).
  • For example, n can be set to 3.
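As an illustration only, the derived scales could be generated by repeatedly shrinking the preset scale; the shrink factor below is an assumption, since the exact relation between successive scales is not specified here:

```python
def derived_scales(s0, n=3, factor=0.5):
    """Generate n scales s0, s1, ..., s(n-1), each derived scale obtained by
    repeatedly multiplying by `factor` (an assumed relation)."""
    return [s0 * factor ** i for i in range(n)]
```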
  • Step S1213: Scale the target image based on each derived scale to obtain a corresponding scaled image.
  • the scale between the scaled image and the image to be registered is the corresponding derived scale.
  • the target image may be scaled based on each derived scale to obtain a corresponding scaled image, wherein the scale between the scaled image and the image to be registered is the corresponding derived scale.
  • the target image T 0 can be reduced based on the scale s 1 to obtain a reduced image T 1
  • the scale between the reduced image T 1 and the image to be registered is s 1 .
  • Step S122 Extract at least one first feature point from each of the target image and the at least one scaled image to obtain several first feature points.
  • Obtaining at least one scaled image together with the target image means that target images corresponding to different scales are obtained. Feature extraction can then be performed on these images, with at least one first feature point extracted from each image, so as to obtain the several first feature points.
  • the registration accuracy of the image registration method provided in the embodiments of the present disclosure at different scales can be further improved.
  • FIG. 3 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure. This embodiment is an extension of the above-mentioned "based on the direction information of the first matching point pair, obtain the first candidate transformation parameter corresponding to the first matching point pair, and use the first candidate transformation parameter that meets the preset requirements as the final transformation parameter", and includes the following steps:
  • Step S141 Select one of the first matching point pairs as a target matching point pair.
  • At least one group of first matching point pairs has been selected, and at this time one group of first matching point pairs may be selected as a target matching point pair to calculate the first candidate transformation parameters.
  • The aforementioned at least one group of first matching point pairs is selected as the target matching point pair in descending order of the matching degree of the first matching point pairs. That is to say, when selecting the target matching point pair from the first matching point pairs, selection starts from the highest matching degree.
  • In some embodiments, the matching degree is measured by the distance between the feature representations of the feature points, that is, the first matching point pair with the smallest distance is selected first. In this way, the first matching point pair most likely to meet the preset requirement can be calculated preferentially.
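A small sketch of this ordering step, assuming the feature representations are descriptor vectors compared by Euclidean distance (the specific metric is an assumption):

```python
import numpy as np

def sort_matches(first_descs, second_descs, pairs):
    """Order candidate matching point pairs by ascending descriptor distance,
    so the pair most likely to satisfy the preset requirement is tried first."""
    def distance(pair):
        i, j = pair
        return float(np.linalg.norm(first_descs[i] - second_descs[j]))
    return sorted(pairs, key=distance)
```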
  • Step S142 Obtain the first candidate transformation parameters corresponding to the target matching point pair based on the direction information of the target matching point pair.
  • The first candidate transformation parameters corresponding to the group of target matching point pairs can then be calculated.
  • FIG. 4 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure.
  • This embodiment is an extension of "obtain the first candidate transformation parameter corresponding to the target matching point pair based on the direction information of the target matching point pair" mentioned in the above steps, and includes the following steps S1421 to S1423.
  • Step S1421: Extract the first image region containing the first matching point in the target image, and extract the second image region containing the second matching point in the image to be registered.
  • the first matching point and the second matching point are respectively the first feature point and the second feature point in the first matching point pair.
  • a first image region of a certain shape may be selected with the first matching point as the center point.
  • an area with a size of 16 ⁇ 16 pixels may be selected as the first image area, or a circular area with a radius of 16 pixels may be selected as the first image area.
  • the determination of the second image area is the same as that of the first image area.
  • the center of the first image area may be determined as the center of the target image.
  • Step S1422 Determine the first deflection angle of the first image area and the second deflection angle of the second image area.
  • each pixel in the area can be used to obtain the deflection angle of the area.
  • the deflection angle obtained by using the first image area is the first deflection angle
  • the deflection angle obtained by using the second image area is the second deflection angle.
  • the first deflection angle is the directional angle between the line connecting the centroid of the first image area and the center of the first image area and a preset direction.
  • the second deflection angle is a directional angle between a line connecting the centroid of the second image area and the center of the second image area and a preset direction.
  • the directional included angle may include: an included angle at which the connecting line deflects to a preset direction in a clockwise direction, or an included angle at which the connecting line deflects in a counterclockwise direction to a preset direction, which is not limited here.
  • For example, it may be defined that when the connecting line deflects in a clockwise direction, the sign of the directional included angle is "−" (that is, a minus sign), and that when it deflects in a counterclockwise direction, the sign is "+" (that is, a plus sign); this is not limited here.
  • FIG. 5 is a schematic diagram of an embodiment of a deflection angle acquisition manner.
  • the solid-line rectangle represents the target image
  • the dotted-line rectangle within the solid-line rectangle represents the first image area
  • P is the centroid of the first image area
  • a rectangular coordinate system is established with the center of the first image area as the coordinate origin O
  • the line connecting the centroid P of the first image area and the center of the first image area is OP
  • the preset direction can be the x-axis of the above rectangular coordinate system
  • the directional included angle θ can be the angle measured counterclockwise from the preset direction to the connecting line OP; other cases can be deduced by analogy, and no more examples are given here.
  • the centroid (cx, cy) can be expressed as:
  • cx = Σ x·I(x, y) / Σ I(x, y)   (2)
  • cy = Σ y·I(x, y) / Σ I(x, y)   (3)
  • where (x, y) represents the offset of a pixel point in the first image area relative to the center of the first image area, I(x, y) represents the pixel value of that pixel point, and Σ represents summation, whose range is the pixels in the first image area.
  • the first deflection angle θ can be directly obtained by the following formula:
  • θ = arctan( Σ y·I(x, y) / Σ x·I(x, y) )
  • where (x, y) represents the offset of a pixel point in the first image area relative to the center of the first image area, I(x, y) represents the pixel value of that pixel point, and Σ represents summation, whose range is the pixels in the first image area.
  • the second deflection angle can also be calculated by the same method.
  • a first deflection angle of the first image region can be determined.
  • the method for calculating the second deflection angle of the second image area is the same as the above method for calculating the first deflection angle.
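The deflection angle described by formulas (2) and (3) can be sketched as follows, using `atan2` of the unnormalized centroid offsets, which equals the arctangent of cy/cx when the total intensity is positive:

```python
import math
import numpy as np

def deflection_angle(region):
    """Angle between the line from the patch center to its intensity centroid
    and the x-axis; offsets (x, y) are taken relative to the patch center."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = xs - (w - 1) / 2.0   # offset relative to the region center
    y = ys - (h - 1) / 2.0
    m10 = float(np.sum(x * region))  # numerator of cx (up to normalization)
    m01 = float(np.sum(y * region))  # numerator of cy (up to normalization)
    # atan2(m01, m10) equals atan2(cy, cx) when the total intensity is > 0
    return math.atan2(m01, m10)
```

Note that with image coordinates the y-axis points downward, so a centroid below the center gives a positive angle under this convention.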
  • Step S1423 Obtain a first candidate transformation parameter based on the first deflection angle and the second deflection angle.
  • the direction information of the target matching point pair can be determined based on the two deflection angles. For example, the difference between the first deflection angle and the second deflection angle may be used as the direction information of the target matching point pair. Then, based on the direction information of the target matching point pair and the coordinate information of the first feature point and the second feature point in the target matching point pair, the first candidate transformation parameters are obtained.
  • the first candidate transformation parameter may be obtained based on the scale corresponding to the first pair of matching points, the first deflection angle, and the second deflection angle.
  • The scale corresponding to the first matching point pair is the scale between the images where the first matching point pair is located, that is, the scale of the scaled target image to which the first matching point of the first matching point pair belongs, for example, the above-mentioned s0, s1, etc.
  • the above-mentioned step of "obtaining the first candidate transformation parameter based on the scale corresponding to the first matching point pair, the first deflection angle and the second deflection angle" may include the following steps 1 and 2.
  • Step 1 Obtain the angle difference between the first deflection angle and the second deflection angle.
  • the angle difference is, for example, the difference between the first deflection angle and the second deflection angle.
  • the formula (4) for calculating the angle difference is as follows:
  • Δθ = θT − θF   (4)
  • where Δθ is the angle difference, θT is the first deflection angle (T represents the target image), and θF is the second deflection angle (F represents the image to be registered).
  • Step 2 Obtain the first candidate transformation parameter based on the angle difference and the scale corresponding to the first matching point pair.
  • the first candidate transformation parameter is, for example, a homography matrix corresponding between the target image and the image to be registered.
  • the calculation formula (5) of the homography matrix is as follows:
  • H = Hl · HR · Hs · Hr   (5)
  • where H is the homography matrix corresponding between the target image and the image to be registered, that is, the first candidate transformation parameter; Hr represents the translation amount of the image to be registered relative to the target image; Hs represents the scale corresponding to the first matching point pair; HR represents the rotation amount of the image to be registered relative to the target image; and Hl represents the translation amount reset after the translation.
  • In this way, a set of target matching point pairs can be used to obtain the homography matrix corresponding between the target image and the image to be registered, and image registration can then be realized.
  • the corresponding relationship between the pixels on the target image and the pixels on the image to be registered can be established later.
  • the corresponding relationship between the pixels on the target image and the pixels on the image to be registered can be determined by formula (7):
  • [x′, y′, 1]ᵀ = H · [x, y, 1]ᵀ   (7)
  • where H represents the first candidate transformation parameter, (x, y) is a pixel point in the target image, and (x′, y′) is the corresponding pixel point in the image to be registered. That is to say, the first candidate transformation parameter can be used to perform coordinate transformation on a pixel point in the target image to obtain the corresponding pixel point in the image to be registered.
  • After the first candidate transformation parameter is obtained, it may be further judged whether the first candidate transformation parameter meets a preset requirement.
  • Step S143 Determine whether the first candidate transformation parameter corresponding to the target matching point pair satisfies a preset requirement.
  • For a detailed description of the preset requirements, please refer to the above step S14.
  • the first candidate transformation parameters may be optimized first, so as to obtain more accurate first candidate transformation parameters.
  • the target image can be marked as T, the image to be registered is marked as F, and the first candidate transformation parameter is marked as H.
  • the optimization formula (8) is as follows:
  • Score = f(T, F(H⁻¹))   (8)
  • where F(H⁻¹) represents the result of transforming the image F to be registered through the first candidate transformation parameter H, and the f function is used to calculate the similarity between T and F(H⁻¹), that is, the degree of similarity between the target image and the image to be registered; f can be, for example, a Sum of Squared Differences (SSD) function or a Normalized Cross Correlation (NCC) function, and H is iteratively adjusted to optimize the Score.
  • The iterative optimization method is, for example, the Gauss-Newton iterative method, the Levenberg-Marquardt algorithm, or the like.
  • Score represents the similarity score. The higher the score, the more similar the target image is to the image to be registered.
  • the expression (9) of the SSD function is as follows:
  • SSD(T, F) = Σx,y ( T(x, y) − F(x′, y′) )²   (9)
  • where Σx,y represents summation over the matching point pairs formed by the pixel point (x, y) in the target image T and the corresponding pixel point (x′, y′) determined by the first candidate transformation parameter H in the image F to be registered; that is, the squared errors of the pixel values of the matched point pairs are summed.
  • the expression (10) of the NCC function is as follows:
  • NCC(T, F) = Σx,y (T(x, y) − T̄)(F(x′, y′) − F̄) / √( Σx,y (T(x, y) − T̄)² · Σx,y (F(x′, y′) − F̄)² )   (10)
  • where Σx,y represents normalized cross-correlation processing over the pixel values of the matching point pairs formed by the pixel point (x, y) in the target image T and the corresponding pixel point (x′, y′) determined by the first candidate transformation parameter H in the image F to be registered; T̄ represents the average of the pixel values of the pixel points (x, y) in the target image, and F̄ represents the average of the pixel values of the pixel points (x′, y′) in the image to be registered.
  • The value of NCC(T, F) ranges from −1 to 1, and the closer NCC(T, F) is to 1, the higher the similarity between the target image and the image to be registered.
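Putting formulas (7) and (10) together, a candidate transformation can be scored by sampling the image to be registered at the transformed coordinates (nearest-neighbor lookup here, as an assumption) and computing NCC over the matched pixel pairs:

```python
import numpy as np

def warp_score(T, F, H):
    """Score a candidate transformation: for each pixel (x, y) of the target T,
    look up the matched pixel (x', y') = H(x, y) in F (nearest neighbor), then
    compute NCC over the matched pairs that land inside F."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    q = H @ pts
    xq = np.rint(q[0] / q[2]).astype(int)
    yq = np.rint(q[1] / q[2]).astype(int)
    valid = (xq >= 0) & (xq < F.shape[1]) & (yq >= 0) & (yq < F.shape[0])
    a = T.ravel()[valid].astype(float)
    b = F[yq[valid], xq[valid]].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0
```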
  • step S144 may be executed when the first candidate transformation parameter meets the preset requirement. In the case that the first candidate transformation parameter does not meet the preset requirement, step S145 may be executed.
  • Step S144 In response to the first candidate transformation parameter corresponding to the target matching point pair meeting the preset requirement, use the first candidate transformation parameter corresponding to the target matching point pair as the final transformation parameter.
  • the terminal may use the candidate parameter as the final transformation parameter between the target image and the image to be registered.
  • Step S145: In response to the first candidate transformation parameter corresponding to the target matching point pair not meeting the preset requirements, select a new group of first matching point pairs as the target matching point pair, and re-execute the step of obtaining the first candidate transformation parameter corresponding to the target matching point pair based on the direction information of the target matching point pair, and its subsequent steps.
  • In response to the first candidate transformation parameter corresponding to the target matching point pair not meeting the preset requirements, the terminal may use a new first matching point pair as the target matching point pair to calculate new first candidate transformation parameters, that is, re-execute the step of obtaining the first candidate transformation parameter corresponding to the target matching point pair based on the direction information of the target matching point pair, and its subsequent steps.
  • the final transformation parameters between the target image and the image to be registered can be obtained to achieve image registration.
  • the target image may be expanded to an image having the same shape as the image to be registered.
  • the terminal can expand the target image to an image having the same shape as the image to be registered, and use this image as a new target image.
  • FIG. 6 is a schematic diagram of an embodiment of expanding the target image.
  • If the target image is a circle, the circumscribed rectangle of the circle can be obtained; the circle in the circumscribed rectangle is the target image, and the pixel points between the circle and the circumscribed rectangle can take any pixel value, so as to obtain a new target image.
  • For example, the area between the circle and the circumscribed rectangle can be uniformly filled with black, or the area between the circle and the circumscribed rectangle can be uniformly filled with white.
  • This is not limited here. Please continue to refer to FIG. 6.
  • In the case where the target image is a circle and the image to be registered is a rectangle, the pixel points between the circle and a rectangle containing the circle can take any pixel value, so as to obtain a new target image; the rectangle containing the circle is not limited to the circumscribed rectangle.
  • When the target image is of another shape, or when the image to be registered is of another shape, this can be deduced by analogy. Therefore, image registration can be completed even when the target image has an arbitrary shape, which is beneficial to improving the robustness of image registration.
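A sketch of this expansion for a circular target, filling the area between the circle and its circumscribed rectangle with a uniform value (the mask-based interface is illustrative):

```python
import numpy as np

def expand_circle_to_square(circle_img, mask, fill=0.0):
    """Embed a circular target image (given by a boolean `mask` of the circle)
    in its circumscribed rectangle, filling pixels outside the circle with an
    arbitrary uniform value (e.g. black)."""
    out = np.full(circle_img.shape, float(fill))
    out[mask] = circle_img[mask]
    return out
```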
  • In the embodiments of the present disclosure, the rotation angle of the image to be registered relative to the target image is obtained, and this rotation angle information can then be used to obtain the final transformation parameters between the target image and the image to be registered, finally realizing image registration.
  • Fewer feature points can be used for image registration, so the registration is not affected by the proportion of the target image in the image to be registered; even if the proportion of the target image in the image to be registered is relatively small, accurate image registration can still be achieved, thereby improving the accuracy of image registration.
  • FIG. 7 is a schematic flowchart of an optional image registration method provided by an embodiment of the present disclosure. This embodiment is a further extension of the embodiment provided in FIG. 1 above. Before step S12 of the above embodiment is performed, the following steps may also be performed:
  • Step S21 Select several sets of second matching point pairs in the target image and the image to be registered.
  • A group of second matching point pairs includes a first feature point extracted from the target image and a second feature point extracted from the image to be registered.
  • the target image may include a scaled target image generated based on a series of different scales, such as the aforementioned derived scales.
  • The images to be registered may also include scaled images to be registered obtained based on a series of different scales.
  • The obtained series of target images at different scales can be defined as the target image pyramid, and the series of images to be registered at different scales can be defined as the image pyramid to be registered. That is, when performing feature extraction on the target image or the image to be registered, feature extraction may be performed on all images in the target image pyramid or the image pyramid to be registered, thereby obtaining a series of first feature points and second feature points. Then, several groups of second matching point pairs can be selected.
  • several groups of second matching point pairs may be selected based on the matching degree between the first feature point and the second feature point.
  • For the selection method, refer to step S13 in the above-mentioned embodiment.
  • Step S22: Synthesize the position information of several groups of second matching point pairs to obtain second candidate transformation parameters.
  • the second candidate transformation parameters can be obtained according to the position information of these second matching point pairs.
  • For example, a Random Sample Consensus (RANSAC) algorithm can be used to synthesize the position information of the second matching point pairs.
  • the second candidate transformation parameter is, for example, the homography matrix H corresponding to the target image and the image to be registered.
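To illustrate the idea of synthesizing position information with RANSAC, here is a toy loop for a translation-only model; a real implementation would sample four point pairs per iteration and estimate a full homography:

```python
import random
import numpy as np

def ransac_translation(pts_t, pts_f, iters=100, tol=2.0, seed=0):
    """Toy RANSAC: repeatedly hypothesize a translation from one matching point
    pair and keep the hypothesis with the most inliers (pairs within `tol`)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        i = rng.randrange(len(pts_t))
        t = pts_f[i] - pts_t[i]                       # hypothesized translation
        err = np.linalg.norm(pts_t + t - pts_f, axis=1)
        inliers = int(np.sum(err < tol))
        if inliers > best_inliers:
            best, best_inliers = t, inliers
    return best, best_inliers
```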
  • Step S23 judging whether the second candidate transformation parameter satisfies a preset requirement.
  • The method of judging whether the second candidate transformation parameter meets the preset requirement is, for example, judging whether the similarity between the target region corresponding to the second candidate transformation parameter and the target image satisfies the preset similarity requirement.
  • the target region corresponding to the second candidate transformation parameter is the region corresponding to the target image determined in the image to be registered by using the second candidate transformation parameter.
  • step S24 may be executed; if the second candidate transformation parameter does not meet the preset requirement, step S25 may be executed.
  • Step S24 Use the second candidate transformation parameter as the final transformation parameter.
  • this candidate parameter can be used as the final transformation parameter between the target image and the image to be registered. After obtaining the final transformation parameters, the image registration method can then be stopped.
  • Step S25: Execute the step of extracting several first feature points of the target image and several second feature points of the image to be registered, and its subsequent steps.
  • In this case, the above step of extracting several first feature points of the target image and several second feature points of the image to be registered, and its subsequent steps, can be continued.
  • step S21 when step S21 is performed, the first feature point and the second feature point may have been extracted, so in subsequent steps, the step of extracting feature points may not be performed again. If the above steps also calculate the matching degree between the first feature point and the second feature point, the step of calculating the matching degree between the first feature point and the second feature point may not be performed in the subsequent steps. In this way, the running speed of the image registration method provided by the embodiment of the present disclosure can be improved.
  • Through the above method, it is possible to first perform image registration using the feature points and feature representations of the images, and then, when accurate image registration cannot be performed using the feature points and feature representations (for example, due to the proportion of the target image in the image to be registered), use the direction information of the feature points for image registration, so as to reduce image registration failures and improve the accuracy of image registration.
  • FIG. 8 is a schematic diagram of an optional frame of an image registration device provided by an embodiment of the present disclosure.
  • the image registration device 80 includes an image acquisition part 81 , a feature extraction part 82 , a feature matching part 83 and a determination part 84 .
  • the image acquisition part is configured to acquire the target image and the image to be registered.
  • the feature extraction part is configured to extract a number of first feature points of the target image and a number of second feature points of the image to be registered.
  • The feature matching part is configured to select at least one group of first matching point pairs based on the matching degree between the first feature points and the second feature points, wherein each group of first matching point pairs includes a first feature point and a second feature point.
  • the determining part is configured to obtain the final transformation parameters between the target image and the image to be registered based on the direction information of the first matching point pair.
  • The above feature extraction part is further configured to: scale the target image to obtain at least one scaled image with different resolutions; and extract at least one first feature point from each of the target image and the at least one scaled image to obtain several first feature points.
  • the above-mentioned matching degree between the first feature point and the second feature point is obtained based on the distance between the feature representations of the first feature point and the second feature point.
  • The above-mentioned feature extraction part is further configured to: determine the preset scale between the target image and the image to be registered; generate at least one derived scale based on the preset scale, wherein each derived scale is different and smaller than the preset scale; and, based on each derived scale, scale the target image to obtain the corresponding scaled image.
  • the above-mentioned feature extraction part is further configured to obtain a preset scale based on the size of the image to be registered, the size of the target image, and the preset proportion of the target image in the image to be registered.
  • the above-mentioned determining part is further configured to: obtain the first candidate transformation parameters corresponding to the first matching point pair based on the direction information of the first matching point pair, and use the first candidate transformation parameters that meet the preset requirements as The final transformation parameters.
  • the above-mentioned determining part is further configured to: select one of the first matching point pairs as the target matching point pair; obtain the first candidate transformation parameter corresponding to the target matching point pair based on the direction information of the target matching point pair; Judging whether the first candidate transformation parameter corresponding to the target matching point pair meets a preset requirement; in response to the first candidate transformation parameter corresponding to the target matching point pair meeting the preset requirement, the first candidate transformation parameter corresponding to the target matching point pair Candidate transformation parameters serve as final transformation parameters.
  • the aforementioned at least one group of first matching point pairs is selected as the target matching point pair according to the order of the matching degree of the first matching point pair from high to low.
  • the image registration device 80 also includes a second determining part.
  • the second determining part is configured to select a new group of first matching point pairs as the target matching point pair in response to not satisfying the preset requirements, and re-execute the direction information based on the target matching point pair to obtain the target matching point pair Corresponding first candidate transformation parameters and subsequent steps thereof; in response to the determination part failing to find the first candidate transformation parameters that meet the preset requirements within the preset time, the second determination part is further configured to determine that the final transformation cannot be obtained parameter.
  • the above-mentioned determining part is further configured to: extract the first image region containing the first matching point in the target image, and extract the second image region containing the second matching point in the image to be registered, wherein the first The matching point and the second matching point are respectively the first feature point and the second feature point in the first matching point pair; determine the first deflection angle of the first image area and the second deflection angle of the second image area; based on the first The deflection angle and the second deflection angle are used to obtain the first candidate transformation parameters.
  • the above-mentioned determining part is further configured to: obtain the first candidate transformation parameters based on the scale corresponding to the first pair of matching points, the first deflection angle, and the second deflection angle, wherein the scale corresponding to the first pair of matching points is the scale between the images where the first matching point pair is located.
  • the above-mentioned determining part is further configured to: obtain the angle difference between the first deflection angle and the second deflection angle; and obtain the first candidate transformation parameter based on the angle difference and the scale corresponding to the first matching point pair.
  • the center of the above-mentioned first image area is the center of the target image.
  • the above-mentioned first deflection angle is the directed angle between the preset direction and the line connecting the centroid of the first image area and the center of the first image area; the second deflection angle is the directed angle between the preset direction and the line connecting the centroid of the second image area and the center of the second image area.
  • the image registration device 80 also includes a second registration part, and the second registration part is configured to: select several groups of second matching point pairs from the target image and the image to be registered; synthesize the position information of the several groups of second matching point pairs to obtain the second candidate transformation parameter; if the second candidate transformation parameter satisfies the preset requirements, take the second candidate transformation parameter as the final transformation parameter; and if the second candidate transformation parameter does not satisfy the preset requirements, perform the extraction of several first feature points of the target image and several second feature points of the image to be registered and the subsequent steps thereof.
  • the image registration device 80 also includes an image expansion part.
  • the image expansion part is configured to expand the target image into an image with the same shape as the image to be registered.
  • Fig. 9 is a schematic diagram of an optional frame of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device 90 includes a memory 91 and a processor 92 coupled to each other, and the processor 92 is configured to execute program instructions stored in the memory 91 to implement the steps of any one of the image registration method embodiments described above.
  • the electronic device 90 may include, but is not limited to: a microcomputer, a server.
  • the electronic device 90 may also include mobile devices such as notebook computers and tablet computers, which are not limited here.
  • the processor 92 is used to control itself and the memory 91 to implement the steps of any one of the above image registration method embodiments.
  • the processor 92 may also be called a CPU (Central Processing Unit, central processing unit).
  • the processor 92 may be an integrated circuit chip with signal processing capability.
  • the processor 92 can also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the processor 92 may also be implemented jointly by multiple integrated circuit chips.
  • FIG. 10 is a schematic diagram of an optional frame of a computer-readable storage medium provided by an embodiment of the present disclosure.
  • the computer-readable storage medium 100 stores program instructions 101 that can be executed by a processor, and the program instructions 101 are used to implement the steps of any one of the image registration method embodiments described above.
  • the above solution can help improve the accuracy of image registration.
  • the functions or parts included in the device provided by the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments, and the implementation manner may refer to the descriptions of the above method embodiments.
  • the disclosed method and device may be implemented in other ways.
  • the device implementations described above are only illustrative.
  • the division of parts or units is only a logical function division.
  • units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separated, and a component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or may also be distributed to network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the embodiments of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • the embodiments of the present disclosure disclose an image registration method and a related apparatus, device, and storage medium, wherein the method includes: acquiring a target image and an image to be registered; extracting several first feature points of the target image and several second feature points of the image to be registered; selecting at least one group of first matching point pairs based on the degree of matching between the first feature points and the second feature points, wherein each group of first matching point pairs includes a first feature point and a second feature point; and obtaining the final transformation parameters between the target image and the image to be registered based on the direction information of the first matching point pairs.
  • image registration can be realized, and the accuracy of image registration can be improved.

Abstract

一种图像配准方法及相关装置、设备和存储介质,其中,该方法包括:获取目标图像和待配准图像(S11);提取目标图像的若干第一特征点和待配准图像的若干第二特征点(S12);基于第一特征点和第二特征点之间的匹配程度,选出至少一组第一匹配点对,其中,每组第一匹配点对包括第一特征点和第二特征点(S13);基于第一匹配点对的方向信息,得到目标图像与待配准图像之间的最终变换参数(S14)。

Description

图像配准方法及相关装置、设备和存储介质
相关申请的交叉引用
本公开基于申请号为202110711211.6、申请日为2021年06月25日、申请名称为“图像配准方法及相关装置、设备和存储介质”的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本公开作为参考。
技术领域
本公开涉及人工智能技术领域,涉及一种图像配准方法及相关装置、设备和存储介质。
背景技术
随着电子信息技术的发展,增强现实(Augmented Reality,AR)、虚拟现实(Virtual Reality,VR)等成为计算机视觉领域中的应用热点,通过相机作为输入设备,并利用图像算法处理,可以数字化周围环境,从而获取与真实环境进行交互的使用体验。图像配准是AR、VR等计算机视觉领域中的研究重点,通过图像配准技术可以获取相机拍摄到的待配准图像与目标图像之间的变换参数,从而后续可以通过变换参数,得到目标图像在待配准图像中的位置。
当前,已有的图像配准技术在目标图像在待配准图像中占比较大的情况下,得到的配准参数较为准确,而当目标图像在待配准图像中占比较小,现有的图像配准技术,往往无法准确配准。
因此,如何提高图像配准的准确性成为亟待解决的问题。
发明内容
本公开实施例提供一种图像配准方法及相关装置、设备和存储介质。
第一方面,本公开实施例提供了一种图像配准方法,包括:获取目标图像和待配准图像;提取目标图像的若干第一特征点和待配准图像的若干第二特征点;基于第一特征点和第二特征点之间的匹配程度,选出至少一组第一匹配点对,其中,每组第一匹配点对包括第一特征点和第二特征点;基于第一匹配点对的方向信息,得到目标图像与待配准图像之间的最终变换参数。
因此,通过获得至少一组第一匹配点对,并计算第一匹配点对的方向信息,以此来获得待配准图像相对于目标图像的旋转角度,然后就可以利用该旋转角度信息得到目标图像与待配准图像之间的最终变换参数,最终实现图像配准。而且,通过该方法,能够利用较少的特征点来进行图像配准,故配准不会受到目标图像在待配准图像中的占比影响,即使目标图像在待配准图像中的占比较小也能实现准确的图像配准,故能够提高图像配准的准确性。
在一些实施例中,上述的提取目标图像的若干第一特征点,包括:对目标图像进行缩放,得到不同分辨率的至少一张缩放图像;从目标图像和至少 一张缩放图像中的每张图像中分别提取至少一个第一特征点,以得到若干第一特征点;和/或,第一特征点和第二特征点之间的匹配程度是基于第一特征点和第二特征点的特征表示之间的距离得到的。
因此,通过获得不同尺度的目标图像(包括缩放后的目标图像),可以进一步提高本公开实施例提供的图像配准方法针对不同目标图像尺度情况下配准的准确率。
在一些实施例中,上述的对目标图像进行缩放,得到不同分辨率的至少一张缩放图像,包括:确定目标图像与待配准图像之间的预设尺度;基于预设尺度生成至少一个衍生尺度,其中,每个衍生尺度不同,且均小于预设尺度;基于每个衍生尺度,对目标图像进行缩放,得到对应的缩放图像。
因此,通过得到至少一个小于预设尺度的衍生尺度,可以得到至少一张小尺度的目标图像,由此可以在后续的配准中,提高小尺度情况下的图像配准的准确度。
在一些实施例中,上述的确定目标图像与待配准图像之间的预设尺度,包括:基于待配准图像的尺寸、目标图像的尺寸以及目标图像在待配准图像中的预设占比,得到预设尺度。
因此,通过确定预设尺度,在后续可以基于预设尺度得到一系列的尺度,并依据这些尺度来对目标图像进行缩放,可以提高本公开实施例提供的图像配准方法针对不同目标图像尺度情况下配准的准确率。
在一些实施例中,上述的基于第一匹配点对的方向信息,得到目标图像与待配准图像之间的最终变换参数,包括:基于第一匹配点对的方向信息,得到与第一匹配点对相对应的第一候选变换参数,并将满足预设要求的第一候选变换参数作为最终变换参数。
因此,通过对第一候选变换参数进行筛选,可以得到更为准确的最终变换参数。
在一些实施例中,上述的基于第一匹配点对的方向信息,得到与第一匹配点对相对应的第一候选变换参数,并将满足预设要求的第一候选变换参数作为最终变换参数,包括:选择其中一组第一匹配点对作为目标匹配点对;基于目标匹配点对的方向信息,得到与目标匹配点对相对应的第一候选变换参数;判断目标匹配点对所对应的第一候选变换参数是否满足预设要求;响应于所述目标匹配点对所对应的第一候选变换参数满足预设要求,将目标匹配点对所对应的第一候选变换参数作为最终变换参数。
因此,通过利用一组特征点点对,可以得到目标图像与待配准图像之间的最终变换参数,实现图像配准。
在一些实施例中，上述的至少一组第一匹配点对是按照第一匹配点对的匹配程度从高到低的顺序选择作为目标匹配点对；和/或，在所述判断所述目标匹配点对所对应的第一候选变换参数是否满足预设要求之后，所述方法还包括：响应于所述目标匹配点对所对应的第一候选变换参数不满足所述预设要求，选择新的一组所述第一匹配点对作为所述目标匹配点对，并重新执行所述基于所述目标匹配点对的方向信息，得到与所述目标匹配点对相对应的第一候选变换参数及其后续步骤；响应于预设时间内未找出满足所述预设要求的第一候选变换参数，确定无法得到最终变换参数。
因此,通过按照第一匹配点对的匹配程度从高到低的顺序选择目标匹配点对,可以优先计算出最有可能满足预设要求的第一匹配点对。另外,通过设定在预设时间内未找出满足预设要求的第一候选变换参数的情况下,确定无法得到最终变换参数,此时终端可以实施其他步骤,来解决延迟,无响应等情况。
在一些实施例中，上述的基于第一匹配点对的方向信息，得到与第一匹配点对相对应的第一候选变换参数，包括：在目标图像中提取包含第一匹配点的第一图像区域，并在待配准图像中提取包含第二匹配点的第二图像区域，其中，第一匹配点和第二匹配点分别为第一匹配点对中的第一特征点和第二特征点；确定第一图像区域的第一偏转角度和第二图像区域的第二偏转角度；基于第一偏转角度和第二偏转角度，得到第一候选变换参数。
因此,通过计算第一图像区域的第一偏转角度和第二图像区域的第二偏转角度,可以利用这些第一偏转角度和第二偏转角度得到第一候选变换参数,以实现后续的图像配准。
在一些实施例中,上述的基于第一偏转角度和第二偏转角度,得到第一候选变换参数,包括:基于第一匹配点对所对应的尺度、第一偏转角度和第二偏转角度,得到第一候选变换参数,其中,第一匹配点对所对应的尺度为第一匹配点对所在的图像之间的尺度。
因此,可以通过利用目标匹配点对的方向信息,目标匹配点对中的第一特征点与第二特征点的坐标信息,来得到第一候选变换参数,以实现后续的图像配准。
在一些实施例中,上述的基于第一匹配点对所对应的尺度、第一偏转角度和第二偏转角度,得到第一候选变换参数,包括:获取第一偏转角度与第二偏转角度之间的角度差;基于角度差和第一匹配点对所对应的尺度,得到第一候选变换参数。
因此,通过计算第一偏转角度与第二偏转角度之间的角度差,可以得到第一候选变换参数,以实现后续的图像配准。
在一些实施例中,上述的第一图像区域的中心为目标图像的中心;和/或,第一偏转角度为第一图像区域的形心与第一图像区域的中心的连线与预设方向之间的有向夹角;第二偏转角度为第二图像区域的形心与第二图像区域的中心的连线与预设方向之间的有向夹角。
因此,通过计算图像区域形心与第一图像区域的中心的连线与预设方向之间的有向夹角,可以求得第一偏转角度和第二偏转角度。
在一些实施例中,上述的在提取目标图像的若干第一特征点和待配准图像的若干第二特征点之前,方法还包括:在目标图像和待配准图像中选择若 干组第二匹配点对;综合若干组第二匹配点对的位置信息,得到第二候选变换参数;在第二候选变换参数满足预设要求的情况下,将第二候选变换参数作为最终变换参数;在第二候选变换参数不满足预设要求的情况下,执行提取目标图像的若干第一特征点和待配准图像的若干第二特征点及其后续步骤。
因此,通过上述方法,可以实现先利用图像的特征点以及特征表示进行图像配准,在利用图像的特征点以及特征表示无法进行准确图像配准(例如目标图像与待配准图像之间的占比较小)的情况下,再利用特征点的方向信息进行图像配准,以减少图像配准失败的情况,提高图像配准的准确性。
在一些实施例中,上述的预设要求包括:相应候选变换参数所对应的目标区域与目标图像之间的相似度满足预设相似度要求,相应候选变换参数所对应的目标区域为利用相应候选变换参数在待配准图像中确定的与目标图像对应的区域。
因此,通过计算待配准图像中确定的与目标图像对应的区域与目标图像的相似度,可以以此来确定第一候选变换参数的准确程度,进而能够从中选择满足要求的第一候选变换参数作为最终变换参数。
在一些实施例中,上述的在所提取目标图像的若干第一特征点和待配准图像的若干第二特征点之前,方法还包括:响应于目标图像的形状与待配准图像的形状不同,将目标图像外扩为与待配准图像形状相同的图像。
因此,上述外扩目标图像的方法,可以在目标图像为任意形状的情况下,也能够完成图像配准,有利于提高图像配准的鲁棒性。
第二方面,本公开实施例提供了一种图像识别装置,包括:图像获取部分、特征提取部分、特征匹配部分和确定部分。图像获取部分被配置为获取目标图像和待配准图像;特征提取部分被配置为提取目标图像的若干第一特征点和待配准图像的若干第二特征点;特征匹配部分被配置为基于第一特征点和第二特征点之间的匹配程度,选出至少一组第一匹配点对,其中,每组第一匹配点对包括第一特征点和第二特征点;确定部分被配置为基于第一匹配点对的方向信息,得到目标图像与待配准图像之间的最终变换参数。
第三方面,本公开实施例提供了一种电子设备,包括相互耦接的存储器和处理器,处理器用于执行存储器中存储的程序指令,以实现上述第一方面中的图像配准方法。
第四方面,本公开实施例提供了一种计算机可读存储介质,其上存储有程序指令,程序指令被处理器执行时实现上述第一方面中的图像配准方法。
第五方面,本公开实施例提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行时实现上述第一方面中的图像配准方法。
上述方案,通过获得至少一组第一匹配点对,并计算第一匹配点对的方向信息,以此来获得待配准图像相对于目标图像的旋转角度,然后就可以利用该旋转角度信息得到目标图像与待配准图像之间的最终变换参数,最终实现图像配准。而且,通过该方法,能够利用较少的特征点来进行图像配准, 故配准不会受到目标图像在待配准图像中的占比影响,即使目标图像在待配准图像中的占比较小也能实现准确的图像配准,故能够提高图像配准的准确性。
附图说明
图1是本公开实施例提供的图像配准方法的一个可选的流程示意图;
图2是本公开实施例提供的图像配准方法的一个可选的流程示意图;
图3是本公开实施例提供的图像配准方法的一个可选的流程示意图;
图4是本公开实施例提供的图像配准方法的一个可选的流程示意图;
图5是偏转角度获取方式一实施例的示意图;
图6是对目标图像进行外扩的一实施例的示意图;
图7是本公开实施例提供的图像配准方法的一个可选的流程示意图;
图8是本公开实施例提供的图像配准装置的一个可选的框架示意图;
图9是本公开实施例提供的电子设备的一个可选的框架示意图;
图10为本公开实施例提供的计算机可读存储介质的一个可选的框架示意图。
具体实施方式
下面结合说明书附图,对本公开实施例的方案进行详细说明。
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、接口、技术之类的细节,以便透彻理解本公开实施例。
本文中术语“系统”和“网络”在本文中常被可互换使用。本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。此外,本文中的“多”表示两个或者多于两个。
请参阅图1,图1是本公开实施例提供的图像配准方法的一个可选的流程示意图。可以包括如下步骤:
步骤S11:获取目标图像和待配准图像。
在一个实施场景中,待配准图像可以是相机拍摄到的图像。例如,在AR、VR等应用场景中,待配准图像可以是诸如手机、平板电脑、智能眼镜等电子设备所拍摄到的图像;或者,在视频监控场景中,待配准图像可以是监控相机所拍摄到的图像,在此不做限定。其他场景可以以此类推,在此不再一一举例。待配准图像中可以包括目标图像。当待配准图像中包括目标图像时,可以实现目标图像和待配准图像的配准。
目标图像可以是平面上的图像,例如是在平整的地面上,平整的墙面上。目标图像可以预先获取的,也即目标图像可以在执行本公开实施例提供的图像配准方法之前预先确定。目标图像可以根据实际应用情况进行设置。例如, 在需要确定待配准图像中建筑物A的位置的情况下,可以预先获取建筑物A的图像;或者,在需要确定待配准图像中人物B的位置的情况下,可以预先获取人物B的图像,其他情况可以以此类推,在此不再一一举例。在另一些实施场景中,可以从已经获取的图像中,确定目标图像。例如,可以预先对建筑物的内部情况进行拍照,以得到一定数量的建筑物内部图像,然后在这些图像中,选择特定的作为目标图像,如图像中包括一幅画,则可以将这幅画作为目标图像。
步骤S12:提取目标图像的若干第一特征点和待配准图像的若干第二特征点。
得到目标图像和待配准图像以后，可以对目标图像和待配准图像进行特征提取的操作，以获得关于目标图像和待配准图像的特征信息。在一个实施场景中，可以利用一些特征提取算法进行特征提取，以获得图像中的特征点，特征点的数量不做限制。特征提取算法例如是FAST(features from accelerated segment test)算法，SIFT(Scale-invariant feature transform)算法，ORB(Oriented FAST and Rotated BRIEF)算法等等。在一个可选的实施场景中，特征提取算法为ORB算法。另外，在得到特征点以后，还会得到与每个特征点对应的特征表示，特征表示例如是特征向量。因此，每一个特征点，均有一个与其对应的特征表示。对目标图像进行特征提取，得到的特征点定义为第一特征点。对待配准图像进行特征提取，得到的特征点定义为第二特征点。在一个可选的实施场景中，在对目标图像进行特征提取时，特征提取算法是FAST算法，此时可以基于提取的特征点的响应值的大小进行排序，然后选择排名前Y的Y个第一特征点。
在一个实施场景中,以上实施例提及的通过特征提取算法进行特征提取得到的特征点,都可以认为是与目标图像位于同一平面。
步骤S13:基于第一特征点和第二特征点之间的匹配程度,选出至少一组第一匹配点对,其中,每组第一匹配点对包括第一特征点和第二特征点。
第一特征点和第二特征点之间的匹配程度,可以是第一特征点的特征表示与第二特征点的特征表示的匹配程度。在一个实施场景中,可以计算每一个第一特征点与每一个第二特征点的匹配程度,以此获得每一个第一特征点与每一个第二特征点之间的匹配程度。
在一个实施场景中,第一特征点和第二特征点之间的匹配程度是基于第一特征点和第二特征点的特征表示之间的距离得到的。由此,可以通过计算特征点的特征表示之间的距离,以获得特征点之间的匹配程度信息。例如,两个特征点(一个是第一特征点,一个是第二特征点)特征表示之间的距离的大小,即为匹配程度,距离越近则越匹配;距离最近的,则可以认为是最匹配的。在一个可选的实施场景中,特征表示为特征向量,特征表示之间的距离即是特征向量之间的距离。特征向量之间的距离例如是欧氏距离、余弦相似度、标准化欧氏距离等等,此处不做限制。
基于第一特征点和第二特征点之间的匹配程度,选出至少一组第一匹配 点对。每组第一匹配点对包括第一特征点和第二特征点。在选择时,可以按照匹配程度的从高到低来选择,选出一定数量的第一匹配点对。
在一个可选的实施场景中，一共有N个第一特征点(对应有N个特征表示)，M个第二特征点(对应有M个特征表示)，通过计算每一个第一特征点与每一个第二特征点之间的距离，可以得到N×M个对应的距离，即N×M个第一匹配点对。在得到N×M个第一匹配点对后，可以对这N×M个第一匹配点对的距离按照大小进行排序，然后按照距离从小到大的顺序，选择第一匹配点对。例如，当N=3，M=5时，可以获得15组第一匹配点对，以及这些点对之间的距离，按照距离从小到大顺序，选择一定数量的第一匹配点对。
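基于上述以特征表示之间的距离衡量匹配程度、并按距离从小到大选取第一匹配点对的过程，下面给出一个极简的示意性实现。该草图仅为帮助理解：假设特征表示为浮点特征向量、以欧氏距离衡量匹配程度，函数名与参数均为示例自拟，并非本公开限定的实现方式。

```python
import numpy as np

def select_matching_pairs(desc_t, desc_f, top_k):
    # desc_t: N 个第一特征点的特征向量, 形状 (N, D)
    # desc_f: M 个第二特征点的特征向量, 形状 (M, D)
    # 计算 N×M 个欧氏距离, 距离越小表示匹配程度越高
    dists = np.linalg.norm(desc_t[:, None, :] - desc_f[None, :, :], axis=2)
    # 将全部候选点对按距离从小到大排序, 选出前 top_k 组第一匹配点对
    order = np.dstack(np.unravel_index(np.argsort(dists, axis=None), dists.shape))[0]
    return [(int(i), int(j), float(dists[i, j])) for i, j in order[:top_k]]
```

例如 N=3、M=5 时共有 15 组候选点对，按距离升序选取所需数量即可。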
步骤S14:基于第一匹配点对的方向信息,得到目标图像与待配准图像之间的最终变换参数。
在得到第一匹配点对以后,可以计算第一匹配点对的方向信息。第一匹配点对的方向信息可以根据第一匹配点对中的第一特征点和第二特征点的特征点方向来得到。在一个实施例中,第一匹配点对的方向信息可以是第一特征点的方向与第二特征点的方向的差值。例如,在特征点是通过ORB算法提取得到的情况下,第一特征点的方向是角点方向角,第二特征点的方向也是角点方向角,则第一匹配点对的方向信息可以为第一特征点的角点方向角与第二特征点的角点方向角的差值。通过计算第一匹配点对的方向信息,可以求得待配准图像相对于目标图像的旋转角度。
在得到第一匹配点对的方向信息以后,后续就可以利用第一匹配点对的方向信息代表的待配准图像相对于目标图像的旋转角度,来进行图像配准,最终得到目标图像与待配准图像之间的最终变换参数。最终变换参数例如是目标图像与待配准图像对应的单应性矩阵。
在一个可选的实施场景中，可以利用第一匹配点对的方向信息，以及第一匹配点对中的第一特征点与第二特征点的坐标信息，例如是像素坐标信息，来得到目标图像与待配准图像之间的最终变换参数。
因此,通过获得至少一组第一匹配点对,并计算第一匹配点对的方向信息,以此来获得待配准图像相对于目标图像的旋转角度,然后就可以利用该旋转角度信息得到目标图像与待配准图像之间的最终变换参数,最终实现图像配准。而且,通过该方法,能够利用较少的特征点来进行图像配准,故配准不会受到目标图像在待配准图像中的占比影响,即使目标图像在待配准图像中的占比较小也能实现准确的图像配准,故能够提高图像配准的准确性。
在一个实施场景中,上述的基于第一匹配点对的方向信息,得到目标图像与待配准图像之间的最终变换参数,可以通过以下方案实现:基于第一匹配点对的方向信息,得到与第一匹配点对相对应的第一候选变换参数,并将满足预设要求的第一候选变换参数作为最终变换参数。
可以理解的,在存在多对第一匹配点对的情况下,可以根据该多对第一匹配点对,得到多个目标图像与待配准图像之间的变换参数,这些变换参数 可以定义为第一候选变换参数。此时,可以通过判断这些第一候选变换参数能否满足预设要求,来确定是否将第一候选变换参数作为最终变换参数。在满足预设条件时,再将满足预设要求的第一候选变换参数作为最终变换参数。以此,通过对第一候选变换参数进行筛选,可以得到更为准确的最终变换参数。
在一个实施场景中,预设要求包括:相应候选变换参数所对应的目标区域与目标图像之间的相似度满足预设相似度要求。相应候选变换参数所对应的目标区域是利用相应候选变换参数在待配准图像中确定的与目标图像对应的区域。
待配准图像中确定的与目标图像对应的区域，可以是通过在待配准图像中，确定与目标图像的边缘点对应的点来确定。例如，目标图像是四边形，其边缘点可以是四个角所对应的点，此时可以在待配准图像中确定与目标图像的边缘点对应的点，以此在待配准图像中确定与目标图像对应的区域。在一个实施场景中，可以在由待配准图像得到的第二特征点中，确定与由目标图像的边缘点得到的第一特征点最匹配的点，来得到目标图像的边缘点在待配准图像中对应的点。例如，目标图像的边缘点是A，经过计算，待配准图像中与A最匹配的点为B，则B为边缘点A在待配准图像上对应的点。
在确定了待配准图像中与目标图像对应的区域以后,可以利用得到的每一个第一候选变换参数,对该区域进行变换,得到与多个第一候选变换参数对应的变换后的区域,然后再将这些区域与目标图像进行相似度的比较,从中选择相似度满足预设相似度要求的第一候选变换参数作为最终变换参数。在一个实施场景中,也可以是利用得到的每一个第一候选变换参数,对注册图像进行变换,然后利用变换的注册图像与待配准图像中确定的与目标图像对应的区域进行相似度比较,以此选择相似度满足预设相似度要求的第一候选变换参数作为最终变换参数。在计算相似度时,可以利用图像匹配算法来计算,例如是平均绝对差算法(Mean Absolute Differences,MAD)、绝对误差和算法(Sum of Absolute Differences,SAD)、误差平方和算法(Sum of Squared Differences,SSD)、平均误差平方和算法(Mean Square Differences,MSD)、归一化积相关算法(Normalized Cross Correlation,NCC)、序贯相似性检测算法(Sequential Similiarity Detection Algorithm,SSDA)、hadamard变换算法(Sum of Absolute Transformed Difference,SATD)等,此处不做限制。
因此,通过计算待配准图像中确定的与目标图像对应的区域与目标图像的相似度,可以以此来确定第一候选变换参数的准确程度,进而能够从中选择满足要求的第一候选变换参数作为最终变换参数。
在一个公开实施例中，在确定第一候选变换参数是否满足预设要求时，可以是一个接着一个的判断，即先判断一个第一候选变换参数是否满足要求，在满足要求的情况下，不再对其他第一候选变换参数进行判断，而直接将该第一候选变换参数作为最终变换参数；在不满足要求的情况下，再对另外的一个第一候选变换参数进行判断，并以此类推。以此，可以提高本公开实施例提供的图像配准方法的计算速度，在得到满足预设要求的第一候选变换参数后就可以停止运算。
在一个公开实施例中,可以设定在预设时间内未找出满足预设要求的第一候选变换参数的情况下,确定无法得到最终变换参数。在某些场景中,若运算时间过长,可能会导致实施终端出现延迟,无响应等情况,因此可以设定终端能够响应于预设时间内未找出满足预设要求的第一候选变换参数,即确定无法得到最终变换参数。此时终端可以实施其他步骤,来解决延迟,无响应等情况。
请参阅图2,图2是本公开实施例提供的图像配准方法的一个可选的流程示意图。本实施例是对上述步骤提及的“提取目标图像的若干第一特征点”的扩展,可以包括如下步骤:
步骤S121:对目标图像进行缩放,得到不同分辨率的至少一张缩放图像。
对目标图像进行缩放,可以是对目标图像进行放大处理或是缩小处理。对目标图像进行放大例如是进行上采样操作,对目标图像进行缩小例如是下采样操作。对目标图像进行缩放,得到不同分辨率的至少一张缩放图像,可以以此建立关于目标图像的图像金字塔。通过获得不同分辨率的至少一张缩放图像,可以提高本公开实施例提供的图像配准方法在不同尺度下的配准的准确率。
在一个实施场景中,为了提高小尺度情况下的图像配准的准确度,可以对目标图像进行缩小操作,以得到一些分辨率较小的缩小目标图像。
请参阅图3,图3是本公开实施例提供的图像配准方法的一个可选的流程示意图。“对目标图像进行缩放,得到不同分辨率的至少一张缩放图像”可以包括以下步骤S1211至步骤S1213。
步骤S1211:确定目标图像与待配准图像之间的预设尺度。
在对目标图像进行缩放时,可以预先确定缩放的尺度,即预设尺度,以此来对目标图像进行缩放。可以是确定目标图像与待配准图像之间的预设尺度。
在一个实施场景中,可以基于待配准图像的尺寸、目标图像的尺寸以及目标图像在待配准图像中的预设占比,得到预设尺度。
预设占比可以理解为注册图像在待配准图像中的比例大小,预设占比例如是15%、18%等等,可以根据需要进行设置,此处不做限制。待配准图像的尺寸、目标图像的尺寸均可以是待配准图像的分辨率大小,例如待配准图像的尺寸是1080*2160,目标图像的尺寸是256*256等等。
在一个实施场景中,预设尺度可以按照以下公式(1)计算:
s_0 = √( (w_c×h_c×a_0%) / (w_r×h_r) )      (1)
在公式(1)中，s_0为预设尺度，w_c×h_c为待配准图像的尺寸，w_r×h_r为目标图像的尺寸，a_0%是预先确定的预设占比。
通过确定预设尺度,在后续可以基于预设尺度得到一系列的尺度,并依 据这些尺度来对目标图像进行缩放,可以提高本公开实施例提供的图像配准方法在不同尺度情况下的图像配准的准确度。
步骤S1212:基于预设尺度生成至少一个衍生尺度,其中,每个衍生尺度不同,且均小于预设尺度。
在得到的预设尺度以后,可以基于预设尺度生成至少一个衍生尺度。衍生尺度可以是大于预设尺度,可以小于预设尺度。在本实施例中,每个衍生尺度不同,且均小于预设尺度。衍生尺度可以是预设尺度的缩小尺度。在另一个实施例中,每个衍生尺度不同,且可以均大于预设尺度。
在一个实施场景中，可以生成n-1个衍生尺度，即一共可以得到n个尺度(包括预设尺度)：s_0，s_1，s_2，…，s_{n-1}，其中
s_1 = s_0/2，s_2 = s_1/2，
并以此类推。例如,n可以设置为3。
步骤S1213:基于每个衍生尺度,对目标图像进行缩放,得到对应的缩放图像。
在一个实施方式中,缩放图像与待配准图像之间的尺度为对应的衍生尺度。
在得到至少一个衍生尺度后，即可以基于每个衍生尺度，对目标图像进行缩放，得到对应的缩放图像，其中，缩放图像与待配准图像之间的尺度为对应的衍生尺度。例如，可以基于s_1尺度对目标图像T_0进行缩小，得到一缩小图像T_1，缩小图像T_1与待配准图像之间的尺度即为s_1。
以此，通过得到至少一个小于预设尺度的衍生尺度，可以得到至少一张小尺度的目标图像，由此可以在后续的配准中，提高小尺度情况下的图像配准的准确度。
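上述由预设占比确定预设尺度(公式(1))、再生成衍生尺度并据此缩放目标图像的流程，可以用如下草图示意。其中"每个衍生尺度为前一尺度的一半"只是便于演示的假设取法，实际的衍生尺度生成方式本公开并未限定：

```python
import math

def build_scale_pyramid(wc, hc, wr, hr, ratio, n):
    # 公式(1): 由待配准图像尺寸 wc*hc、目标图像尺寸 wr*hr 与预设占比 ratio 得到预设尺度 s0
    s0 = math.sqrt((wc * hc * ratio) / (wr * hr))
    # 生成 n-1 个互不相同且均小于预设尺度的衍生尺度(此处示意为逐级减半)
    scales = [s0 * (0.5 ** i) for i in range(n)]
    # 基于每个尺度对目标图像进行缩放后, 对应缩放图像的尺寸
    sizes = [(max(1, round(wr * s)), max(1, round(hr * s))) for s in scales]
    return scales, sizes
```

例如取 n=3 时，可得到 s_0、s_1、s_2 三个尺度，以及三张对应尺寸的(缩放)目标图像。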
步骤S122:从目标图像和至少一张缩放图像中的每张图像中分别提取至少一个第一特征点,以得到若干第一特征点。
得到了至少一张缩放图像和目标图像,意味着得到了不同尺度对应的目标图像,由此可以对这些图像都进行特征提取,在每张图像中提取至少一个第一特征点,以得到至少一个第一特征点。
通过获得不同尺度的目标图像(包括缩放后的目标图像)，可以进一步提高本公开实施例提供的图像配准方法在不同尺度下配准的准确率。
请参阅图3,图3是本公开实施例提供的图像配准方法的一个可选的流程示意图。本实施例是对上述实施例提及的“基于第一匹配点对的方向信息,得到与第一匹配点对相对应的第一候选变换参数,并将满足预设要求的第一候选变换参数作为最终变换参数”的扩展,包括以下步骤:
步骤S141:选择其中一组第一匹配点对作为目标匹配点对。
在上述的步骤中,已经选出了至少一组的第一匹配点对,此时可以选择其中一组第一匹配点对作为目标匹配点对,来计算第一候选变换参数。
在一个实施场景中,上述的至少一组第一匹配点对是按照第一匹配点对的匹配程度从高到低的顺序选择作为目标匹配点对。也即,从第一匹配点对 中选择目标匹配点对时,按照第一匹配点对的匹配程度,从最高匹配程度开始选。在一个实施场景中,匹配程度是特征点之间的距离,那就是从距离最小的第一匹配点对开始选起。以此,可以优先计算出最有可能满足预设要求的第一匹配点对。
步骤S142:基于目标匹配点对的方向信息,得到与目标匹配点对相对应的第一候选变换参数。
在选择出一组目标匹配点对以后,可以先计算该组目标点对对应的第一候选变换参数。
请参阅图4,图4是本公开实施例提供的图像配准方法的一个可选的流程示意图。在本实施例是对上述步骤提及的“基于目标匹配点对的方向信息,得到与目标匹配点对相对应的第一候选变换参数”扩展,包括以下步骤S1421至步骤S1423。
步骤S1421:在目标图像中提取包含第一匹配点的第一图像区域,并在待配准图像中提取包含第二匹配点的第二图像区域。
第一匹配点和第二匹配点分别为第一匹配点对中的第一特征点和第二特征点。在提取第一图像区域时,可以以第一匹配点为中心点,选取一定形状的第一图像区域。例如,可以以第一匹配点为中心点,选择16x16像素点的大小区域作为第一图像区域,或者是选择半径为16个像素点的圆形区域作为第一图像区域。第二图像区域的确定与第一图像区域相同。
在一个实施场景中,可以将第一图像区域的中心确定为目标图像的中心。
步骤S1422:确定第一图像区域的第一偏转角度和第二图像区域的第二偏转角度。
在确定第一图像区域和第二图像区域以后,可以利用该区域中的每一个像素点,来获取该区域的偏转角度。利用第一图像区域求得的偏转角度为第一偏转角度,利用第二图像区域求得的偏转角度为第二偏转角度。
在一个实施场景中,第一偏转角度为第一图像区域的形心与第一图像区域的中心的连线与预设方向之间的有向夹角。第二偏转角度为第二图像区域的形心与第二图像区域的中心的连线与预设方向之间的有向夹角。其中,有向夹角可以包括:连线以顺时针方向偏转至预设方向的夹角,或者,连线以逆时针方向偏转至预设方向的夹角,在此不做限定。例如,可以定义在以顺时针方向进行偏转时,有向夹角的符号为“-”(即负号),或者,也可以定义在以逆时针方向进行偏转时,有向夹角的符号为“+”(即正号),在此不做限定。
在一个实施场景中,请结合参阅图5,图5是偏转角度获取方式一实施例的示意图。如图5所示,实线矩形表示目标图像,实线矩形内的虚线矩形表示第一图像区域,P为第一图像区域的形心,以第一图像区域的中心为坐标原点O建立直角坐标系,第一图像区域的形心P与第一图像区域的中心的连线为OP,预设方向可以为上述直角坐标系的x轴,有向夹角可以为预设方向至连线逆时针方向的夹角θ。其他情况可以以此类推,在此不再一一举例。
在另一个实施场景中，请继续结合参阅图5，形心(c_x, c_y)可以表示为：
c_x = ∑xI(x,y)/∑I(x,y)，c_y = ∑yI(x,y)/∑I(x,y)      (2)
上述公式(2)中，(x,y)表示第一图像区域中某一像素点相对第一图像区域中心的偏移量，I(x,y)表示该像素点的像素值，∑表示求和符号，其求和范围为第一图像区域中的像素点。
在又一个实施场景中,第一偏转角度θ可以直接通过下式得到:
θ=arctan(∑yI(x,y),∑xI(x,y))      (3)
上述公式(3)中，(x,y)表示第一图像区域中某一像素点相对第一图像区域中心的偏移量，I(x,y)表示该像素点的像素值，∑表示求和符号，其求和范围为第一图像区域中的像素点。同理，第二偏转角度也可以按照相同的方法计算得到。
以此,可以确定第一图像区域的第一偏转角度。第二图像区域的第二偏转角度的计算方法与上述计算第一偏转角度的方法相同。
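公式(2)、(3)所述"利用区域内每个像素求形心、进而得到偏转角度"的计算，可以示意如下。该草图假设输入为以匹配点为中心截取的灰度图像区域，且采用图像坐标系(y 轴向下)，函数名为示例自拟：

```python
import numpy as np

def deflection_angle(region):
    # region: 以匹配点为中心截取的图像区域(灰度), 形状 (h, w)
    h, w = region.shape
    # (x, y) 为区域内各像素相对区域中心的偏移量
    yy, xx = np.mgrid[0:h, 0:w]
    xx = xx - (w - 1) / 2.0
    yy = yy - (h - 1) / 2.0
    # 公式(3): theta = arctan(Σ y·I(x,y), Σ x·I(x,y)),
    # 即形心与区域中心连线相对预设方向(x 轴)的有向夹角
    return float(np.arctan2(np.sum(yy * region), np.sum(xx * region)))
```

将第一图像区域与第二图像区域分别代入该函数，即可得到第一偏转角度与第二偏转角度。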
步骤S1423:基于第一偏转角度和第二偏转角度,得到第一候选变换参数。
得到第一偏转角度和第二偏转角度以后,即可以基于这两个偏转角度来确定目标匹配点对的方向信息。例如,可以将第一偏转角度与第二偏转角度的差值作为目标匹配点对的方向信息。然后,基于目标匹配点对的方向信息,以及目标匹配点对中的第一特征点与第二特征点的坐标信息,来得到第一候选变换参数。
在一个实施场景中，可以基于第一匹配点对所对应的尺度、第一偏转角度和第二偏转角度，得到第一候选变换参数。第一匹配点对所对应的尺度为第一匹配点对所在的图像之间的尺度，也就是第一匹配点所属的目标图像的尺度，例如是上述的s_0、s_1等等。
在一个实施场景中,上述的“基于第一匹配点对所对应的尺度、第一偏转角度和第二偏转角度,得到第一候选变换参数”步骤,可以包括以下步骤1和步骤2。
步骤1:获取第一偏转角度与第二偏转角度之间的角度差。
角度差例如是第一偏转角度与第二偏转角度的差值。
在一个实施场景中,计算角度差的公式(4)如下:
θ = θ_T − θ_F      (4)
其中，θ为角度差，θ_T为第一偏转角度，T表示目标图像；θ_F为第二偏转角度，F表示待配准图像。
步骤2:基于角度差和第一匹配点对所对应的尺度,得到第一候选变换参数。
第一候选变换参数例如是目标图像与待配准图像之间对应的单应性矩阵。在一个实施场景中,单应性矩阵的计算公式(5)如下:
H = H_l·H_s·H_R·H_r      (5)
其中，H为目标图像与待配准图像之间对应的单应性矩阵，即第一候选变换参数；H_r表示待配准图像相对于目标图像的平移量；H_s代表第一匹配点对所对应的尺度；H_R代表的是待配准图像相对于目标图像的旋转量；H_l代表平移之后复位的平移量。
为了利用上述求得的角度差，可以对公式(5)进行变换，得到公式(6)。
H = | s·cosθ   −s·sinθ   x_F − s·(x_T·cosθ − y_T·sinθ) |
    | s·sinθ    s·cosθ   y_F − s·(x_T·sinθ + y_T·cosθ) |
    |   0          0                    1              |      (6)
其中，(x_T, y_T)为第一特征点在目标图像上的像素坐标；(x_F, y_F)为第二特征点在待配准图像上的像素坐标；s为第一匹配点对所对应的尺度，即点(x_T, y_T)对应的尺度；θ为角度差。
通过上述的方法,就可以利用一组目标匹配点对来求得目标图像与待配准图像之间对应的单应性矩阵,进而实现图像的配准。
在得到目标图像与待配准图像之间对应的单应性矩阵H,以后可以建立目标图像上的像素点与待配准图像上的像素点的对应关系。在一些实施例中,可以通过计算公式(7)确定目标图像上的像素点与待配准图像上的像素点的对应关系:
(x′, y′, 1)^T = H·(x, y, 1)^T      (7)
其中,H表示第一候选变换参数,(x,y)为目标图像中的像素点,(x′,y′)为待配准图像中的像素点。也就是说,可以利用第一候选变换参数对目标图像中的像素点进行坐标转换,得到待配准图像中与该像素点对应的像素点。
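公式(5)至(7)所描述的"由一组目标匹配点对构造单应性矩阵并映射像素点"的过程，可以按"平移到原点、旋转、缩放、平移到匹配点"的顺序示意如下。矩阵的具体组合方式为依据上下文的草图推演，函数名为示例自拟，并非唯一实现：

```python
import numpy as np

def homography_from_pair(pt_t, pt_f, s, theta):
    # pt_t: 第一特征点在目标图像上的像素坐标 (x_T, y_T)
    # pt_f: 第二特征点在待配准图像上的像素坐标 (x_F, y_F)
    # s: 第一匹配点对所对应的尺度; theta: 第一、第二偏转角度之间的角度差
    xT, yT = pt_t
    xF, yF = pt_f
    Hr = np.array([[1., 0., -xT], [0., 1., -yT], [0., 0., 1.]])      # 平移: 匹配点移至原点
    HR = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])                                    # 旋转量
    Hs = np.diag([s, s, 1.0])                                        # 尺度
    Hl = np.array([[1., 0., xF], [0., 1., yF], [0., 0., 1.]])        # 平移: 复位到匹配点
    return Hl @ Hs @ HR @ Hr                                         # 公式(5): H = H_l·H_s·H_R·H_r

def warp_point(H, x, y):
    # 公式(7): 用 H 将目标图像上的像素点 (x, y) 映射到待配准图像上的 (x', y')
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

按此构造，目标图像上的第一匹配点恰好被映射到待配准图像上的第二匹配点。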
在求得第一候选变换参数以后,可以进一步的判断第一候选变换参数能否满足预设要求。
步骤S143:判断目标匹配点对所对应的第一候选变换参数是否满足预设要求。
预设要求详细描述可以参见上述的步骤S14。
在一个实施场景中,可以先对第一候选变换参数进行优化,以得到更为准确的第一候选变换参数。可以将目标图像记为T,待配准图像记为F,第一候选变换参数记为H,优化公式(8)如下:
Score = max_H f(T, F(H^{-1}))      (8)
其中，F(H^{-1})表示待配准图像F经过第一候选变换参数H变换的结果，f函数用于计算T和F(H^{-1})之间的相似度，即f函数用于计算目标图像与待配准图像的相似程度，可以为误差平方和(Sum of Squared Differences，SSD)函数，或者归一化互相关(Normalized Cross Correlation，NCC)函数等。max_H表示利用迭代优化的方法优化H，使得目标图像与待配准图像的相似程度尽可能地提高。迭代优化的方法例如是高斯-牛顿(Gauss-Newton)迭代法或Levenberg-Marquardt算法等等。Score代表相似度得分，得分越高，代表目标图像与待配准图像越相似。
在一个实施场景中,SSD函数的表达式(9)如下:
SSD(T,F) = ∑_{x,y}(T(x,y)−F(x′,y′))²      (9)
其中，∑_{x,y}表示对目标图像T中像素点(x,y)以及由第一候选配准参数H在待配准图像F中确定的与其对应的像素点(x′,y′)所组成的匹配点对的像素值进行误差平方求和。由此可见，相似度SSD(T,F)越小，目标图像与待配准图像之间的相似度越高，反之，相似度SSD(T,F)越大，目标图像与待配准图像之间的相似度越低。
在一个实施场景中，NCC函数的表达式(10)如下：
NCC(T,F) = ∑_{x,y}[(T(x,y)−T̄)·(F(x′,y′)−F̄)] / √( ∑_{x,y}(T(x,y)−T̄)²·∑_{x,y}(F(x′,y′)−F̄)² )      (10)
其中，∑_{x,y}表示对目标图像T中像素点(x,y)以及由第一候选配准参数H在待配准图像F中确定的与其对应的像素点(x′,y′)所组成的匹配点对的像素值进行归一化互相关处理。此外，T̄表示目标图像中像素点(x,y)像素值的平均值，F̄表示待配准图像中像素点(x′,y′)像素值的平均值。需要说明的是，NCC(T,F)的值域范围为-1至1，且NCC(T,F)越接近于1，表示目标图像与待配准图像之间的相似度越高。
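公式(9)与公式(10)的两种相似度函数可以示意实现如下。该草图假设输入为目标图像 T 与按候选变换参数在待配准图像中取出的对应区域，且二者已对齐为同尺寸数组：

```python
import numpy as np

def ssd(t, f):
    # 公式(9): 误差平方和, 值越小表示相似度越高
    return float(np.sum((t - f) ** 2))

def ncc(t, f):
    # 公式(10): 归一化互相关, 值域为 -1 至 1, 越接近 1 表示相似度越高
    dt = t - t.mean()
    df = f - f.mean()
    return float(np.sum(dt * df) / np.sqrt(np.sum(dt ** 2) * np.sum(df ** 2)))
```

据此可将相似度得分与预设相似度要求比较，以判断候选变换参数是否满足预设要求。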
以此,可以通过利用优化后的第一候选变换参数计算得到的相似度得分,来判断第一候选变换参数是否满足预设要求。
上述的判断,在第一候选变换参数满足预设要求的情况下,可以执行步骤S144。在第一候选变换参数不满足预设要求的情况下,可以执行步骤S145。
步骤S144:响应于目标匹配点对所对应的第一候选变换参数满足预设要求,将目标匹配点对所对应的第一候选变换参数作为最终变换参数。
在第一候选变换参数满足预设要求的情况下,可以认为此时已经配准成功。因此,终端可以响应于目标匹配点对所对应的第一候选变换参数满足预设要求,将该候选参数作为目标图像与待配准图像之间的最终变换参数。
步骤S145:响应于目标匹配点对所对应的第一候选变换参数不满足预设要求,选择新的一组第一匹配点对作为目标匹配点对,并重新执行基于目标匹配点对的方向信息,得到与目标匹配点对相对应的第一候选变换参数及其后续步骤。
在第一候选变换参数不满足预设要求的情况下,可以认为此时配准没有成功,因此,终端可以响应于目标匹配点对所对应的第一候选变换参数不满足预设要求,并利用新的第一匹配点对作为目标匹配点对来计算得到新的第一候选变换参数。因此,可以重新执行基于目标匹配点对的方向信息,得到与目标匹配点对相对应的第一候选变换参数及其后续步骤。
在一个实施场景中，在从第一匹配点对中选择一组目标匹配点对时，是按照匹配程度从高到低的顺序来选择的，因此，在重新选择时，则是从尚未被选择过的第一匹配点对中，选择匹配程度最高的第一匹配点对。
因此,通过利用一组特征点点对,可以得到目标图像与待配准图像之间的最终变换参数,实现图像配准。
在一个公开实施例中，在上述的步骤S12之前，如果目标图像的形状与待配准图像的形状不同，可以将目标图像外扩为与待配准图像形状相同的图像。终端能够响应于目标图像的形状为矩形之外的任意形状，将目标图像外扩为与待配准图像形状相同的图像，并以此图像作为新的目标图像。
请结合参阅图6,图6是对目标图像进行外扩的一实施例的示意图。如图6所示,以目标图像为圆形且待配准图像为矩形为例,可以获取圆形的外接矩形,且该外接矩形中圆形为目标图像,圆形与外接矩形之间的像素点可以为任意像素值,从而得到新的目标图像,如可以统一采用黑色填充圆形与外接矩形之间的区域,或者,也可以统一采用白色填充圆形与外接矩形之间的区域,在此不做限定。请继续结合参阅图6,在目标图像为圆形且待配准图像为矩形的情况下,也可以获取包含该圆形且不与该圆形相切的矩形,且该矩形中圆形为目标图像,圆形与该矩形之间的像素点可以为任意像素值,从而得到新的目标图像,即包含圆形的矩形,可以不限于外接矩形。在目标图像为其他形状的情况下,或者,在待配准图像为其他形状的情况下,可以以此类推。因此,可以在目标图像为任意形状的情况下,也能够完成图像配准,有利于提高图像配准的鲁棒性。
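上述"将任意形状的目标图像外扩为与待配准图像形状相同(矩形)的图像"的做法可以示意如下。该草图假设已知目标图像所在外接矩形区域的像素数组及其有效像素掩码，掩码之外统一填充某一像素值，函数名为示例自拟：

```python
import numpy as np

def expand_to_rect(img, mask, fill=0):
    # img: 目标图像外接矩形区域的像素数组
    # mask: 同尺寸布尔数组, True 表示属于目标图像(如圆形区域)的像素
    # 目标图像与外接矩形之间的像素统一填充为 fill, 得到矩形的新目标图像
    out = np.full_like(img, fill)
    out[mask] = img[mask]
    return out
```

例如目标图像为圆形时，mask 可由 (x−c_x)²+(y−c_y)² ≤ r² 生成，填充值取黑色(0)或白色(255)均可。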
上述的方法,通过获得至少一组第一匹配点对,并计算第一匹配点对的方向信息,以此来获得待配准图像相对于目标图像的旋转角度,然后就可以利用该旋转角度信息得到目标图像与待配准图像之间的最终变换参数,最终实现图像配准。而且,通过该方法,能够利用较少的特征点来进行图像配准,故配准不会受到目标图像在待配准图像中的占比影响,即使目标图像在待配准图像中的占比较小也能实现准确的图像配准,故能够提高图像配准的准确性。
请参阅图7,图7是本公开实施例提供的图像配准方法的一个可选的流程示意图。本实施例是对上述图1提供的实施例的进一步扩展,在执行上述实施例的步骤S12之前,还可以执行以下的步骤:
步骤S21:在目标图像和待配准图像中选择若干组第二匹配点对。
在一个实施场景中,可以从目标图像和待配准图像中选择若干组第二匹配点对。在一组第二匹配点对中,包含一个从目标图像上提取的第一特征点,和一个从第二特征图像上提取的第二特征点。
在一个实施场景中，目标图像可以包括基于一系列不同的尺度生成的缩放目标图像，例如是上述提及的衍生尺度。待配准图像也可以包括基于一系列不同的尺度生成的缩放待配准图像。得到的一系列不同尺度的目标图像，可以定义为目标图像金字塔，不同尺度的一系列待配准图像定义为待配准图像金字塔。也即，在对目标图像或者是待配准图像进行特征提取时，可以是对目标图像金字塔或者是待配准图像金字塔中的全部图像进行特征提取，从而得到一系列的第一特征点和第二特征点。然后，就可以选择若干组第二匹配点对。
在一个实施场景中,可以基于第一特征点和第二特征点之间的匹配程度,选出若干组第二匹配点对。选择方法可以参见上述实施例的步骤S13。
步骤S22:综合若干组第二匹配点对的位置信息,得到第二候选变换参数。
在得到若干组第二匹配点对以后,就可以根据这些第二匹配点对的位置信息,来得到第二候选变换参数。其中,可以采用随机一致性采样算法(Random Sample Consensus,RANSAC)得到第二候选变换参数。第二候选变换参数例如是目标图像和待配准图像对应的单应性矩阵H。
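步骤S22中"综合若干组第二匹配点对的位置信息"得到候选变换参数的随机一致性采样(RANSAC)思路，可以用如下草图示意。为使示例简短，这里以相似变换(缩放+旋转+平移)代替一般的单应性矩阵：每次随机采样两组点对求解最小模型并统计内点，保留内点最多的模型。函数名与阈值均为示例假设，并非本公开限定的实现：

```python
import numpy as np

def similarity_from_two(p1, p2, q1, q2):
    # 由两组对应点求相似变换: q = s·R(theta)·p + t
    dp, dq = p2 - p1, q2 - q1
    s = np.linalg.norm(dq) / np.linalg.norm(dp)
    theta = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    t = q1 - s * R @ p1
    return s, R, t

def ransac_similarity(P, Q, iters=200, thresh=3.0, seed=0):
    # P, Q: 形状 (K, 2) 的第二匹配点对坐标; 反复随机采样最小点集, 保留内点最多的模型
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(P), size=2, replace=False)
        if np.linalg.norm(P[j] - P[i]) < 1e-9:
            continue  # 退化采样, 跳过
        s, R, t = similarity_from_two(P[i], P[j], Q[i], Q[j])
        pred = (s * (R @ P.T)).T + t
        inliers = int(np.sum(np.linalg.norm(pred - Q, axis=1) < thresh))
        if inliers > best_inliers:
            best, best_inliers = (s, R, t), inliers
    return best, best_inliers
```

内点最多的模型即可作为第二候选变换参数的估计，再按步骤S23判断其是否满足预设要求。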
步骤S23:判断第二候选变换参数是否满足预设要求。
判断第二候选参数是否满足预设要求的方法,例如是判断第二候选变换参数所对应的目标区域与目标图像之间的相似度满足预设相似度要求。其中,第二候选变换参数所对应的目标区域为利用第二候选变换参数在待配准图像中确定的与目标图像对应的区域。判断第二候选变换参数是否满足预设要求的方法,可以参阅上述判断上述第一候选变换参数是否满足预设要求的方案。
在第二候选变换参数满足预设要求的情况下,可以执行步骤S24;在第二候选变换参数不满足预设要求的情况下,则可以执行步骤S25。
步骤S24:将第二候选变换参数作为最终变换参数。
在第二候选变换参数满足预设要求的情况下,可以认为此时已经配准成功。因此,可以将该候选参数可以作为目标图像与待配准图像之间的最终变换参数。在得到最终变换参数以后,则可以停止图像配准方法。
步骤S25:执行提取目标图像的若干第一特征点和待配准图像的若干第二特征点及其后续步骤。
在第二候选变换参数不满足预设要求的情况下,可以认为此时配准没有成功,因此,可以继续执行上述的:提取目标图像的若干第一特征点和待配准图像的若干第二特征点及其后续步骤。
在一个实施场景中,在执行步骤S21时,可能已经提取了第一特征点和第二特征点,因此在后续的步骤,可以不再执行提取特征点的步骤。如果上述的步骤还计算了第一特征点和第二特征点之间的匹配程度,则后续的步骤中也可以不再执行计算第一特征点和第二特征点之间的匹配程度的步骤。以此,可以提高本公开实施例提供的图像配准方法的运行速度。
因此,通过上述方法,可以实现先利用图像的特征点以及特征表示进行图像配准,在利用图像的特征点以及特征表示无法进行准确图像配准(例如目标图像与待配准图像之间的占比较小)的情况下,再利用特征点的方向信息进行图像配准,以减少图像配准失败的情况,提高图像配准的准确性。
请参阅图8，图8是本公开实施例提供的图像配准装置的一个可选的框架示意图。图像配准装置80包括图像获取部分81、特征提取部分82、特征匹配部分83和确定部分84。图像获取部分被配置为获取目标图像和待配准图像。特征提取部分被配置为提取目标图像的若干第一特征点和待配准图像的若干第二特征点。特征匹配部分被配置为基于第一特征点和第二特征点之间的匹配程度，选出至少一组第一匹配点对，其中，每组第一匹配点对包括第一特征点和第二特征点。确定部分被配置为基于第一匹配点对的方向信息，得到目标图像与待配准图像之间的最终变换参数。
其中,上述的特征提取部分还被配置为:对目标图像进行缩放,得到不同分辨率的至少一张缩放图像;从目标图像和至少一张缩放图像中的每张图像中分别提取至少一个第一特征点,以得到若干第一特征点。其中,上述的第一特征点和第二特征点之间的匹配程度是基于第一特征点和第二特征点的特征表示之间的距离得到的。
其中,上述的特征提取部分还被配置为:确定目标图像与待配准图像之间的预设尺度;基于预设尺度生成至少一个衍生尺度,其中,每个衍生尺度不同,且均小于预设尺度;基于每个衍生尺度,对目标图像进行缩放,得到对应的缩放图像。
其中,上述的特征提取部分还被配置为:基于待配准图像的尺寸、目标图像的尺寸以及目标图像在待配准图像中的预设占比,得到预设尺度。
其中,上述的确定部分还被配置为:基于第一匹配点对的方向信息,得到与第一匹配点对相对应的第一候选变换参数,并将满足预设要求的第一候选变换参数作为最终变换参数。
其中,上述的确定部分还被配置为:选择其中一组第一匹配点对作为目标匹配点对;基于目标匹配点对的方向信息,得到与目标匹配点对相对应的第一候选变换参数;判断目标匹配点对所对应的第一候选变换参数是否满足预设要求;响应于所述目标匹配点对所对应的第一候选变换参数满足预设要求,将目标匹配点对所对应的第一候选变换参数作为最终变换参数。
其中,上述的至少一组第一匹配点对是按照第一匹配点对的匹配程度从高到低的顺序选择作为目标匹配点对。其中,图像配准装置80还包括第二确定部分。第二确定部分被配置为响应于不满足预设要求,则选择新的一组第一匹配点对作为目标匹配点对,并重新执行基于目标匹配点对的方向信息,得到与目标匹配点对相对应的第一候选变换参数及其后续步骤;响应于确定部分没有在预设时间内未找出满足预设要求的第一候选变换参数,第二确定部分还被配置为确定无法得到最终变换参数。
其中，上述的确定部分还被配置为：在目标图像中提取包含第一匹配点的第一图像区域，并在待配准图像中提取包含第二匹配点的第二图像区域，其中，第一匹配点和第二匹配点分别为第一匹配点对中的第一特征点和第二特征点；确定第一图像区域的第一偏转角度和第二图像区域的第二偏转角度；基于第一偏转角度和第二偏转角度，得到第一候选变换参数。
其中,上述的确定部分还被配置为:基于第一匹配点对所对应的尺度、第一偏转角度和第二偏转角度,得到第一候选变换参数,其中,第一匹配点 对所对应的尺度为第一匹配点对所在的图像之间的尺度。
其中,上述的确定部分还被配置为:获取第一偏转角度与第二偏转角度之间的角度差;基于角度差和第一匹配点对所对应的尺度,得到第一候选变换参数。
其中,上述的第一图像区域的中心为目标图像的中心。上述的第一偏转角度为第一图像区域的形心与第一图像区域的中心的连线与预设方向之间的有向夹角;第二偏转角度为第二图像区域的形心与第二图像区域的中心的连线与预设方向之间的有向夹角。
其中,图像配准装置80还包括第二配准部分,第二配准部分被配置为执行在目标图像和待配准图像中选择若干组第二匹配点对;综合若干组第二匹配点对的位置信息,得到第二候选变换参数;在第二候选变换参数满足预设要求的情况下,将第二候选变换参数作为最终变换参数;在第二候选变换参数不满足预设要求的情况下,执行提取目标图像的若干第一特征点和待配准图像的若干第二特征点及其后续步骤。
其中,图像配准装置80还包括图像外扩部分。在上述的提取目标图像的若干第一特征点和待配准图像的若干第二特征点之前,且目标图像的形状与待配准图像的形状不同的情况下,图像外扩部分被配置为执行将目标图像外扩为与待配准图像形状相同的图像。
图9是本公开实施例提供的电子设备的一个可选的框架示意图。电子设备90包括相互耦接的存储器91和处理器92,处理器92用于执行存储器91中存储的程序指令,以实现上述任一图像配准方法实施例的步骤。在一个的实施场景中,电子设备90可以包括但不限于:微型计算机、服务器,此外,电子设备90还可以包括笔记本电脑、平板电脑等移动设备,在此不做限定。
其中,处理器92用于控制其自身以及存储器91以实现上述任一图像配准方法实施例的步骤。处理器92还可以称为CPU(Central Processing Unit,中央处理单元)。处理器92可能是一种集成电路芯片,具有信号的处理能力。处理器92还可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。另外,处理器92可以由集成电路芯片共同实现。
请参阅图10,图10为本公开实施例提供的计算机可读存储介质的一个可选的框架示意图。计算机可读存储介质100存储有能够被处理器运行的程序指令101,程序指令101用于实现上述任一图像配准方法实施例的步骤。
上述方案,能够有利于提高图像配准的准确性。
在一些实施例中,本公开实施例提供的装置具有的功能或包含的部分可以被配置为执行上文方法实施例描述的方法,其实现方式可以参照上文方法实施例的描述。
上文对各个实施例的描述倾向于强调各个实施例之间的不同之处,其相同或相似之处可以互相参考。
在本公开实施例所提供的几个实施例中,应该理解到,所揭露的方法和装置,可以通过其它的方式实现。例如,以上所描述的装置实施方式仅仅是示意性的,例如,部分或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性、机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施方式方案的目的。
另外,在本公开中的各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本公开各个实施方式方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
工业实用性
本公开实施例公开了一种图像配准方法及相关装置、设备和存储介质,其中,该方法包括:获取目标图像和待配准图像;提取目标图像的若干第一特征点和待配准图像的若干第二特征点;基于第一特征点和第二特征点之间的匹配程度,选出至少一组第一匹配点对,其中,每组第一匹配点对包括第一特征点和第二特征点;基于第一匹配点对的方向信息,得到目标图像与待配准图像之间的最终变换参数。通过该方法,能够实现图像配准,且提高图像配准的准确性。

Claims (18)

  1. 一种图像配准方法,包括:
    获取目标图像和待配准图像;
    提取所述目标图像的若干第一特征点和所述待配准图像的若干第二特征点;
    基于所述第一特征点和所述第二特征点之间的匹配程度,选出至少一组第一匹配点对,其中,每组所述第一匹配点对包括所述第一特征点和所述第二特征点;
    基于所述第一匹配点对的方向信息,得到所述目标图像与所述待配准图像之间的最终变换参数。
  2. 根据权利要求1所述的方法,所述提取所述目标图像的若干第一特征点,包括:
    对所述目标图像进行缩放,得到不同分辨率的至少一张缩放图像;
    从所述目标图像和所述至少一张缩放图像中的每张图像中分别提取至少一个第一特征点,以得到所述若干第一特征点;
    和/或,所述第一特征点和所述第二特征点之间的匹配程度是基于所述第一特征点和所述第二特征点的特征表示之间的距离得到的。
  3. 根据权利要求2所述的方法,所述对所述目标图像进行缩放,得到不同分辨率的至少一张缩放图像,包括:
    确定所述目标图像与所述待配准图像之间的预设尺度;
    基于所述预设尺度生成至少一个衍生尺度,其中,每个所述衍生尺度不同,且均小于所述预设尺度;
    基于每个所述衍生尺度,对所述目标图像进行缩放,得到对应的所述缩放图像。
  4. 根据权利要求3所述的方法,所述确定所述目标图像与所述待配准图像之间的预设尺度,包括:
    基于所述待配准图像的尺寸、所述目标图像的尺寸以及所述目标图像在所述待配准图像中的预设占比,得到所述预设尺度。
  5. 根据权利要求1至4任一项所述的方法,所述基于所述第一匹配点对的方向信息,得到所述目标图像与所述待配准图像之间的最终变换参数,包括:
    基于所述第一匹配点对的方向信息,得到与所述第一匹配点对相对应的第一候选变换参数,并将满足预设要求的所述第一候选变换参数作为所述最终变换参数。
  6. 根据权利要求5所述的方法,所述基于所述第一匹配点对的方向信息,得到与所述第一匹配点对相对应的第一候选变换参数,并将满足预设要求的所述第一候选变换参数作为所述最终变换参数,包括:
    选择其中一组所述第一匹配点对作为目标匹配点对;
    基于所述目标匹配点对的方向信息,得到与所述目标匹配点对相对应的 第一候选变换参数;
    判断所述目标匹配点对所对应的第一候选变换参数是否满足预设要求;
    响应于所述目标匹配点对所对应的第一候选变换参数满足所述预设要求,将所述目标匹配点对所对应的第一候选变换参数作为所述最终变换参数。
  7. 根据权利要求6所述的方法,所述至少一组第一匹配点对是按照所述第一匹配点对的匹配程度从高到低的顺序选择作为所述目标匹配点对;
    和/或,在所述判断所述目标匹配点对所对应的第一候选变换参数是否满足预设要求之后,所述方法还包括:
    响应于所述目标匹配点对所对应的第一候选变换参数不满足所述预设要求,选择新的一组所述第一匹配点对作为所述目标匹配点对,并重新执行所述基于所述目标匹配点对的方向信息,得到与所述目标匹配点对相对应的第一候选变换参数及其后续步骤;
    响应于预设时间内未找出满足所述预设要求的第一候选变换参数,确定无法得到所述最终变换参数。
  8. 根据权利要求5至7任一项所述的方法,所述基于所述第一匹配点对的方向信息,得到与所述第一匹配点对相对应的第一候选变换参数,包括:
    在所述目标图像中提取包含第一匹配点的第一图像区域,并在所述待配准图像中提取包含第二匹配点的第二图像区域,其中,所述第一匹配点和所述第二匹配点分别为所述第一匹配点对中的所述第一特征点和所述第二特征点;
    确定所述第一图像区域的第一偏转角度和所述第二图像区域的第二偏转角度;
    基于所述第一偏转角度和所述第二偏转角度,得到所述第一候选变换参数。
  9. 根据权利要求8所述的方法,所述基于所述第一偏转角度和所述第二偏转角度,得到所述第一候选变换参数,包括:
    基于所述第一匹配点对所对应的尺度、所述第一偏转角度和所述第二偏转角度,得到所述第一候选变换参数,其中,所述第一匹配点对所对应的尺度为所述第一匹配点对所在的图像之间的尺度。
  10. 根据权利要求9所述的方法,所述基于所述第一匹配点对所对应的尺度、所述第一偏转角度和所述第二偏转角度,得到所述第一候选变换参数,包括:
    获取所述第一偏转角度与所述第二偏转角度之间的角度差;
    基于所述角度差和所述第一匹配点对所对应的尺度,得到所述第一候选变换参数。
  11. 根据权利要求8至10任一项所述的方法,所述第一图像区域的中心为所述目标图像的中心;
    和/或,所述第一偏转角度为所述第一图像区域的形心与所述第一图像区域的中心的连线与预设方向之间的有向夹角;所述第二偏转角度为所述第二 图像区域的形心与所述第二图像区域的中心的连线与预设方向之间的有向夹角。
  12. 根据权利要求1至11任一项所述的方法,在所述提取所述目标图像的若干第一特征点和所述待配准图像的若干第二特征点之前,所述方法还包括:
    在所述目标图像和所述待配准图像中选择若干组第二匹配点对;
    综合所述若干组第二匹配点对的位置信息,得到第二候选变换参数;
    在所述第二候选变换参数满足预设要求的情况下,将所述第二候选变换参数作为所述最终变换参数;
    在所述第二候选变换参数不满足所述预设要求的情况下,执行所述提取所述目标图像的若干第一特征点和所述待配准图像的若干第二特征点及其后续步骤。
  13. 根据权利要求5至12任一项所述的方法,所述预设要求包括:相应候选变换参数所对应的目标区域与所述目标图像之间的相似度满足预设相似度要求,所述相应候选变换参数所对应的目标区域为利用相应候选变换参数在所述待配准图像中确定的与所述目标图像对应的区域。
  14. 根据权利要求1至13任一项所述的方法,在所述提取所述目标图像的若干第一特征点和所述待配准图像的若干第二特征点之前,所述方法还包括:
    响应于所述目标图像的形状与所述待配准图像的形状不同,将所述目标图像外扩为与所述待配准图像形状相同的图像。
  15. 一种图像配准装置,包括:
    图像获取部分,被配置为获取目标图像和待配准图像;
    特征提取部分,被配置为提取所述目标图像的若干第一特征点和所述待配准图像的若干第二特征点;
    特征匹配部分,被配置为基于所述第一特征点和所述第二特征点之间的匹配程度,选出至少一组第一匹配点对,其中,每组所述第一匹配点对包括所述第一特征点和所述第二特征点;
    确定部分,被配置为基于所述第一匹配点对的方向信息,得到所述目标图像与所述待配准图像之间的最终变换参数。
  16. An electronic device, comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image registration method according to any one of claims 1 to 14.
  17. A computer-readable storage medium, having stored thereon program instructions which, when executed by a processor, implement the image registration method according to any one of claims 1 to 14.
  18. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes steps for implementing the image registration method according to any one of claims 1 to 14.
PCT/CN2021/127346 2021-06-25 2021-10-29 Image registration method and related apparatus, device, and storage medium WO2022267287A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110711211.6 2021-06-25
CN202110711211.6A CN113409372B (zh) 2021-06-25 2021-06-25 Image registration method and related apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022267287A1 true WO2022267287A1 (zh) 2022-12-29

Family

ID=77679439

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/127346 WO2022267287A1 (zh) 2021-06-25 2021-10-29 Image registration method and related apparatus, device, and storage medium

Country Status (3)

Country Link
CN (1) CN113409372B (zh)
TW (1) TW202301274A (zh)
WO (1) WO2022267287A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984341A (zh) * 2023-03-20 2023-04-18 深圳市朗诚科技股份有限公司 Marine water quality microorganism detection method, apparatus, device, and storage medium
CN116612390A (zh) * 2023-07-21 2023-08-18 山东鑫邦建设集团有限公司 Information management system for construction engineering
CN116625385A (zh) * 2023-07-25 2023-08-22 高德软件有限公司 Road network matching method, high-precision map construction method, apparatus, and device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409372B (zh) 2021-06-25 2023-03-24 浙江商汤科技开发有限公司 Image registration method and related apparatus, device, and storage medium
CN117173439A (zh) 2023-11-01 2023-12-05 腾讯科技(深圳)有限公司 GPU-based image processing method and apparatus, storage medium, and electronic device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872475A (zh) * 2009-04-22 2010-10-27 中国科学院自动化研究所 Automatic registration method for scanned document images
CN103871063A (zh) * 2014-03-19 2014-06-18 中国科学院自动化研究所 Image registration method based on point set matching
CN105160654A (zh) * 2015-07-09 2015-12-16 浙江工商大学 Towel label defect detection method based on feature point extraction
WO2016062159A1 (zh) * 2014-10-20 2016-04-28 网易(杭州)网络有限公司 Image matching method and mobile phone application test platform
CN107665479A (zh) * 2017-09-05 2018-02-06 平安科技(深圳)有限公司 Feature extraction method, panorama stitching method and apparatus, device, and computer-readable storage medium
CN111091590A (zh) * 2019-12-18 2020-05-01 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device
CN111598176A (zh) * 2020-05-19 2020-08-28 北京明略软件系统有限公司 Image matching processing method and apparatus
CN111709980A (zh) * 2020-06-10 2020-09-25 北京理工大学 Multi-scale image registration method and apparatus based on deep learning
CN112102383A (zh) * 2020-09-18 2020-12-18 深圳市赛为智能股份有限公司 Image registration method and apparatus, computer device, and storage medium
CN112184783A (zh) * 2020-09-22 2021-01-05 西安交通大学 Three-dimensional point cloud registration method incorporating image information
CN113409372A (zh) * 2021-06-25 2021-09-17 浙江商汤科技开发有限公司 Image registration method and related apparatus, device, and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021229B (zh) * 2014-06-25 2017-07-25 厦门大学 Shape representation and matching method for trademark image retrieval
CN105551012B (zh) * 2014-11-04 2019-04-05 阿里巴巴集团控股有限公司 Method and system for reducing mismatched pairs in computer image registration
CN104517287A (zh) * 2014-12-10 2015-04-15 广州赛意信息科技有限公司 Image matching method and apparatus
CN106023187B (zh) * 2016-05-17 2019-04-19 西北工业大学 Image registration method based on SIFT features and angular relative distance
CN109118525B (zh) * 2017-06-23 2021-08-13 北京遥感设备研究所 Spatial registration method for dual-band infrared images
CN109559339B (zh) * 2018-11-21 2020-07-28 上海交通大学 Method and system for analyzing rigid surface contact processes based on image control point registration
CN111079803B (zh) * 2019-12-02 2023-04-07 易思维(杭州)科技有限公司 Template matching method based on gradient information
CN111223133B (zh) * 2020-01-07 2022-10-11 上海交通大学 Registration method for heterogeneous images

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984341A (zh) * 2023-03-20 2023-04-18 深圳市朗诚科技股份有限公司 Marine water quality microorganism detection method, apparatus, device, and storage medium
CN115984341B (zh) * 2023-03-20 2023-05-23 深圳市朗诚科技股份有限公司 Marine water quality microorganism detection method, apparatus, device, and storage medium
CN116612390A (zh) * 2023-07-21 2023-08-18 山东鑫邦建设集团有限公司 Information management system for construction engineering
CN116612390B (zh) * 2023-07-21 2023-10-03 山东鑫邦建设集团有限公司 Information management system for construction engineering
CN116625385A (zh) * 2023-07-25 2023-08-22 高德软件有限公司 Road network matching method, high-precision map construction method, apparatus, and device
CN116625385B (zh) * 2023-07-25 2024-01-26 高德软件有限公司 Road network matching method, high-precision map construction method, apparatus, and device

Also Published As

Publication number Publication date
CN113409372B (zh) 2023-03-24
TW202301274A (zh) 2023-01-01
CN113409372A (zh) 2021-09-17

Similar Documents

Publication Publication Date Title
WO2022267287A1 (zh) Image registration method and related apparatus, device, and storage medium
CN113393505B (zh) Image registration method, visual positioning method, and related apparatus and device
WO2012046426A1 (ja) Object detection device, object detection method, and object detection program
CN113409391A (zh) Visual positioning method and related apparatus, device, and storage medium
TW201926244A (zh) Real-time video frame stitching method
Furnari et al. Distortion adaptive Sobel filters for the gradient estimation of wide angle images
Kabbai et al. Image matching based on LBP and SIFT descriptor
CN114331879A (zh) Visible-light and infrared image registration method using an equalized second-order gradient histogram descriptor
CN112017197A (zh) Image feature extraction method and system
WO2022063321A1 (zh) Image processing method, apparatus, device, and storage medium
CN112102404B (zh) Object detection and tracking method and apparatus, and head-mounted display device
CN112767457A (zh) Planar point cloud matching method and apparatus based on principal component analysis
JP6086491B2 (ja) Image processing device and database construction device therefor
KR101733288B1 (ko) Method for generating an object detector using direction information, and object detection apparatus and method using the same
CN106709942B (zh) Panoramic image mismatch elimination method based on feature azimuth angles
CN111310818B (zh) Feature descriptor determination method, apparatus, and computer-readable storage medium
CN113409365B (zh) Image processing method and related terminal, device, and storage medium
Kim et al. Recognition of face orientation by divided hausdorff distance
CN113409373B (zh) Image processing method and related terminal, device, and storage medium
Bo et al. A robust image registration algorithm used for panoramic image mosaic
CN112131971A (zh) Method for 256-dimensional binary quantization of HardNet 128-dimensional floating-point feature descriptors
KR101878668B1 (ko) Method for recognizing artworks in 360-degree panoramic images
JP7088618B2 (ja) Information processing device and program
Song et al. Object tracking with dual field-of-view switching in aerial videos
JP6813440B2 (ja) Information processing device and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21946769

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE