CN111915645A - Image matching method and device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111915645A
CN111915645A (application number CN202010677868.0A; granted as CN111915645B)
Authority
CN
China
Prior art keywords
image
matching
matched
characteristic
normalized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010677868.0A
Other languages
Chinese (zh)
Other versions
CN111915645B (en)
Inventor
邓练兵
朱俊
余大勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Dahengqin Technology Development Co Ltd
Original Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Dahengqin Technology Development Co Ltd filed Critical Zhuhai Dahengqin Technology Development Co Ltd
Priority to CN202010677868.0A priority Critical patent/CN111915645B/en
Publication of CN111915645A publication Critical patent/CN111915645A/en
Application granted granted Critical
Publication of CN111915645B publication Critical patent/CN111915645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image matching method and device, computer equipment, and a computer-readable storage medium, wherein the method comprises the following steps: respectively acquiring local area images of a reference image and an image to be matched; performing feature transformation on the local area images to obtain normalized area images; detecting feature points inside and outside the normalized area images respectively to obtain feature points; matching the feature points of the reference image and the image to be matched according to the feature points inside and outside the normalized area images to obtain a feature point matching result; and determining the matching result of the image to be matched and the reference image according to the feature point matching result. By implementing the method and the device, deformation errors caused by angle and scale differences during image matching are resolved, the extracted feature points cover the whole image more completely, and the efficiency and robustness of point set matching are greatly improved under the practical conditions of video big data, such as large viewing-angle changes and complex imaging environments.

Description

Image matching method and device, computer equipment and computer readable storage medium
Technical Field
The invention relates to the technical field of image matching, in particular to an image matching method, an image matching device, computer equipment and a computer readable storage medium.
Background
Target matching is a key step in dynamic target tracking. Divided by feature type, the two main current approaches are point feature matching and line feature matching, and target matching can be converted into a feature point set matching problem for the target. Video data from the island-encircling electronic fence may contain noise caused by the complex field imaging environment, weather conditions, and atmospheric cloud interference, as well as outliers (that is, the two extracted feature point sets may differ) caused by factors such as illumination change, viewing-angle change, or occlusion, which severely restricts the performance of existing algorithms such as RANSAC (Random Sample Consensus) when processing such video data. The presence of noise means that exact matching under ideal conditions no longer holds; to prevent overfitting, the matching algorithm needs to find the true positions of the data points and estimate the transformation function accordingly, which greatly increases the difficulty of correspondence search and transformation function estimation and increases the matching error. To deal with the outliers, the algorithm needs to match a subset of one point set to an appropriate subset of the other, and the number of points in each subset is not known in advance, making the problem harder still. In addition, point set matching is essentially an NP-complete combinatorial optimization problem: to obtain a good solution, a search is often required over a variable parameter space, and when the dimension of the parameter space is high, the search cost becomes very high. These practical problems in the video big data environment pose a huge challenge to the point set matching step in dynamic target tracking.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image matching method, an image matching device, a computer device, and a computer-readable storage medium, so as to address the huge challenge that the practical problems of a video big data environment pose to the point set matching step in dynamic target tracking.
According to a first aspect, an embodiment of the present invention provides an image matching method, including: respectively acquiring local area images of a reference image and an image to be matched; performing characteristic transformation on the local area image to obtain a normalized area image; respectively detecting the characteristic points inside and outside the normalized area image to obtain characteristic points; matching the characteristic points of the reference image and the image to be matched according to the characteristic points inside and outside the normalized area image to obtain a matching result; and determining the matching result of the image to be matched and the reference image according to the matching result of the characteristic points.
Optionally, the obtaining local area images of the reference image and the image to be matched respectively includes: acquiring a reference image and an image to be matched; respectively carrying out smoothing processing and down-sampling processing on the reference image and the image to be matched by utilizing Gaussian convolution kernels with different variances to form a Gaussian scale pyramid image; carrying out maximum stable extremum region detection on the pyramid images of each layer respectively to obtain a plurality of maximum stable extremum regions; removing repeated maximum stable extremum regions on each layer of pyramid images according to the positions and areas of the maximum stable extremum regions; and forming a reference image and a local area image of the image to be matched according to the maximum stable extremum area of each layer of pyramid images after the repeated maximum stable extremum areas are removed.
Optionally, the positions of the maximally stable extremal regions include a centroid position, a direction of the principal axis, and a perimeter, and the repeated maximally stable extremal regions on each layer of pyramid images are removed according to the positions and areas of the maximally stable extremal regions, including: judging whether the centroid distance of the two maximum stable extremal regions on the pyramid images of the adjacent layers is smaller than a first preset threshold value or not according to the centroid position of each maximum stable extremal region; if the centroid distance of the two maximum stable extremum regions on the adjacent pyramid images is smaller than a first preset threshold, judging whether the areas of the two maximum stable extremum regions meet a first relational expression; if the areas of the two maximum stable extremum regions meet the first relational expression, judging whether the directions and the circumferences of the main shafts of the two maximum stable extremum regions meet the second relational expression; and if the directions and the circumferences of the main shafts of the two maximum stable extremum regions meet the second relational expression, determining that the two maximum stable extremum regions are the same maximum stable extremum region, and rejecting any one of the two maximum stable extremum regions.
Optionally, matching the feature points of the reference image and the image to be matched according to the feature points outside the normalized region image includes: respectively clustering feature points outside the normalized area images of the reference image and the image to be matched into local areas of the reference image and the image to be matched; respectively determining the characteristic regions of the characteristic points outside the normalized area images of the reference image and the image to be matched according to the regional parameters of the local regions to which the characteristic points outside the normalized area images of the reference image and the image to be matched belong; performing characteristic transformation on the characteristic region to obtain a normalized characteristic region image; according to the normalized characteristic region image, carrying out characteristic description on characteristic points outside the normalized region image; and matching the characteristic points outside the normalized area image after the characteristic description of the reference image and the image to be matched by adopting a preset characteristic point matching method.
Optionally, clustering feature points outside the normalized region images of the reference image and the image to be matched into local regions of the reference image and the image to be matched respectively, including: respectively calculating the distance from each characteristic point outside the normalized area images of the reference image and the image to be matched to the central point of each local area; judging whether the ratio of the minimum distance from each characteristic point to the central point of each local area to the secondary minimum distance is smaller than a second preset threshold value or not; and if the ratio of the minimum distance from each characteristic point to the center point of each local area to the secondary minimum distance is smaller than a second preset threshold, determining that each characteristic point belongs to the local area corresponding to the minimum distance.
Optionally, if the ratio of the minimum distance from each feature point to the center point of each local region to the next minimum distance is greater than or equal to a second preset threshold, it is determined that each feature point belongs to the local region corresponding to the minimum distance and the local region corresponding to the next minimum distance.
Optionally, after matching the feature points of the reference image and the image to be matched according to the feature points inside and outside the normalized region, the image matching method further includes: purifying the matched initial matching points based on an epipolar geometric constraint method to obtain final matching points.
According to a second aspect, an embodiment of the present invention provides an image matching apparatus, including: the acquisition module is used for respectively acquiring local area images of the reference image and the image to be matched; the characteristic transformation module is used for carrying out characteristic transformation on the local area image to obtain a normalized area image; the detection module is used for respectively detecting the feature points inside and outside the normalized area image to obtain the feature points; the matching module is used for matching the characteristic points of the reference image and the image to be matched according to the characteristic points inside and outside the normalized area image to obtain a matching result; and the determining module is used for determining the matching result of the image to be matched and the reference image according to the matching result of the characteristic points.
According to a third aspect, an embodiment of the present invention provides a computer device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the processor, and the instructions are executed by the at least one processor to cause the at least one processor to perform the image matching method according to the first aspect or any of the embodiments of the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where computer instructions are stored, and the computer instructions are configured to cause a computer to execute the image matching method according to the first aspect or any implementation manner of the first aspect.
According to the image matching method, the image matching device, the computer equipment, and the computer-readable storage medium, local area images of a reference image and an image to be matched are respectively obtained, decomposing the whole image into a set of local area images and performing offline multi-scale feature extraction on the reference image and the image to be matched. Then, taking into account scene characteristics of the image such as angle, contrast, and environment, feature transformation is performed on the local area images to obtain normalized area images, so that the initial viewing-angle change between the processed local area images is reduced to scale and rotation change, resolving the deformation errors of the image caused by angle and scale problems. Next, feature points are detected both inside and outside the normalized area images, so that the feature points cover the whole image more completely. Finally, the feature points of the reference image and the image to be matched are matched according to the feature points inside and outside the normalized area images to obtain a feature point matching result, and the matching result of the image to be matched and the reference image is determined from it, greatly improving the efficiency and robustness of point set matching under the practical conditions of video big data, such as large viewing-angle changes and complex imaging environments. Experiments show that the image matching method is highly robust for matching between viewing-angle-change images and obtains a good matching effect even under large viewing-angle changes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart illustrating an image matching method according to an embodiment of the invention;
FIG. 2 is a diagram of a set of matching images for an experiment according to an embodiment of the present invention;
FIG. 3 is a diagram of a matched image set according to an embodiment of the present invention;
FIG. 4 is a block diagram of an image matching apparatus according to an embodiment of the present invention;
fig. 5 shows a hardware configuration block diagram of a computer apparatus of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the process of matching visible-light band images, although radiation differences exist to some degree between different visible-light band images, these differences are small and often linear, and can be handled well by common methods such as normalization. Thus, the main problem to be solved in matching visible-light images to visible-light images comes from the geometric distortion between image pairs. When there is a large change in viewing angle between two images, the same object appears with little similarity in the two images, and attempting to extract and match viewpoint-invariant features on such images is very difficult. To improve the matching effect, researchers have proposed many methods, but current image matching methods still struggle to obtain a robust matching result between viewing-angle-change images.
Aiming at the problem that existing image matching methods have difficulty obtaining a stable matching result for viewing-angle-change images, the invention provides an image matching method that, starting from the geometric distortion mechanism of viewing-angle-change images, matches stably under scale, rotation, and large viewing-angle changes. As shown in fig. 1, the image matching method includes:
s101, local area images of a reference image and an image to be matched are respectively obtained; specifically, there are many methods for obtaining the local area of the image, such as image segmentation, area feature extraction, and the like. However, since the image segmentation still remains to be solved, the local region of the image may be obtained by selecting the region feature extraction method according to the embodiment of the present invention. Since the purpose of local region detection is to perform subsequent image matching with large view angle change, the adopted local region feature extraction method needs to have strong robustness to view angle change of the image. The Maximum Stable Extremum Region (MSER) has strong robustness to the visual angle change of the image, so the MSER operator is selected to extract the local region characteristics of the image. In the original MSER algorithm, all maximally stable extremal regions are from a single image scale, so that when the image is blurred or the observation distance is changed, some MSERs will disappear or some new MSERs will be generated. At this time, the repetition rate of the local regions extracted from different images will be reduced, which is not favorable for the subsequent image matching. In order to overcome the problem, the embodiment of the invention adopts a multi-scale MSER extraction method to obtain the local area of the image.
S102, performing feature transformation on the local area image to obtain a normalized area image. Specifically, after local area images are extracted from an image, scene characteristics such as angle, contrast, and environment are considered and, combined with an analysis of the image viewing-angle change model, each local region of the image is fitted and normalized into a circular region according to its second moment. Any elliptical area EA_P (Elliptical Area) is normalized, based on the elliptical area image, into a viewpoint-invariant circular feature area CA_P (Circular Area). The initial viewing-angle change between the processed local area images is thus reduced to scale and rotation change, which resolves the deformation errors of the image caused by angle and scale problems.
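The ellipse-to-circle normalization of S102 can be illustrated with the standard second-moment (whitening) construction: fit the 2x2 second-central-moment matrix C of the region's pixels, then apply A = C^{-1/2}, which maps the fitted ellipse onto a circle (up to a residual rotation). A minimal sketch, assuming the region is given as pixel coordinates; function names are illustrative:

```python
import numpy as np

def second_moment_ellipse(ys, xs):
    """Centroid and 2x2 second-central-moment matrix of a pixel region."""
    pts = np.stack([xs, ys], axis=1).astype(float)
    c = pts.mean(axis=0)
    d = pts - c
    C = d.T @ d / len(pts)
    return c, C

def whitening_transform(C):
    """A = C^{-1/2}: maps the fitted ellipse onto a unit circle."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# usage: an anisotropic point cloud standing in for an elliptical region
rng = np.random.default_rng(1)
raw = rng.normal(size=(500, 2)) * np.array([5.0, 1.0])
c, C = second_moment_ellipse(raw[:, 1], raw[:, 0])
A = whitening_transform(C)
normalized = (raw - c) @ A.T  # now isotropic: second moment ~ identity
```

After this transform, only a scale and rotation ambiguity remains between corresponding regions, which matches the text's claim that viewing-angle change is reduced to scale and rotation change.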
S103, respectively detecting feature points inside and outside the normalized area image to obtain feature points; specifically, since the number of local regions detected on the image is often small, and the positioning accuracy of the fitted elliptical region is low, the embodiment of the present invention defines such image local regions as coarse features, and defines point features detected in the image local regions as fine features. In the embodiment of the invention, point characteristics are selected for image matching. The point features are not only more in number, but also higher in positioning accuracy. In the matching method of the embodiment of the invention, any point feature detection operator can be selected according to application requirements to carry out feature point detection.
Since the detected local areas of the image may not completely cover the entire image, performing point feature detection and matching only in the normalized area images may not yield homonymous features covering the entire image area. Therefore, feature points must also be detected and matched in image regions not covered by the local area images. In the embodiment of the invention, a DoG (Difference of Gaussian) operator with scale and rotation invariance may be selected to detect feature points both inside and outside the normalized region images.
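A minimal illustration of the DoG idea mentioned above (not the patent's implementation): the response is the difference of two Gaussian blurs, and keypoints are strict 8-neighbour extrema above a contrast threshold. A full DoG operator would also compare each candidate against the adjacent scales; the sigma values and threshold below are assumptions for the example.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions (reflect padding)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    pad = np.pad(img, r, mode="reflect")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)

def dog_extrema(img, sigma1=1.0, sigma2=1.6, thresh=1e-3):
    """DoG response and its strict 8-neighbour extrema within one scale."""
    dog = gaussian_blur(img, sigma1) - gaussian_blur(img, sigma2)
    keypoints = []
    for y in range(1, dog.shape[0] - 1):
        for x in range(1, dog.shape[1] - 1):
            patch = dog[y - 1:y + 2, x - 1:x + 2]
            v = dog[y, x]
            # strict extremum: v is the unique max or min of its 3x3 patch
            if abs(v) > thresh and (v == patch.max() or v == patch.min()) \
               and np.count_nonzero(patch == v) == 1:
                keypoints.append((y, x))
    return dog, keypoints
```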
S104, matching the feature points of the reference image and the feature points of the image to be matched according to the feature points inside and outside the normalized area image to obtain a matching result. Specifically, for feature points inside a normalized region image, the feature regions of those feature points may be determined from the region parameters of the normalized region; after the feature regions are determined, the feature points may be described using the Scale-Invariant Feature Transform (SIFT) method. After the feature descriptors are obtained, feature matching can be performed using the commonly used NNDR (Nearest Neighbor Distance Ratio) method. The NNDR feature matching method is as follows: the distances between feature descriptors are calculated, and for each feature the two closest features are found; if the ratio of the minimum distance to the second-minimum distance is less than a certain threshold, the feature and its nearest neighbor are regarded as an initial matching feature point pair.
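The NNDR test just described can be sketched in a few lines of NumPy. The 0.8 threshold is a commonly used value and an assumption here, as are the function name and the brute-force distance computation (a real system would use a k-d tree or similar for large descriptor sets):

```python
import numpy as np

def nndr_match(desc1, desc2, ratio=0.8):
    """Match rows of desc1 to rows of desc2 with the nearest-neighbour
    distance-ratio test: accept only if d_min < ratio * d_second_min."""
    matches = []
    # pairwise Euclidean distance matrix
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    for i, row in enumerate(d):
        order = np.argsort(row)
        if len(row) > 1 and row[order[0]] < ratio * row[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The ratio test rejects ambiguous descriptors whose nearest and second-nearest neighbours are about equally close, which is what makes it more selective than a plain nearest-neighbour rule.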
For the feature points outside the normalized region image, the circular feature regions of the feature points can be determined first, and then the feature description and matching are carried out by adopting the SIFT feature descriptor and the NNDR method.
And S105, determining a matching result of the image to be matched and the reference image according to the matching result of the feature points.
According to the image matching method provided by the embodiment of the invention, local area images of a reference image and an image to be matched are respectively obtained, the whole image is decomposed into a set of local area images, and offline multi-scale feature extraction is carried out on the reference image and the image to be matched; then, the scene characteristics of the image, such as angle, contrast, environment and the like are considered, the local area image is subjected to characteristic transformation to obtain a normalized area image, so that the initial visual angle change between the processed local area images is simplified into scale and rotation change, and the deformation error of the image caused by the angle and scale problems is solved; then, respectively detecting the characteristic points inside and outside the normalized area image to obtain the characteristic points, so that the characteristic points can better cover the whole image; and finally, matching the characteristic points of the reference image and the image to be matched according to the characteristic points inside and outside the normalized area image to obtain a matching result, and determining the matching result of the image to be matched and the reference image according to the matching result of the characteristic points, so that the efficiency and the robustness of point set matching under the practical problems of large visual angle change of video big data, complex imaging environment and the like are greatly improved. Experiments show that the image matching method has strong robustness on matching between the view angle change images, and can obtain a good matching effect even under the condition of great degree of view angle change.
In an alternative embodiment, in step S101, local area images of the reference image and the image to be matched are respectively obtained, and a multi-scale MSER extraction method may be adopted to obtain the local area of the image, which specifically includes: acquiring a reference image and an image to be matched; respectively carrying out smoothing processing and down-sampling processing on the reference image and the image to be matched by utilizing Gaussian convolution kernels with different variances to form a Gaussian scale pyramid image; carrying out maximum stable extremum region detection on the pyramid images of each layer respectively to obtain a plurality of maximum stable extremum regions; removing repeated maximum stable extremum regions on each layer of pyramid images according to the positions and areas of the maximum stable extremum regions; and forming a reference image and a local area image of the image to be matched according to the maximum stable extremum area of each layer of pyramid images after the repeated maximum stable extremum areas are removed.
Specifically, an original MSER algorithm may be used to extract MSER from each layer of pyramid image, so as to obtain a plurality of maximum stable extremal regions. The Gaussian scale pyramid images are formed on the reference image and the image to be matched, then the maximum stable extremum region is extracted from the pyramid image of each layer, and the repeated maximum stable extremum regions are removed, so that the maximum stable extremum regions on the reference image and the image to be matched can be accurately extracted, the repetition rate of the local region extracted from the image with fuzzy image or observation distance change is improved, and subsequent image matching is facilitated.
In an alternative embodiment, the positions of the maximally stable extremal regions include a centroid position, a direction of the principal axis, and a perimeter, and the repeated maximally stable extremal regions on each layer of pyramid image are removed according to the positions and areas of the maximally stable extremal regions, including: judging whether the centroid distance of the two maximum stable extremal regions on the pyramid images of the adjacent layers is smaller than a first preset threshold value or not according to the centroid position of each maximum stable extremal region; if the centroid distance of the two maximum stable extremum regions on the adjacent pyramid images is smaller than a first preset threshold, judging whether the areas of the two maximum stable extremum regions meet a first relational expression; if the areas of the two maximum stable extremum regions meet the first relational expression, judging whether the directions and the circumferences of the main shafts of the two maximum stable extremum regions meet the second relational expression; and if the directions and the circumferences of the main shafts of the two maximum stable extremum regions meet the second relational expression, determining that the two maximum stable extremum regions are the same maximum stable extremum region, and rejecting any one of the two maximum stable extremum regions.
Specifically, the following two rules may be employed for repeated MSER discrimination: first, on pyramid images of adjacent layers, the centroid distance of the two MSERs is smaller than a first preset threshold d; second, the areas S1 and S2 of the two MSERs on the adjacent pyramid images satisfy the first relational expression.
The first relation is:
(The first relational expression is reproduced in the original publication only as an equation image, BDA0002583957170000091; it relates the areas S1 and S2 of the two MSERs.)
wherein, S1 and S2 are the areas of two MSERs on the adjacent pyramid images.
In order to further enhance the reliability of repeated MSER discrimination, the following rule can be added on the basis of the two previous discrimination rules: the directions and the circumferences of the elliptic main shafts of the two MSERs on the adjacent layer pyramid images meet a second relational expression.
The second relation is:
(The second relational expression is reproduced in the original publication only as an equation image, BDA0002583957170000092; it relates the principal-axis directions θ1, θ2 and the perimeters L1, L2 of the two MSERs.)
where θ1 and θ2 are the directions of the ellipse major axes of the two MSERs on the adjacent pyramid images, and L1 and L2 are the perimeters of the elliptical regions of the two MSERs on the adjacent pyramid images.
If two maximally stable extremal regions on adjacent pyramid images simultaneously satisfy the three conditions above, they are considered repeated regions, and either one of the two may be rejected. Preferably, the maximally stable extremal region from the coarser-scale image is eliminated.
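The three-rule duplicate test above can be sketched as follows. Note the hedge: the patent's actual relational expressions survive only as equation images, so the min/max ratio forms for area and perimeter, the angle tolerance, and all threshold values below are plausible assumptions, not the patented formulas; the dict-based region representation is likewise illustrative.

```python
import numpy as np

def is_duplicate_mser(r1, r2, d_max=5.0, area_ratio=0.8,
                      angle_tol=np.deg2rad(10), perim_ratio=0.8):
    """Heuristic duplicate test for two MSERs on adjacent pyramid layers.
    Each region is a dict with centroid (x, y), area, principal-axis angle
    theta, and perimeter, all expressed in a common coordinate frame."""
    # rule 1: centroids close
    if np.hypot(r1["x"] - r2["x"], r1["y"] - r2["y"]) >= d_max:
        return False
    # rule 2 (assumed form of the first relation): areas similar
    if min(r1["area"], r2["area"]) / max(r1["area"], r2["area"]) < area_ratio:
        return False
    # rule 3 (assumed form of the second relation): axis directions
    # and perimeters similar
    if abs(r1["theta"] - r2["theta"]) > angle_tol:
        return False
    if min(r1["perim"], r2["perim"]) / max(r1["perim"], r2["perim"]) < perim_ratio:
        return False
    return True
```

When a pair is flagged, the text suggests preferring to discard the region from the coarser-scale layer.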
In an optional embodiment, in step S104, matching the feature points of the reference image and the image to be matched according to the feature points outside the normalized region image includes: respectively clustering feature points outside the normalized area images of the reference image and the image to be matched into local areas of the reference image and the image to be matched; respectively determining the characteristic regions of the characteristic points outside the normalized area images of the reference image and the image to be matched according to the regional parameters of the local regions to which the characteristic points outside the normalized area images of the reference image and the image to be matched belong; performing characteristic transformation on the characteristic region to obtain a normalized characteristic region image; according to the normalized characteristic region image, carrying out characteristic description on characteristic points outside the normalized region image; and matching the characteristic points outside the normalized area image after the characteristic description of the reference image and the image to be matched by adopting a preset characteristic point matching method.
Specifically, after feature point detection is performed on the image areas not covered by the normalized region images, the image coordinates of the feature points are obtained, and the feature points can be clustered into local regions of the reference image and the image to be matched according to these coordinates. The clustering rule is as follows: for each feature point outside the normalized region images of the reference image and the image to be matched, calculate its distance to the center point of each local region; judge whether the ratio of the minimum distance to the second-minimum distance is smaller than a second preset threshold; if the ratio is smaller than the threshold, assign the feature point to the local region corresponding to the minimum distance; otherwise, assign it to both the local region corresponding to the minimum distance and the one corresponding to the second-minimum distance. The elliptical feature region of each feature point is then determined from the ellipse parameters of the local region to which it belongs, normalized into a circular feature region, and described and matched with SIFT feature descriptors and the nearest-neighbor distance ratio (NNDR) method.
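The distance-ratio clustering rule above can be sketched as follows; the function name and the 0.8 default ratio threshold are illustrative assumptions, not values from the patent:

```python
import math

def assign_to_local_regions(point, centers, ratio_thresh=0.8):
    """Cluster one feature point (outside the normalized region images)
    into local regions, following the minimum / second-minimum distance
    ratio rule. `centers` holds at least two local-region center points.
    Returns the indices of the region(s) the point is assigned to."""
    dists = sorted(
        (math.hypot(point[0] - cx, point[1] - cy), i)
        for i, (cx, cy) in enumerate(centers))
    (d1, i1), (d2, i2) = dists[0], dists[1]
    if d1 / d2 < ratio_thresh:
        # Unambiguous: assign to the nearest local region only.
        return [i1]
    # Ambiguous: assign to both the nearest and second-nearest regions.
    return [i1, i2]
```

A point clearly closest to one center goes to that region alone; a point roughly equidistant from two centers is assigned to both, as the text prescribes.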
In an alternative embodiment, after the feature points of the local and non-local regions have been matched, the initial matching result may still contain false matches. The conventional remedy is to eliminate them with RANSAC (Random Sample Consensus) by estimating an affine transformation matrix between the images. In a high-resolution image pair, however, the whole pair does not obey a single affine transformation, so estimating one affine matrix would mistake many correct matches for false ones and reject them. To overcome this, after matching the feature points of the reference image and the image to be matched according to the feature points inside and outside the normalized regions, the image matching method further includes: purifying the initial matching points based on an epipolar geometric constraint, eliminating false matches by estimating the fundamental matrix between the image pair to obtain the final matching points.
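The epipolar purification step can be sketched as follows. This is a minimal illustration under stated assumptions: the fundamental matrix F is taken as already estimated (in practice it would itself be fitted robustly, e.g. with RANSAC over the eight-point algorithm), the algebraic residual |x2ᵀ F x1| stands in for more refined measures such as the Sampson distance, and the threshold is arbitrary.

```python
def epipolar_residual(F, p1, p2):
    """Algebraic epipolar residual |x2^T F x1| for a 3x3 fundamental
    matrix F (nested lists) and 2-D points in homogeneous form."""
    x1 = (p1[0], p1[1], 1.0)
    x2 = (p2[0], p2[1], 1.0)
    Fx1 = [sum(F[r][c] * x1[c] for c in range(3)) for r in range(3)]
    return abs(sum(x2[r] * Fx1[r] for r in range(3)))

def purify_matches(F, matches, thresh=1e-3):
    """Keep only the initial matches consistent with the estimated
    fundamental matrix; the survivors are the final matching points."""
    return [(p1, p2) for p1, p2 in matches
            if epipolar_residual(F, p1, p2) < thresh]
```

For a camera pair related by a pure horizontal translation, F reduces to [[0,0,0],[0,0,-1],[0,1,0]], and the residual becomes |y1 − y2|, so matches on the same scanline pass while vertically displaced ones are rejected.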
To further illustrate the image matching method of the present invention, an experiment is described. The data used for matching are pictures of the same ship extracted from adjacent cameras in video of the violin roundabout, as shown in fig. 2. A group of pictures is selected for the experiment, and SIFT matching is performed on the ship in the designated area with the image matching method of the embodiment of the invention; the matching result is shown in fig. 3. As can be seen from fig. 3, the method matches the ship in the designated area well.
An embodiment of the present invention further provides an image matching apparatus, as shown in fig. 4, including:
an obtaining module 21, configured to respectively obtain local region images of a reference image and an image to be matched; the specific implementation is described in detail in step S101 of the above embodiment and is not repeated here.
A feature transformation module 22, configured to perform feature transformation on the local region images to obtain normalized region images; the specific implementation is described in detail in step S102 of the above embodiment and is not repeated here.
A detection module 23, configured to detect feature points inside and outside the normalized region images, respectively, to obtain feature points; the specific implementation is described in detail in step S103 of the above embodiment and is not repeated here.
A matching module 24, configured to match the feature points of the reference image and the image to be matched according to the feature points inside and outside the normalized region images to obtain a matching result; the specific implementation is described in detail in step S104 of the above embodiment and is not repeated here.
A determining module 25, configured to determine a matching result between the image to be matched and the reference image according to the matching result of the feature points. The specific implementation is described in detail in step S105 of the above embodiment and is not repeated here.
According to the image matching apparatus provided by the embodiment of the invention, local region images of the reference image and the image to be matched are obtained respectively, so that each whole image is decomposed into a set of local region images and multi-scale features are extracted offline from both images. Scene characteristics of the images, such as viewing angle, contrast and environment, are then taken into account: transforming the local region images into normalized region images reduces the initial viewpoint change between corresponding local regions to a scale and rotation change, resolving the deformation errors caused by angle and scale differences. Feature points are then detected both inside and outside the normalized region images, so that they cover the whole image better. Finally, the feature points of the reference image and the image to be matched are matched according to the feature points inside and outside the normalized region images, and the matching result between the two images is determined from the feature point matches. This greatly improves the efficiency and robustness of point-set matching under the practical difficulties of video big data, such as large viewpoint changes and complex imaging environments. Experiments show that the image matching method is highly robust to matching between images with viewpoint changes, and obtains a good matching result even under large viewpoint changes.
An embodiment of the present invention further provides a computer device, as shown in fig. 5, including: a processor 31 and a memory 32, wherein the processor 31 and the memory 32 may be connected by a bus or other means, and fig. 5 illustrates the connection by the bus as an example.
The processor 31 may be a central processing unit (CPU). The processor 31 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 32 is a non-transitory computer readable storage medium, and can be used for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the image matching method in the embodiment of the present invention. The processor 31 executes various functional applications and data processing of the processor by running non-transitory software programs, instructions and modules stored in the memory 32, so as to implement the image matching method in the above-described method embodiment.
The memory 32 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 31, and the like. Further, the memory 32 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 32 may optionally include memory located remotely from the processor 31, and these remote memories may be connected to the processor 31 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more of the modules described above are stored in the memory 32 and, when executed by the processor 31, perform the image matching method in the embodiment shown in fig. 1.
The details of the computer device can be understood with reference to the corresponding related descriptions and effects in the embodiment shown in fig. 1, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program stored in a computer-readable storage medium, which, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD), etc.; the storage medium may also comprise a combination of the above kinds of memories.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. An image matching method, comprising:
respectively acquiring local region images of a reference image and an image to be matched;
performing feature transformation on the local region images to obtain normalized region images;
detecting feature points inside and outside the normalized region images, respectively, to obtain feature points;
matching the feature points of the reference image and the image to be matched according to the feature points inside and outside the normalized region images to obtain a matching result; and
determining a matching result between the image to be matched and the reference image according to the matching result of the feature points.
2. The image matching method according to claim 1, wherein the obtaining of the local area images of the reference image and the image to be matched respectively comprises:
acquiring a reference image and an image to be matched;
smoothing and down-sampling the reference image and the image to be matched, respectively, with Gaussian convolution kernels of different variances to form Gaussian scale pyramid images;
performing maximally stable extremal region detection on each pyramid layer to obtain a plurality of maximally stable extremal regions;
removing repeated maximally stable extremal regions from the pyramid layers according to the positions and areas of the maximally stable extremal regions; and
forming the local region images of the reference image and the image to be matched from the maximally stable extremal regions of each pyramid layer after the repeated regions are removed.
3. The image matching method of claim 2, wherein the position of each maximally stable extremal region comprises a centroid position, a major-axis direction and a perimeter, and wherein removing the repeated maximally stable extremal regions from each pyramid layer according to the positions and areas of the maximally stable extremal regions comprises:
judging, according to the centroid positions, whether the centroid distance between two maximally stable extremal regions on adjacent pyramid layers is smaller than a first preset threshold;
if the centroid distance is smaller than the first preset threshold, judging whether the areas of the two regions satisfy a first relational expression;
if the areas satisfy the first relational expression, judging whether the major-axis directions and perimeters of the two regions satisfy a second relational expression; and
if the major-axis directions and perimeters satisfy the second relational expression, determining that the two maximally stable extremal regions are the same region and rejecting either one of them.
4. The image matching method of claim 1, wherein matching the feature points of the reference image and the image to be matched according to the feature points outside the normalized region images comprises:
clustering the feature points outside the normalized region images of the reference image and the image to be matched into local regions of the reference image and the image to be matched, respectively;
determining feature regions for the feature points outside the normalized region images according to the region parameters of the local regions to which those feature points belong;
performing feature transformation on the feature regions to obtain normalized feature region images;
describing the feature points outside the normalized region images according to the normalized feature region images; and
matching the described feature points of the reference image and the image to be matched by a preset feature point matching method.
5. The image matching method of claim 4, wherein the clustering of the feature points outside the normalized region images of the reference image and the image to be matched into local regions of the reference image and the image to be matched, respectively, comprises:
calculating, for each feature point outside the normalized region images of the reference image and the image to be matched, its distance to the center point of each local region;
judging whether the ratio of the minimum distance to the second-minimum distance is smaller than a second preset threshold; and
if the ratio is smaller than the second preset threshold, determining that the feature point belongs to the local region corresponding to the minimum distance.
6. The image matching method of claim 5, wherein,
if the ratio of the minimum distance to the second-minimum distance is greater than or equal to the second preset threshold, the feature point is determined to belong to both the local region corresponding to the minimum distance and the local region corresponding to the second-minimum distance.
7. The image matching method according to any one of claims 1 to 6, further comprising, after matching the feature points of the reference image and the image to be matched according to the feature points inside and outside the normalized regions:
purifying the initial matching points based on an epipolar geometric constraint to obtain final matching points.
8. An image matching apparatus, comprising:
an acquisition module, configured to respectively acquire local region images of a reference image and an image to be matched;
a feature transformation module, configured to perform feature transformation on the local region images to obtain normalized region images;
a detection module, configured to detect feature points inside and outside the normalized region images, respectively, to obtain feature points;
a matching module, configured to match the feature points of the reference image and the image to be matched according to the feature points inside and outside the normalized region images to obtain a matching result; and
a determining module, configured to determine a matching result between the image to be matched and the reference image according to the matching result of the feature points.
9. A computer device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the image matching method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the image matching method according to any one of claims 1 to 7.
CN202010677868.0A 2020-07-14 2020-07-14 Image matching method and device, computer equipment and computer readable storage medium Active CN111915645B (en)


Publications (2)

Publication Number Publication Date
CN111915645A true CN111915645A (en) 2020-11-10
CN111915645B CN111915645B (en) 2021-08-27


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381785A (en) * 2020-11-12 2021-02-19 北京一起教育科技有限责任公司 Image detection method and device and electronic equipment
CN114866853A (en) * 2022-04-12 2022-08-05 咪咕文化科技有限公司 Live broadcast interaction method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050238198A1 (en) * 2004-04-27 2005-10-27 Microsoft Corporation Multi-image feature matching using multi-scale oriented patches
CN103310439A (en) * 2013-05-09 2013-09-18 浙江大学 Method for detecting maximally stable extremal region of image based on scale space
CN103400384A (en) * 2013-07-22 2013-11-20 西安电子科技大学 Large viewing angle image matching method capable of combining region matching and point matching
CN104700401A (en) * 2015-01-30 2015-06-10 天津科技大学 Image affine transformation control point selecting method based on K-Means clustering method
CN106529591A (en) * 2016-11-07 2017-03-22 湖南源信光电科技有限公司 Improved MSER image matching algorithm



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant