CN115620030A - Image matching method, device, equipment and medium - Google Patents

Image matching method, device, equipment and medium

Info

Publication number
CN115620030A
CN115620030A
Authority
CN
China
Prior art keywords
image
matching
parameters
light image
infrared light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211553079.1A
Other languages
Chinese (zh)
Other versions
CN115620030B (en)
Inventor
张天文
向巧罗
历小润
郭浩
陈璐
芦清
杨淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chint Group R & D Center Shanghai Co ltd
Zhejiang Zhengtai Zhiwei Energy Service Co ltd
Zhejiang University ZJU
Original Assignee
Chint Group R & D Center Shanghai Co ltd
Zhejiang Zhengtai Zhiwei Energy Service Co ltd
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chint Group R & D Center Shanghai Co ltd, Zhejiang Zhengtai Zhiwei Energy Service Co ltd, Zhejiang University ZJU
Priority to CN202211553079.1A
Publication of CN115620030A
Application granted
Publication of CN115620030B
Active legal status: Current
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The application discloses an image matching method, apparatus, device, and medium in the technical field of image matching, comprising the following steps: determining initial parameters based on the image imaging parameters of a visible light image and an infrared light image, and constructing matching parameters from the initial parameters; constructing an image optimization function from the visual fidelity and mutual information of the visible light image and the infrared light image; iteratively evaluating the image optimization function over the matching parameters to determine target matching parameters; and applying an affine transformation to the infrared light image with the target matching parameters to output a target infrared light matching image. Because the initial parameters are obtained from the imaging parameters and imaging characteristics of the images, the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images is avoided; measuring the similarity of the matching regions with the mutual information of the image matching regions combined with the visual fidelity of the fused image makes the subjective and objective evaluation of the matching result more consistent and yields higher matching accuracy.

Description

Image matching method, device, equipment and medium
Technical Field
The present invention relates to the field of image matching technologies, and in particular, to an image matching method, apparatus, device, and medium.
Background
In recent years, with the development of unmanned aerial vehicle (UAV) technology, UAV imaging has been used for inspection operations in many industries such as power grids, railways, and wind power. Because of factors such as season, terrain, and weather, a single sensor in a complex environment can provide only partial or inaccurate information, so inspection UAVs carry multiple sensors. A UAV can carry visible light, multispectral, hyperspectral, thermal infrared, and lidar sensors, each with different imaging characteristics; cooperative processing of the heterogeneous images can produce more accurate, more complete, and more reliable descriptions and judgments, improving the application effect. For example, a visible light image has high spatial resolution and rich background information but is easily affected by illumination or weather conditions, whereas an infrared sensor is less affected by illumination or weather and its images are relatively stable but often lack sufficient scene background detail; fusing infrared and low-light visible images can generate a composite image better suited to human observation or computer vision tasks. Accurate matching of the heterogeneous images is the basis of their cooperative processing.
Currently, the common methods for infrared and visible image matching are feature-based matching and coarse-to-fine matching that combines features and regions. Because the gray-level difference between an infrared image and a visible light image is pronounced, it is difficult to find enough high-precision feature matching pairs, and directly using a feature-based matching method yields insufficient accuracy. The coarse-to-fine method first performs coarse matching with feature matching and then refines the registration parameters with a gray-level-based method. Because optimization algorithms are generally sensitive to initial values, the matching result of this method is strongly influenced by the initial feature matching result; in addition, the similarity measure of the image matching regions is one of the key points of such methods. The similarity measures commonly used in existing image matching consider only a single kind of information and may disagree with subjective human evaluation. Although visual fidelity is an image quality evaluation index that agrees with subjective human evaluation, it has so far been used only in fused-image quality evaluation, not in measuring image matching results.
In conclusion, how to make the subjective and objective evaluation of the matching result between an infrared light image and a visible light image more consistent while achieving higher matching accuracy is a technical problem to be solved in the field.
Disclosure of Invention
In view of the above, an object of the present invention is to provide an image matching method, apparatus, device, and medium that make the subjective and objective evaluation of the matching result of an infrared light image and a visible light image more consistent and achieve higher matching accuracy. The specific scheme is as follows:
in a first aspect, the present application discloses an image matching method, comprising:
determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters;
constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters;
and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
Optionally, the constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image includes:
and constructing an image optimization function by using the visual fidelity of the visible light image and the infrared light image and any mutual information of normalized mutual information, regional mutual information or rotation invariant regional mutual information contained in the mutual information.
Optionally, before constructing the image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image, the method further includes:
and determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
Optionally, the determining a rectangular overlapping area between the visible light image and the infrared light image by an image scaling manner and an offset includes:
the method includes the steps of determining center position information of a rectangular overlapping area based on image information of the visible light image and the infrared light image, and determining the rectangular overlapping area between the visible light image and the infrared light image based on the center position information and an offset.
Optionally, the constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image includes:
determining visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region;
and constructing an image optimization function by using mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
Optionally, the determining the visual fidelity of the first overlapping area and the second overlapping area based on the sub-graph information corresponding to the first overlapping area and the second overlapping area includes:
respectively performing wavelet transformation on a first sub-image corresponding to the first overlapping area and a second sub-image corresponding to the second overlapping area to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors;
calculating a covariance matrix and a likelihood estimation of the wavelet coefficient vector;
constructing a respective sample vector based on window samples in the middle of the coefficient block to determine a respective variance;
calculating visual fidelity of the first and second overlapping regions based on the likelihood estimate, the variance, and a visual noise variance.
Optionally, the iteratively calculating the image optimization function based on multiple sets of matching parameters to determine target matching parameters includes:
and performing iterative computation on the image optimization function by utilizing any one of a particle swarm optimization algorithm, a quantum particle swarm optimization algorithm or an ant colony optimization algorithm based on the matching parameters to determine target matching parameters.
In a second aspect, the present application discloses an image matching apparatus, comprising:
the parameter determining module is used for determining initial parameters based on image imaging parameters of the visible light image and the infrared light image and constructing multiple groups of matching parameters by using the initial parameters;
the function construction module is used for constructing an image optimization function by utilizing the visual fidelity and mutual information of the visible light image and the infrared light image;
the target parameter determining module is used for performing iterative calculation on the image optimization function based on the multiple groups of matching parameters to determine target matching parameters;
and the image matching module is used for performing affine transformation on the infrared light image by using the target matching parameters and outputting a target infrared light matching image.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the image matching method disclosed in the foregoing.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program; wherein the computer program realizes the steps of the image matching method disclosed in the foregoing when being executed by a processor.
Thus, the application discloses an image matching method comprising the following steps: determining initial parameters based on the image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters from the initial parameters; constructing an image optimization function from the visual fidelity and mutual information of the visible light image and the infrared light image; iteratively evaluating the image optimization function over the matching parameters to determine target matching parameters; and applying an affine transformation to the infrared light image with the target matching parameters to output a target infrared light matching image. Because the initial parameters are obtained from the imaging parameters and imaging characteristics of the images, the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images is avoided; measuring the similarity of the matching regions with the mutual information of the image matching regions combined with the visual fidelity of the fused image makes the subjective and objective evaluation of the matching result more consistent and yields higher matching accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flow chart of an image matching method disclosed in the present application;
FIG. 2 is a fused image under initial parameters as disclosed herein;
FIG. 3 is a fused image under optimal parameters according to the present disclosure;
FIG. 4 is a flow chart of a specific image matching method disclosed in the present application;
FIG. 5 is a schematic diagram of an image matching apparatus according to the present disclosure;
FIG. 6 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In recent years, with the development of unmanned aerial vehicle (UAV) technology, UAV imaging has been used for inspection operations in many industries such as power grids, railways, and wind power. Because of factors such as season, terrain, and weather, a single sensor in a complex environment can provide only partial or inaccurate information, so inspection UAVs carry multiple sensors. A UAV can carry visible light, multispectral, hyperspectral, thermal infrared, and lidar sensors, each with different imaging characteristics; cooperative processing of the heterogeneous images can produce more accurate, more complete, and more reliable descriptions and judgments, improving the application effect. For example, a visible light image has high spatial resolution and rich background information but is easily affected by illumination or weather conditions, whereas an infrared sensor is slightly affected by illumination or weather and its images are relatively stable but often lack sufficient scene background detail; fusing infrared and low-light visible images can generate a composite image better suited to human observation or computer vision tasks. Exact matching of the heterogeneous images is the basis for their cooperative processing.
Currently, the common methods for infrared and visible image matching are feature-based matching and coarse-to-fine matching that combines features and regions. Because the gray-level difference between an infrared image and a visible light image is pronounced, it is difficult to find enough high-precision feature matching pairs, and directly using a feature-based matching method yields insufficient accuracy. The coarse-to-fine method first performs coarse matching with feature matching and then refines the registration parameters with a gray-level-based method. Because optimization algorithms are generally sensitive to initial values, the matching result of this method is strongly influenced by the initial feature matching result; in addition, the similarity measure of the image matching regions is one of the key points of such methods. The similarity measures commonly used in existing image matching consider only a single kind of information and may disagree with subjective human evaluation. Although visual fidelity is an image quality evaluation index that agrees with subjective human evaluation, it has so far been used only in fused-image quality evaluation, not in measuring image matching results.
Therefore, the image matching scheme disclosed in this application makes the subjective and objective evaluation of the matching result between an infrared light image and a visible light image more consistent and achieves higher matching accuracy.
Referring to fig. 1, an embodiment of the present invention discloses an image matching method, including:
step S11: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters.
In this embodiment, the visible light image $I_r$ is taken as the reference image and the infrared image $I_s$ as the image to be matched, and the initial matching parameters $a_1^{(0)}, b_1^{(0)}, c_1^{(0)}, a_2^{(0)}, b_2^{(0)}, c_2^{(0)}$ are obtained from the imaging parameters and imaging characteristics of the images. Specifically, the scale ratio $s$ of the visible light image to the infrared image is first calculated from the camera focal lengths and the unit pixel physical lengths of the two images:

$$ s = \frac{f_r / \mu_r}{f_s / \mu_s} $$

where $f_r$ and $f_s$ denote the camera focal lengths of the visible light image and the infrared image, respectively, and $\mu_r$ and $\mu_s$ denote the unit pixel physical lengths calculated from the camera parameters of the visible light image and the infrared image, respectively.
After the scale ratio of the visible light image to the infrared light image is obtained, the initial parameters are calculated from the scale ratio and the lengths and widths of the two images:

$$ a_1^{(0)} = s, \quad b_1^{(0)} = 0, \quad c_1^{(0)} = \frac{L_r - s L_s}{2}, \quad a_2^{(0)} = 0, \quad b_2^{(0)} = s, \quad c_2^{(0)} = \frac{W_r - s W_s}{2} $$

where $L_r$ and $W_r$ are the length and width of the visible light image, and $L_s$ and $W_s$ are the length and width of the infrared light image. Determining the initial parameters from the image imaging parameters and imaging characteristics in this way avoids the difficulty of coarse matching caused by insufficient matching feature points between the infrared light image and the visible light image.
In this embodiment, the initial parameters are regarded as the initial particle, and $n$ groups of matching parameters $a_{1i}^{(0)}, b_{1i}^{(0)}, c_{1i}^{(0)}, a_{2i}^{(0)}, b_{2i}^{(0)}, c_{2i}^{(0)}$, $i = 1, \cdots, n$, are constructed by random perturbation within a certain range of the initial particle; each group of matching parameters is taken as one member of the population, and the iteration count is initialized to $t = 0$. The random perturbation expands the matching parameter data by letting each parameter float up and down within a range, which increases the data volume and improves the robustness of the algorithm; the specific range is set according to the user's actual situation.
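As an illustration of this step, the following Python sketch computes the scale ratio and initial affine parameters from the camera values reported in Table 1 of the embodiment below, then builds a perturbed particle swarm; the perturbation half-widths are illustrative assumptions, not values prescribed by the patent (only the swarm size of 50 appears in the experiment).

import numpy as np

# Camera and image parameters from Table 1 (r: visible reference, s: infrared)
f_r, f_s = 8.0, 19.0            # focal lengths, mm
mu_r, mu_s = 1.9e-3, 17e-3      # unit pixel physical lengths, mm
L_r, W_r = 4000, 3000           # visible image length and width, pixels
L_s, W_s = 640, 512             # infrared image length and width, pixels

# Scale ratio between the visible and infrared images
s = (f_r / mu_r) / (f_s / mu_s)                  # = 3.7673...

# Initial affine parameters: pure scaling plus a centering translation
theta0 = np.array([s, 0.0, (L_r - s * L_s) / 2,  # a1, b1, c1
                   0.0, s, (W_r - s * W_s) / 2]) # a2, b2, c2

# Build n particles by random perturbation around the initial particle;
# the half-widths below are illustrative, not prescribed by the patent.
rng = np.random.default_rng(0)
n = 50
half_width = np.array([0.2, 0.05, 50.0, 0.05, 0.2, 50.0])
particles = theta0 + rng.uniform(-1.0, 1.0, size=(n, 6)) * half_width

Evaluating theta0 against the Table 1 values reproduces the initial parameters reported in Table 2 (3.7673, 0, 794.4598, 0, 3.7673, 535.5678).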
Step S12: and constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image.
In this embodiment, an image optimization function is constructed from the visual fidelity of the visible light image and the infrared light image together with any one of the mutual information measures: normalized mutual information, regional mutual information, or rotation-invariant regional mutual information. It should be understood that visual fidelity is an image quality evaluation parameter used in image quality assessment, while the similarity measures commonly used in image matching include the various mutual information measures and the structural similarity of the matched images; since the mutual information specifically includes normalized mutual information, regional mutual information, and rotation-invariant regional mutual information, any one of them can be selected as the mutual information term in this embodiment. The similarity measures commonly used in image matching may disagree with subjective human evaluation, whereas visual fidelity is an image quality evaluation index that agrees with subjective human evaluation.
Step S13: and performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters.
In this embodiment, based on the matching parameters, the image optimization function is iteratively evaluated with any one of a particle swarm optimization (PSO) algorithm, a quantum-behaved particle swarm optimization (QPSO) algorithm, or an ant colony optimization algorithm to determine the target matching parameters. That is, the image optimization function constructed from the mutual information combined with the visual fidelity of the fused image is iteratively solved with any one of the PSO, QPSO, or ant colony algorithms to obtain the optimal matching parameters, i.e., the target matching parameters. When the QPSO algorithm is used to find the target matching parameters, the maximum number of iterations is set to MAXITER = 100 and the given error to 0.0001. Denote by $x_i^{(t)} = \left(a_{1i}^{(t)}, b_{1i}^{(t)}, c_{1i}^{(t)}, a_{2i}^{(t)}, b_{2i}^{(t)}, c_{2i}^{(t)}\right)$ the current position of the i-th particle, by $p_i^{(t)}$ the current best position of the i-th particle, and by $g^{(t)}$ the global best position of the particle swarm; initialize $p_i^{(0)} = x_i^{(0)}$. The steps of determining the initial global best position, updating the position of each particle, updating the current best position of the i-th particle, and updating the best position of the population are then executed in turn. Determining the initial global best position may specifically include: taking the initial particles directly as the target matching parameters, i.e., taking the initial parameters as the target matching parameters, and correcting the infrared light image to obtain the corrected infrared light image. The mean best position of the population is then calculated from the best positions of the particles,

$$ m^{(t)} = \frac{1}{n} \sum_{i=1}^{n} p_i^{(t)}, $$

a random attractor position is calculated according to

$$ q_i^{(t)} = \varphi\, p_i^{(t)} + (1 - \varphi)\, g^{(t)}, \qquad \varphi \sim U(0, 1), $$

and the particle position is updated according to

$$ x_i^{(t+1)} = q_i^{(t)} \pm \alpha \left| m^{(t)} - x_i^{(t)} \right| \ln \frac{1}{u}, \qquad u \sim U(0, 1), $$

where $\alpha$ is the contraction-expansion coefficient. The iteration count is set to $t = t + 1$, and the steps of updating the position of each particle, updating the current best position of the i-th particle, and updating the best position of the population are repeated until the iteration count exceeds the maximum or the difference between the matching parameter values of two successive best matches is smaller than the given error; the global best position of the population is then output as the optimal matching parameters, i.e., the target matching parameters.
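For concreteness, here is a minimal Python sketch of one QPSO position update following the standard formulation above; the patent's exact update expressions are garbled in this text, so the contraction-expansion schedule (linearly decreasing from 1.0 to 0.5) is an assumption.

import numpy as np

def qpso_step(X, pbest, gbest, t, max_iter, rng):
    # Mean of the particles' personal best positions ("mbest")
    mbest = pbest.mean(axis=0)
    n, d = X.shape
    # Per-dimension random attractor between personal and global bests
    phi = rng.uniform(size=(n, d))
    q = phi * pbest + (1.0 - phi) * gbest
    # Contraction-expansion coefficient, decreased linearly from 1.0 to 0.5
    alpha = 1.0 - 0.5 * t / max_iter
    u = rng.uniform(size=(n, d))
    sign = np.where(rng.uniform(size=(n, d)) < 0.5, -1.0, 1.0)
    return q + sign * alpha * np.abs(mbest - X) * np.log(1.0 / u)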
Step S14: and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
In this embodiment, an affine transformation is applied to $I_s$ with the target matching parameters, and the corrected image to be matched $I_s'$ and the matched fusion result $I_f$ are output; the target infrared light matching image, i.e., the matched fusion image result, is thereby obtained.
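A minimal sketch of this correction step, assuming OpenCV is available; warp_infrared and the (a1, b1, c1, a2, b2, c2) packing are hypothetical names used for illustration.

import numpy as np
import cv2

def warp_infrared(ir_image, theta, out_shape):
    # theta packs (a1, b1, c1, a2, b2, c2); out_shape is (height, width)
    a1, b1, c1, a2, b2, c2 = theta
    M = np.float32([[a1, b1, c1],
                    [a2, b2, c2]])
    h, w = out_shape
    return cv2.warpAffine(ir_image, M, (w, h), flags=cv2.INTER_LINEAR)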
A specific embodiment is described below using real images as an example. A set of aerial photographs of photovoltaic modules in a rooftop scene was acquired with a DJI Zenmuse XT2 dual visible/thermal camera and subjected to geometric correction and barrel distortion correction; the relevant image acquisition parameters are shown in Table 1:
TABLE 1

Item                   Visible light image   Infrared image
Image resolution WxH   4000x3000             640x512
Focal length f         8 mm                  19 mm
Pixel pitch            1.9 μm                17 μm
The initial parameters obtained from the image imaging parameters and imaging characteristics, together with the matching parameters obtained by QPSO iterative optimization, are shown in Table 2:

TABLE 2

Parameter type         a1       b1        c1         a2        b2       c2         RMSE
Real parameters        3.6259   0.0110    839.2111   -0.0234   3.6326   563.4898   -
Initial parameters     3.7673   0         794.4598   0         3.7673   535.5678   21.5342
Optimized parameters   3.6324   -0.0058   837.6085   0.0273    3.6350   563.5631   0.6553
The result of fusing the calculated initial parameters with the visible light image is shown in FIG. 2; the modules show a large overlap offset and fail to correspond accurately. Following the step of constructing multiple sets of matching parameters, the initial particle position is set to (3.7673, 0, 794.4598, 0, 3.7673, 535.5678), the number of iterations to 100, the given error to 0.0001, and the number of particles in the swarm to 50; the initial particle swarm is constructed by the random perturbation algorithm, and the matching parameters obtained by QPSO iterative optimization are shown in Table 2. The real registration parameters were calculated from 20 feature point pairs with errors smaller than 0.5 pixel selected manually in ENVI; the root mean square error (RMSE) of the optimized parameters relative to the real parameters is markedly reduced compared with the initial parameters, which verifies the effectiveness of the proposed fine matching method in optimizing the matching point positions. The fusion result obtained with the optimized registration parameters is shown in FIG. 3; compared with the initial-parameter fusion in FIG. 2, the overlapping parts of the images matched by this method join smoothly, visually verifying the high precision of the method.
Thus, the present application discloses an image matching method comprising: determining initial parameters based on the image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters from the initial parameters; constructing an image optimization function from the visual fidelity and mutual information of the visible light image and the infrared light image; iteratively evaluating the image optimization function over the matching parameters to determine target matching parameters; and applying an affine transformation to the infrared light image with the target matching parameters to output a target infrared light matching image. Because the initial parameters are obtained from the image imaging parameters and imaging characteristics, the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images is avoided; measuring the similarity of the matching regions with the mutual information of the image matching regions combined with the visual fidelity of the fused image makes the subjective and objective evaluation of the matching result more consistent and yields higher matching accuracy.
Referring to fig. 4, the embodiment of the present invention discloses a specific image matching method, and compared with the previous embodiment, the present embodiment further describes and optimizes the technical solution. Specifically, the method comprises the following steps:
step S21: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters.
Step S22: and determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
In this embodiment, the center position information of the rectangular overlapping area is determined based on the image information of the visible light image and the infrared light image, and the rectangular overlapping area between the visible light image and the infrared light image is determined based on the center position information and the offset. It can be understood that the upper-left and lower-right coordinates $(x_L, y_L)$ and $(x_R, y_R)$ of the rectangular overlapping area are first determined from the length and width of the visible light image, the length and width of the infrared light image, and the offset:

$$ x_L = \frac{L_r - s L_s}{2} - p, \qquad y_L = \frac{W_r - s W_s}{2} - p, $$

$$ x_R = \frac{L_r + s L_s}{2} + p, \qquad y_R = \frac{W_r + s W_s}{2} + p, $$

where $L_r$ and $W_r$ are the length and width of the visible light image, $L_s$ and $W_s$ are the length and width of the infrared light image, and $p$ is the offset. The center position information of the overlapping area is then determined from the upper-left and lower-right coordinates, the range of the overlapping area is determined based on the center position information and the offset, and the first overlapping area and the second overlapping area are determined from the resulting overlap range in the visible light image and the infrared light image, respectively.
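The sketch below estimates the overlap rectangle by mapping the infrared image extent through the current affine parameters and padding by the offset p; since the patent's exact corner formulas are garbled in this text, the generalization from the centered formula above to arbitrary particle parameters, and the clipping details, are assumptions.

def overlap_rect(theta, vis_shape, ir_shape, p=10):
    # Map the infrared extent through the affine parameters, pad by p,
    # and clip to the visible image.
    a1, b1, c1, a2, b2, c2 = theta
    H_r, W_r = vis_shape
    H_s, W_s = ir_shape
    x_l = max(0, int(c1 - p))
    y_l = max(0, int(c2 - p))
    x_r = min(W_r, int(a1 * W_s + b1 * H_s + c1 + p))
    y_r = min(H_r, int(a2 * W_s + b2 * H_s + c2 + p))
    return x_l, y_l, x_r, y_r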
Step S23: determining visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region; and constructing an image optimization function by using the mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
In this embodiment, wavelet transformation is applied to the first sub-image corresponding to the first overlapping region and the second sub-image corresponding to the second overlapping region, respectively, to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors; the covariance matrix and the likelihood estimate of the wavelet coefficient vectors are calculated; respective sample vectors are constructed from window samples in the middle of the coefficient blocks to determine the respective variances; and the visual fidelity of the first overlapping region and the second overlapping region is calculated from the likelihood estimate, the variance, and the visual noise variance. It can be understood that $s$-level wavelet transformation is applied to the two sub-images corresponding to the overlapping regions of the visible light image and the matched fusion image, each wavelet subband is divided into $N$ non-overlapping coefficient blocks, and the wavelet coefficient vector sets $c_l = \{c_{l1}, c_{l2}, \cdots, c_{lN}\}$ and $d_l = \{d_{l1}, d_{l2}, \cdots, d_{lN}\}$, $l = 1, 2, \cdots, s$, are extracted. The covariance matrix is computed as

$$ C_U = \frac{1}{N} \sum_{j=1}^{N} c_{lj} c_{lj}^{T}. $$

Let $c_{lj}$ be a random vector in a Gaussian mixture model; its likelihood estimate is

$$ \hat{s}_{lj}^{2} = \frac{c_{lj}^{T} C_U^{-1} c_{lj}}{M_l}, $$

where $M_l$ is the dimension of the wavelet coefficient vector $c_{lj}$. Let $v$ denote independent stationary zero-mean white Gaussian noise with variance $\sigma_v^2$, giving the distortion model

$$ d = g\, c + v. $$

The $B \times B$ window samples in the middle of the $j$-th coefficient block of the two sub-images are recorded to form the vectors $C$ and $D$, respectively, and the fusion gain scalar $g$ and the variance $\sigma_v^2$ are estimated as

$$ \hat{g} = \frac{\sigma_{CD}}{\sigma_C^2}, \qquad \hat{\sigma}_v^2 = \left(1 - \rho^2\right) \sigma_D^2, $$

where $\rho$ denotes the correlation coefficient of $C$ and $D$. The visual fidelity of the overlapping region of the visible light image and the matched fusion image is then calculated as

$$ VIF = \frac{\displaystyle \sum_{l,j,k} \log_2 \left( 1 + \frac{\hat{g}_{lj}^2\, \hat{s}_{lj}^2\, \lambda_k}{\hat{\sigma}_v^2 + \sigma_n^2} \right)}{\displaystyle \sum_{l,j,k} \log_2 \left( 1 + \frac{\hat{s}_{lj}^2\, \lambda_k}{\sigma_n^2} \right)}, $$

where $\lambda_k$ are the eigenvalues of the covariance matrix $C_U$ and $\sigma_n^2$ denotes the visual noise variance, which can be taken as 0.1; its value has little influence on the result.
An image optimization function is constructed from the mutual information of the first overlapping area and the second overlapping area together with the visual fidelity. It can be understood that, after the visual fidelity metric is obtained, the image optimization function is constructed from the mutual information and the visual fidelity; the image optimization objective is defined as

$$ f_i^{(t)} = MI\left(I_r, I_{s,i}^{(t)}\right) + VIF\left(I_r, I_{f,i}^{(t)}\right), $$

where the visible light image is $I_r$ and the infrared image is $I_s$; a group of matching parameters is taken as one member of the population; $I_{s,i}^{(t)}$ is the corrected image to be matched obtained by affine transformation with the $i$-th population member after the $t$-th iteration, and $I_{f,i}^{(t)}$ is the corresponding fusion result image; $f_i^{(t)}$ ($i = 1, \cdots, n$) denotes the similarity function value obtained for the image matching after the $i$-th population member is iterated for the $t$-th time; $MI\left(I_r, I_{s,i}^{(t)}\right)$ is the mutual information of the overlapping rectangular areas of the visible light image $I_r$ and the corrected image to be matched $I_{s,i}^{(t)}$; and $VIF\left(I_r, I_{f,i}^{(t)}\right)$ is the visual fidelity of the overlapping rectangular areas of the visible light image $I_r$ and the matched fusion image $I_{f,i}^{(t)}$.
When normalized mutual information is adopted as the mutual information, its specific formula is

$$ NMI(R, F) = \frac{H(R) + H(F)}{H(R, F)}, $$

where $H(\cdot)$ denotes the entropy of an image and $H(R, F)$ is the joint entropy of the two images.
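A short sketch of the optimization function follows; it assumes the unweighted sum of normalized mutual information and visual fidelity reconstructed above (the exact combination formula is garbled in this text) and reuses the vif sketch from the previous step.

import numpy as np

def nmi(a, b, bins=64):
    # NMI(A, B) = (H(A) + H(B)) / H(A, B) from a joint gray-level histogram
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return (H(px) + H(py)) / H(p)

def objective(vis_roi, ir_roi, fused_roi):
    # f = NMI(visible overlap, corrected infrared overlap)
    #   + VIF(visible overlap, fused overlap); unweighted sum assumed
    return nmi(vis_roi, ir_roi) + vif(vis_roi, fused_roi)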
Step S24: and performing iterative computation on the image optimization function based on the matching parameters, and determining target matching parameters.
In this embodiment, the initial global best position is determined first: the initial particles are taken as the matching parameters, i.e., the initial parameters are taken as the target matching parameters, and the corresponding coordinate position information is determined from the initial particle $x_i^{(0)}$ with the calculation formulas for the upper-left and lower-right coordinates of the rectangular overlapping area given above. Normalized mutual information is adopted as the gray-level statistical similarity to calculate the similarity function value $f_i^{(0)}$ of the matched images, and the initial global best position $g^{(0)}$ is the particle position with the largest similarity function value. Next, the position of each particle is updated and the current best position of the $i$-th particle is updated: the optimization function value of the matched images is calculated and each optimization function value is compared with the previous one; if the current optimization function value is larger, the current position of the particle is taken as its current best position, otherwise the previous best position is retained, thereby determining the current best position of the $i$-th particle. Specifically, for each particle, its current best position $p_i^{(t)}$ is used as the matching parameters to correct the infrared light image $I_s$ and obtain the corrected infrared light image $I_{s,i}^{(t)}$; the upper-left and lower-right coordinates of the overlapping area are determined according to the formulas above, and the image area of the rectangular overlapping area is used to calculate the visual fidelity of the visible light image and the fusion image: $s$-level wavelet transformation is applied to the sub-images corresponding to the overlapping regions of the two images, and the corresponding visual fidelity is calculated from the wavelet transform coefficients together with the likelihood estimate, variance, and other parameters of the Gaussian mixture model. The optimization function value of the matched images is then determined from the visual fidelity and the normalized mutual information of the overlapping area of the visible light image and the corrected infrared light image: if $f_i^{(t)} > f\left(p_i^{(t-1)}\right)$, then $p_i^{(t)} = x_i^{(t)}$; otherwise $p_i^{(t)} = p_i^{(t-1)}$. The best position of the population is updated next: the candidate global best $g^{(t)} = \arg\max_{p_i^{(t)}} f\left(p_i^{(t)}\right)$ is used as the matching parameters to correct the infrared image and obtain the corrected infrared image, the upper-left and lower-right coordinates of the overlapping area are determined, and the optimization function value $f\left(g^{(t)}\right)$ is calculated; if $f\left(g^{(t)}\right) < f\left(g^{(t-1)}\right)$, the previous global best is retained. The process of confirming the particle positions and the best position of the population is repeated until the number of iterations exceeds the preset number of iterations or the difference between the similarity function values of two successive best matches is smaller than the given error, and the global best position $g$ of the population is output as the target matching parameters.
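Tying the pieces together, here is a hypothetical driver for the fine-matching loop; evaluate uses a simple average as a stand-in for the patent's fusion step, and all helper names (warp_infrared, overlap_rect, objective, qpso_step, particles) come from the earlier sketches.

import numpy as np

def evaluate(theta, vis, ir):
    # Warp, crop the overlap, fuse (simple average as a stand-in for the
    # patent's fusion step), and score with the combined objective.
    warped = warp_infrared(ir, theta, vis.shape)
    x_l, y_l, x_r, y_r = overlap_rect(theta, vis.shape, ir.shape)
    if x_r - x_l < 16 or y_r - y_l < 16:
        return -np.inf                       # degenerate overlap
    v = vis[y_l:y_r, x_l:x_r].astype(float)
    w = warped[y_l:y_r, x_l:x_r].astype(float)
    return objective(v, w, 0.5 * (v + w))

def match(vis, ir, particles, max_iter=100, eps=1e-4):
    rng = np.random.default_rng(1)
    X = particles.copy()
    pbest = X.copy()
    pf = np.array([evaluate(th, vis, ir) for th in pbest])
    gbest, gf = pbest[pf.argmax()].copy(), pf.max()
    for t in range(1, max_iter + 1):
        X = qpso_step(X, pbest, gbest, t, max_iter, rng)
        f = np.array([evaluate(th, vis, ir) for th in X])
        better = f > pf
        pbest[better], pf[better] = X[better], f[better]
        prev = gf
        gbest, gf = pbest[pf.argmax()].copy(), pf.max()
        if abs(gf - prev) < eps:             # successive bests converged
            break
    return gbest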
Step S25: and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
In this way, the initial matching parameters are obtained from the image imaging parameters and imaging characteristics, avoiding the difficulty of coarse matching caused by insufficient infrared and visible light image matching feature points; and by exploiting the coaxial imaging characteristic of the unmanned aerial vehicle and estimating the image matching overlap area through scaling and translation, the computational complexity of determining the overlap area can be reduced.
Referring to fig. 5, an embodiment of the present invention further discloses an image matching apparatus, which includes:
a parameter determining module 11, configured to determine initial parameters based on image imaging parameters of the visible light image and the infrared light image, and construct matching parameters using the initial parameters;
a function constructing module 12, configured to construct an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
a target parameter determining module 13, configured to perform iterative computation on the image optimization function based on the matching parameters, and determine target matching parameters;
and the image matching module 14 is configured to perform affine transformation on the infrared light image by using the target matching parameter, and output a target infrared light matching image.
Thus, the present application discloses an image matching method comprising: determining initial parameters based on the image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters from the initial parameters; constructing an image optimization function from the visual fidelity and mutual information of the visible light image and the infrared light image; iteratively evaluating the image optimization function over the matching parameters to determine target matching parameters; and applying an affine transformation to the infrared light image with the target matching parameters to output a target infrared light matching image. Because the initial parameters are obtained from the image imaging parameters and imaging characteristics, the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images is avoided; measuring the similarity of the matching regions with the mutual information of the image matching regions combined with the visual fidelity of the fused image makes the subjective and objective evaluation of the matching result more consistent and yields higher matching accuracy.
Further, an electronic device is disclosed in the embodiments of the present application, and fig. 6 is a block diagram of an electronic device 20 according to an exemplary embodiment, which should not be construed as limiting the scope of the application.
Fig. 6 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. Wherein, the memory 22 is used for storing a computer program, and the computer program is loaded and executed by the processor 21 to implement the relevant steps in the image matching method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in this embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to acquire external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
The processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in a wake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 21 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 21 may further include an AI (Artificial Intelligence) processor for processing a calculation operation related to machine learning.
In addition, the storage 22 is used as a carrier for resource storage, and may be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc., and the resources stored thereon may include an operating system 221, a computer program 222, etc., and the storage manner may be a transient storage or a permanent storage.
The operating system 221 is used for managing and controlling each hardware device and the computer program 222 on the electronic device 20, so as to realize the operation and processing of the mass data 223 in the memory 22 by the processor 21, and may be Windows Server, Netware, Unix, Linux, and the like. The computer program 222 may further include a computer program that can be used to perform other specific tasks, in addition to the computer program that can be used to perform the image matching method disclosed in any of the foregoing embodiments and executed by the electronic device 20. The data 223 may include data received by the electronic device and transmitted from an external device, or may include data collected by the input/output interface 25 itself.
Further, the present application also discloses a computer readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the image matching method disclosed in the foregoing. For the specific steps of the method, reference may be made to the corresponding contents disclosed in the foregoing embodiments, which are not described herein again.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same or similar parts between the embodiments are referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The image matching method, device, apparatus and medium provided by the present invention are described in detail above, and the principle and the implementation of the present invention are explained in this document by applying specific examples, and the description of the above examples is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. An image matching method, comprising:
determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters;
constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters;
and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
2. The image matching method of claim 1, wherein the constructing an image optimization function using the visual fidelity of the visible light image and the infrared light image and mutual information comprises:
and constructing the image optimization function by using the visual fidelity of the visible light image and the infrared light image and any mutual information of normalized mutual information, region mutual information or rotation-invariant region mutual information contained in the mutual information.
3. The image matching method of claim 1, wherein before constructing the image optimization function using the visual fidelity of the visible light image and the infrared light image and the mutual information, the method further comprises:
and determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
4. The image matching method according to claim 3, wherein the determining the rectangular overlapping area between the visible light image and the infrared light image by the image scaling manner and the offset amount comprises:
the method includes the steps of determining center position information of a rectangular overlapping area based on image information of the visible light image and the infrared light image, and determining the rectangular overlapping area between the visible light image and the infrared light image based on the center position information and an offset.
5. The image matching method according to claim 3, wherein the constructing an image optimization function using the visual fidelity and mutual information of the visible light image and the infrared light image comprises:
determining visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region;
and constructing an image optimization function by using the mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
6. The image matching method of claim 5, wherein the determining the visual fidelity of the first overlapping region and the second overlapping region based on the corresponding sub-image information of the first overlapping region and the second overlapping region comprises:
performing wavelet transformation on a first sub-image corresponding to the first overlapping region and a second sub-image corresponding to the second overlapping region respectively to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors;
calculating a covariance matrix and a likelihood estimate of the wavelet coefficient vector;
constructing respective sample vectors based on window samples at the center of each coefficient block to determine respective variances;
calculating a visual fidelity of the first overlap region and the second overlap region based on the likelihood estimate, the variance, and a visual noise variance.
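Claim 6 computes visual fidelity in the wavelet domain from per-block covariances, likelihood estimates, and a visual noise variance. The sketch below is a heavily simplified single-level approximation in the spirit of Sheikh and Bovik's VIF (cited among the non-patent references): it estimates one gain and one noise variance per detail band rather than per coefficient block, and the db2 wavelet and noise variance of 2.0 are conventional assumptions, not patent values.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def vif_wavelet(ref, dist, sigma_n_sq=2.0):
    """Single-level, per-band approximation of visual information fidelity."""
    ref, dist = ref.astype(float), dist.astype(float)
    num = den = 0.0
    # pywt.dwt2 returns (approximation, (horizontal, vertical, diagonal))
    for band_r, band_d in zip(pywt.dwt2(ref, 'db2')[1],
                              pywt.dwt2(dist, 'db2')[1]):
        var_r = band_r.var()
        cov = np.mean((band_r - band_r.mean()) * (band_d - band_d.mean()))
        g = cov / (var_r + 1e-10)                    # distortion-channel gain
        sv_sq = max(band_d.var() - g * cov, 1e-10)   # additive-noise variance
        num += np.log2(1.0 + g * g * var_r / (sv_sq + sigma_n_sq))
        den += np.log2(1.0 + var_r / sigma_n_sq)
    return num / (den + 1e-10)
```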
7. The image matching method of claim 1, wherein performing iterative computation on the image optimization function based on the matching parameters to determine the target matching parameters comprises:
performing iterative computation on the image optimization function based on the matching parameters by using any one of a particle swarm optimization algorithm, a quantum particle swarm optimization algorithm, or an ant colony optimization algorithm, to determine the target matching parameters.
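Claim 7 permits any of three swarm optimizers; below is a compact particle swarm sketch that maximizes the matching objective over box-bounded parameters. Swarm size, inertia w, and the cognitive/social weights c1 and c2 are textbook defaults, not values taken from the patent.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `objective` over box-bounded matching parameters.
    `bounds` is a list of (low, high) pairs, one per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([objective(p) for p in pos])
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# Usage sketch with the hypothetical helpers above; the scale, rotation,
# and translation bounds are illustrative, not taken from the patent.
# best_params, best_score = pso(
#     lambda p: score_candidate(p),  # e.g. warp, crop overlaps, then score
#     bounds=[(0.8, 1.2), (-0.3, 0.3), (-50, 50), (-50, 50)])
```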
8. An image matching apparatus, characterized by comprising:
the parameter determining module is used for determining initial parameters based on image imaging parameters of the visible light image and the infrared light image and constructing matching parameters by using the initial parameters;
the function construction module is used for constructing an image optimization function by utilizing the visual fidelity and mutual information of the visible light image and the infrared light image;
the target parameter determining module is used for performing iterative calculation on the image optimization function based on the matching parameters to determine target matching parameters;
and the image matching module is used for carrying out affine transformation on the infrared light image by using the target matching parameters and outputting a target infrared light matching image.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the image matching method according to any one of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the image matching method according to any one of claims 1 to 7.
CN202211553079.1A 2022-12-06 2022-12-06 Image matching method, device, equipment and medium Active CN115620030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211553079.1A CN115620030B (en) 2022-12-06 2022-12-06 Image matching method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211553079.1A CN115620030B (en) 2022-12-06 2022-12-06 Image matching method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN115620030A true CN115620030A (en) 2023-01-17
CN115620030B CN115620030B (en) 2023-04-18

Family

ID=84880942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211553079.1A Active CN115620030B (en) 2022-12-06 2022-12-06 Image matching method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115620030B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915945A (en) * 2015-02-04 2015-09-16 中国人民解放军海军装备研究院信息工程技术研究所 Quality evaluation method without reference image based on regional mutual information
CN110084774A (en) * 2019-04-11 2019-08-02 江南大学 A kind of method of the gradient transmitting and minimum total variation blending image of enhancing
CN110148104A (en) * 2019-05-14 2019-08-20 西安电子科技大学 Infrared and visible light image fusion method based on significance analysis and low-rank representation
CN110505472A (en) * 2019-07-15 2019-11-26 武汉大学 A kind of H.265 ultra high-definition method for evaluating video quality
CN110555843A (en) * 2019-09-11 2019-12-10 浙江师范大学 High-precision non-reference fusion remote sensing image quality analysis method and system
US20210270755A1 (en) * 2018-06-29 2021-09-02 Universiteit Antwerpen Item inspection by dynamic selection of projection angle
CN113706406A (en) * 2021-08-11 2021-11-26 武汉大学 Infrared and visible light image fusion method based on feature space multi-classification countermeasure mechanism
CN114072818A (en) * 2019-06-28 2022-02-18 谷歌有限责任公司 Bayesian quantum circuit fidelity estimation
CN114298950A (en) * 2021-12-20 2022-04-08 扬州大学 Infrared and visible light image fusion method based on improved GoDec algorithm
WO2022116104A1 (en) * 2020-12-03 2022-06-09 华为技术有限公司 Image processing method and apparatus, and device and storage medium
CN115409879A (en) * 2022-08-24 2022-11-29 苏州国科康成医疗科技有限公司 Data processing method and device for image registration, storage medium and electronic equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
H.R. SHEIKH et al.: "Image information and visual quality" *
HEENA R. KHER: "Implementation of Image Registration for Satellite Images using Mutual Information and Particle Swarm Optimization Techniques" *
YANG YANCHUN; LI JIAO; WANG YANGPING: "A Review of Image Fusion Quality Evaluation Methods" *
NIU WEI; GUO SHIPING; SHI JIANGLIN; ZOU JIANHUA; ZHANG RONGZHI: "Image Quality Assessment via LoG-Domain Matching for Post-Processing of Adaptive Optics Imaging" *

Also Published As

Publication number Publication date
CN115620030B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
CN104077760A (en) Rapid splicing system for aerial photogrammetry and implementing method thereof
CN108229274B (en) Method and device for training multilayer neural network model and recognizing road characteristics
CN112183171A (en) Method and device for establishing beacon map based on visual beacon
CN109063549B (en) High-resolution aerial video moving target detection method based on deep neural network
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN116129020A (en) Novel live-action three-dimensional modeling method
Cao Applying image registration algorithm combined with CNN model to video image stitching
CN115240089A (en) Vehicle detection method of aerial remote sensing image
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN111444923A (en) Image semantic segmentation method and device under natural scene
CN111079826A (en) SLAM and image processing fused construction progress real-time identification method
CN113837134A (en) Wetland vegetation identification method based on object-oriented deep learning model and transfer learning
CN116258877A (en) Land utilization scene similarity change detection method, device, medium and equipment
CN115620030B (en) Image matching method, device, equipment and medium
CN114998630B (en) Ground-to-air image registration method from coarse to fine
CN113781375B (en) Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN115527050A (en) Image feature matching method, computer device and readable storage medium
CN114863235A (en) Fusion method of heterogeneous remote sensing images
CN113936047A (en) Dense depth map generation method and system
CN114972451A (en) Rotation-invariant SuperGlue matching-based remote sensing image registration method
CN113624223A (en) Indoor parking lot map construction method and device
Arevalo et al. Improving piecewise linear registration of high-resolution satellite images through mesh optimization
CN113344006A (en) Polarization image analysis method adopting learnable parameter fusion network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhang Tianwen; Xiang Luoqiao; Li Xiaorun; Guo Hao; Chen Lu; Lu Qing; Yang Miao

Inventor before: Zhang Tianwen; Xiang Qiaoluo; Li Xiaorun; Guo Hao; Chen Lu; Lu Qing; Yang Miao
