Disclosure of Invention
In view of the above, an object of the present invention is to provide an image matching method, apparatus, device, and medium, which can make subjective and objective evaluations of matching results of an infrared light image and a visible light image more consistent, and make matching accuracy higher. The specific scheme is as follows:
in a first aspect, the present application discloses an image matching method, comprising:
determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters;
constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters;
and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
Optionally, the constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image includes:
constructing an image optimization function by using the visual fidelity of the visible light image and the infrared light image together with any one of normalized mutual information, regional mutual information, or rotation-invariant regional mutual information as the mutual information.
Optionally, before constructing the image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image, the method further includes:
and determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
Optionally, the determining a rectangular overlapping area between the visible light image and the infrared light image by an image scaling manner and an offset includes:
the method includes the steps of determining center position information of a rectangular overlapping area based on image information of the visible light image and the infrared light image, and determining the rectangular overlapping area between the visible light image and the infrared light image based on the center position information and an offset.
Optionally, the constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image includes:
determining visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region;
and constructing an image optimization function by using mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
Optionally, the determining the visual fidelity of the first overlapping area and the second overlapping area based on the sub-graph information corresponding to the first overlapping area and the second overlapping area includes:
respectively performing wavelet transformation on a first sub-image corresponding to the first overlapping area and a second sub-image corresponding to the second overlapping area to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors;
calculating a covariance matrix and a likelihood estimation of the wavelet coefficient vector;
constructing a respective sample vector based on window samples in the middle of the coefficient block to determine a respective variance;
calculating visual fidelity of the first and second overlapping regions based on the likelihood estimate, the variance, and a visual noise variance.
Optionally, the iteratively calculating the image optimization function based on multiple sets of matching parameters to determine target matching parameters includes:
and performing iterative computation on the image optimization function by utilizing any one of a particle swarm optimization algorithm, a quantum particle swarm optimization algorithm or an ant colony optimization algorithm based on the matching parameters to determine target matching parameters.
In a second aspect, the present application discloses an image matching apparatus, comprising:
the parameter determining module is used for determining initial parameters based on image imaging parameters of the visible light image and the infrared light image and constructing multiple groups of matching parameters by using the initial parameters;
the function construction module is used for constructing an image optimization function by utilizing the visual fidelity and mutual information of the visible light image and the infrared light image;
the target parameter determining module is used for performing iterative calculation on the image optimization function based on the multiple groups of matching parameters to determine target matching parameters;
and the image matching module is used for performing affine transformation on the infrared light image by using the target matching parameters and outputting a target infrared light matching image.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the image matching method disclosed in the foregoing.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program; wherein the computer program realizes the steps of the image matching method disclosed in the foregoing when being executed by a processor.
Thus, the application discloses an image matching method, which comprises the following steps: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters; constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image. Therefore, the initial parameters are obtained through the imaging parameters and the imaging characteristics of the images, and the problem that rough matching is difficult to realize due to insufficient matching characteristic points of the infrared and visible light images is solved; the similarity of the matching regions is measured by utilizing the mutual information of the image matching regions and combining the visual fidelity of the fused images, so that the subjective and objective evaluation of the matching results is more consistent, and the matching precision is higher.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In recent years, with the development of unmanned aerial vehicle (UAV) technology, UAV imaging has been used for patrol inspection in many industries such as power grids, railways and wind power. Due to the influence of factors such as season, terrain and weather conditions, a single sensor in a complex environment can only provide partial or contaminated, inaccurate information, and therefore UAV inspection platforms carry multiple sensors. A UAV can carry visible light, multispectral, hyperspectral, thermal infrared and lidar sensors; the imaging characteristics of different sensors differ, and cooperative processing of the heterogeneous images can produce more accurate, more complete and more reliable description and judgment, improving the application effect. For example, the visible light image has higher spatial resolution and rich background information but is easily influenced by illumination or weather conditions, whereas the infrared sensor is only slightly influenced by illumination or weather conditions and its image is relatively stable but often lacks sufficient scene background detail; fusing the infrared image with a low-light visible light image can generate a composite image more suitable for human observation or computer vision tasks. Accurate matching of the heterogeneous images is the basis for their cooperative processing.
Currently, the common methods for infrared and visible image matching are mainly feature-based matching and coarse-to-fine matching that combines features and regions. Because the grey-level difference between the infrared image and the visible light image is obvious, it is difficult to find enough high-precision feature matching pairs, and the matching precision of a purely feature-based method is insufficient. The coarse-to-fine method first realizes coarse matching with feature matching and then optimizes the registration parameters with a grey-level-based method. Because optimization algorithms are generally sensitive to initial values, the matching result of this method is greatly influenced by the initial feature matching result; in addition, the similarity measure of the image matching regions is one of the key points of such methods. The similarity measurement indexes commonly used in existing image matching consider only a single kind of information, and the measurement may not conform to subjective human visual evaluation; although visual fidelity is an image quality evaluation index that conforms to subjective human evaluation, at present this index is only used for fused image quality evaluation and has not been used to measure image matching results.
Therefore, according to the image matching scheme disclosed by the application, subjective and objective evaluation of the matching result of the infrared light image and the visible light image can be more consistent, and the matching precision is higher.
Referring to fig. 1, an embodiment of the present invention discloses an image matching method, including:
step S11: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters.
In this embodiment, the visible light image I_r is taken as the reference image and the infrared image I_s as the image to be matched, and the initial matching parameters a1(0), b1(0), c1(0), a2(0), b2(0), c2(0) are obtained from the imaging parameters and imaging characteristics of the images. Specifically, the scale ratio s of the visible light image to the infrared image is first calculated from the camera focal lengths and the unit pixel physical lengths of the two images:

s = (f_r / μ_r) / (f_s / μ_s)

wherein f_r and f_s respectively represent the camera focal length of the visible light image and the camera focal length of the infrared image, and μ_r and μ_s respectively represent the unit pixel physical lengths calculated from the camera parameters of the visible light image and the infrared image.
After the scale ratio of the visible light image and the infrared light image is obtained, the initial parameters are calculated from the scale ratio, the length and width of the visible light image, and the length and width of the infrared light image:

a1(0) = b2(0) = s, b1(0) = a2(0) = 0,
c1(0) = (W_r - s·W_s) / 2, c2(0) = (H_r - s·H_s) / 2

wherein W_r is the length of the visible light image, H_r is the width of the visible light image, W_s is the length of the infrared light image, and H_s is the width of the infrared light image. Therefore, determining the initial parameters from the image imaging parameters and image imaging characteristics solves the problem that rough matching is difficult to realize due to insufficient matching feature points between the infrared light image and the visible light image.
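A minimal sketch of this initial-parameter computation follows. The centring form of c1(0), c2(0) and the assignment a1(0) = b2(0) = s are inferred from the initial-parameter row of Table 2 rather than stated explicitly, so treat the exact formulas as an assumption:

```python
def initial_matching_parameters(f_r, mu_r, W_r, H_r, f_s, mu_s, W_s, H_s):
    """Scale ratio s = (f_r/mu_r) / (f_s/mu_s), with the scaled infrared
    frame centred inside the visible frame to obtain the offsets."""
    s = (f_r / mu_r) / (f_s / mu_s)
    c1 = (W_r - s * W_s) / 2.0
    c2 = (H_r - s * H_s) / 2.0
    # (a1, b1, c1, a2, b2, c2): pure scale-and-shift initial guess
    return s, (s, 0.0, c1, 0.0, s, c2)

# Table 1 values: visible 4000x3000, f = 8 mm, pitch 1.9 um;
# infrared 640x512, f = 19 mm, pitch 17 um.
s, p0 = initial_matching_parameters(8e-3, 1.9e-6, 4000, 3000,
                                    19e-3, 17e-6, 640, 512)
print(round(s, 4), round(p0[2], 4))  # 3.7673 794.4598
```

These values reproduce the initial a1 and c1 of Table 2, which supports the reconstructed formulas.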
In this embodiment, the initial parameters are taken as the initial particle, and n groups of matching parameters a1i(0), b1i(0), c1i(0), a2i(0), b2i(0), c2i(0), i = 1, ⋯, n, are constructed by random perturbation within a certain range of the initial particle; each group of matching parameters is taken as one member of the population, and the number of iterations is initialized to t = 0. The random perturbation expands the matching parameter data, namely each value floats up and down within a range, so as to increase the data volume and improve the robustness of the algorithm; the specific range is set according to the actual situation of the user.
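The population construction can be sketched as follows; the ±5% perturbation range is a placeholder, since the text leaves the range to the user:

```python
import random

def perturb_population(p0, n, rel_range=0.05, seed=0):
    """Build n parameter groups by letting each initial value float up and
    down within a range (+/-5% here is an arbitrary placeholder; zero-valued
    parameters are perturbed on an absolute scale of rel_range)."""
    rng = random.Random(seed)
    population = []
    for _ in range(n):
        group = tuple(
            v + rng.uniform(-rel_range, rel_range) * (abs(v) if v != 0 else 1.0)
            for v in p0
        )
        population.append(group)
    return population

pop = perturb_population((3.7673, 0.0, 794.4598, 0.0, 3.7673, 535.5678), n=50)
print(len(pop), len(pop[0]))  # 50 6
```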
Step S12: and constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image.
In this embodiment, an image optimization function is constructed by using the visual fidelity of the visible light image and the infrared light image together with any one of normalized mutual information, regional mutual information, or rotation-invariant regional mutual information as the mutual information. It can be understood that visual fidelity is an image quality evaluation parameter applied in image quality evaluation methods, while the similarity measures commonly used in image matching include various forms of mutual information and the structural similarity of the matched images; the mutual information specifically includes normalized mutual information, regional mutual information and rotation-invariant regional mutual information, so any one of them can be selected as the mutual information parameter in this embodiment. The common similarity measures in image matching may not conform to subjective human visual evaluation, whereas visual fidelity is an image quality evaluation index that does conform to subjective human evaluation.
Step S13: and performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters.
In this embodiment, based on the matching parameters, the image optimization function is iteratively calculated by using any one of the particle swarm optimization (PSO) algorithm, the quantum-behaved particle swarm optimization (QPSO) algorithm, or the ant colony optimization algorithm, so as to determine the target matching parameters. It can be understood that the image optimization function constructed by combining the mutual information with the visual fidelity of the fused image is iteratively solved by any one of these algorithms to obtain the optimal matching parameters, i.e., the target matching parameters. When the QPSO algorithm is used, the maximum number of iterations is set to MAXITER = 100 and the given error to 0.0001; X_i(t) denotes the current position of the ith particle, P_i(t) denotes the current optimal position of the ith particle, and G(t) denotes the global optimal position of the particle swarm, with the initialization P_i(0) = X_i(0). The steps of determining an initial global optimal position, updating the position of each particle, updating the current optimal position of the ith particle, and updating the optimal position of the population are executed in turn. Determining the initial global optimal position may specifically include: taking the initial particle directly as the target matching parameters, namely taking the initial parameters as the target matching parameters, and correcting the infrared light image with them to obtain a corrected infrared light image. In each iteration, the mean best position mbest(t) of the population is calculated from the optimal positions of all particles, a random position is calculated as

p_ij = φ·P_ij(t) + (1 - φ)·G_j(t)

and the particle position is then updated as

X_ij(t+1) = p_ij ± β·|mbest_j(t) - X_ij(t)|·ln(1/u)

wherein φ and u are random numbers uniformly distributed on (0, 1) and β is the contraction-expansion coefficient. The number of iterations is set to t = t + 1, and the steps of updating the position of each particle, updating the current optimal position of the ith particle, and updating the optimal position of the population are repeated until the number of iterations exceeds the maximum, or the difference between the matching parameter values of two successive optima is smaller than the given error; the global optimal position of the population is then output as the optimal matching parameters, namely the target matching parameters.
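The QPSO loop described above can be sketched as follows (maximisation form). The linearly decreasing contraction-expansion coefficient beta and the toy quadratic fitness are common illustrative choices, not taken from the text:

```python
import math, random

def qpso(fitness, init_pop, max_iter=100, eps=1e-4, seed=1):
    """Quantum-behaved PSO: mbest is the mean of personal bests,
    p = phi*P + (1 - phi)*G, and X = p +/- beta*|mbest - X|*ln(1/u)."""
    rng = random.Random(seed)
    X = [list(x) for x in init_pop]
    P = [x[:] for x in X]                       # personal best positions
    G = max(P, key=fitness)[:]                  # initial global best
    prev_best = fitness(G)
    for t in range(max_iter):
        beta = 1.0 - 0.5 * t / max_iter         # contraction-expansion coefficient
        dim = len(G)
        mbest = [sum(p[d] for p in P) / len(P) for d in range(dim)]
        for i, x in enumerate(X):
            for d in range(dim):
                phi, u = rng.random(), rng.random()
                p = phi * P[i][d] + (1.0 - phi) * G[d]
                step = beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
                x[d] = p + step if rng.random() < 0.5 else p - step
            if fitness(x) > fitness(P[i]):
                P[i] = x[:]                     # update personal best
        G = max(P, key=fitness)[:]              # update global best
        cur_best = fitness(G)
        if abs(cur_best - prev_best) < eps:     # stop on small improvement
            break
        prev_best = cur_best
    return G

# Toy check: maximise a concave quadratic with optimum at (3, -1).
rng = random.Random(2)
swarm = [[rng.uniform(-10, 10), rng.uniform(-10, 10)] for _ in range(20)]
best = qpso(lambda v: -(v[0] - 3) ** 2 - (v[1] + 1) ** 2, swarm, eps=0.0)
```

In the patent's setting the fitness would be the image optimization function F evaluated on the overlap region, and each particle would be a six-parameter group.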
Step S14: and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
In this embodiment, the target matching parameters are used to perform affine transformation on the infrared light image I_s, and the corrected image to be matched I'_s and the matched fusion result are output; the target infrared light matching image, namely the matched fusion image result, is thereby obtained.
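As a sketch of this final step, assuming the affine model x_r = a1·x_s + b1·y_s + c1, y_r = a2·x_s + b2·y_s + c2 (the model form is inferred from the six parameters in Table 2, not spelled out in the text), the infrared image can be warped into the reference frame with nearest-neighbour inverse mapping:

```python
import numpy as np

def warp_infrared(ir, params, out_shape):
    """Warp the infrared image into the visible (reference) frame.
    Reference pixels are mapped back to infrared coordinates by
    inverting the 2x2 linear part; nearest-neighbour sampling."""
    a1, b1, c1, a2, b2, c2 = params
    H, W = out_shape
    out = np.zeros(out_shape, dtype=ir.dtype)
    Ainv = np.linalg.inv(np.array([[a1, b1], [a2, b2]], dtype=float))
    ys, xs = np.mgrid[0:H, 0:W]
    src = Ainv @ np.stack([xs.ravel() - c1, ys.ravel() - c2])
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    ok = (sx >= 0) & (sx < ir.shape[1]) & (sy >= 0) & (sy < ir.shape[0])
    out[ys.ravel()[ok], xs.ravel()[ok]] = ir[sy[ok], sx[ok]]
    return out
```

A production implementation would use bilinear interpolation, but nearest-neighbour keeps the mapping direction explicit.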
The following describes a specific embodiment by taking a real pair of aerial images as an example. A set of photovoltaic-module aerial photographs of a rooftop scene captured by a DJI Zenmuse XT2 dual visible/thermal camera is adopted, and geometric correction and barrel-distortion correction are applied to the images. The relevant image acquisition parameters are shown in Table 1:
TABLE 1

Item                 | Visible light image | Infrared image
Image resolution WxH | 4000x3000           | 640x512
Focal length f       | 8 mm                | 19 mm
Pixel pitch          | 1.9 μm              | 17 μm
The initial parameters obtained from the image imaging parameters and image imaging characteristics, and the matching parameters obtained by QPSO iterative optimization, are shown in Table 2:

TABLE 2

Parameter type       | a1     | b1      | c1       | a2      | b2     | c2       | RMSE
Real parameters      | 3.6259 | 0.0110  | 839.2111 | -0.0234 | 3.6326 | 563.4898 | -
Initial parameters   | 3.7673 | 0       | 794.4598 | 0       | 3.7673 | 535.5678 | 21.5342
Optimized parameters | 3.6324 | -0.0058 | 837.6085 | 0.0273  | 3.6350 | 563.5631 | 0.6553
The result of the fusion experiment performed with the calculated initial parameters and the visible light image is shown in fig. 2; it can be seen that the photovoltaic modules have a large overlap offset and fail to correspond accurately. Following the step of constructing multiple sets of matching parameters, the initial particle position is set to (3.7673, 0, 794.4598, 0, 3.7673, 535.5678), the number of iterations to 100, the given error to 0.0001 and the number of particles in the swarm to 50; the initial particle swarm is constructed by the random perturbation algorithm, and the matching parameters obtained by QPSO iterative optimization are shown in Table 2. The real registration parameters are calculated from 20 feature point pairs with errors smaller than 0.5 pixel selected manually in ENVI. The root mean square error (RMSE) between the optimized parameters and the real parameters is obviously reduced compared with the initial parameters, which verifies the effectiveness of the proposed fine matching method in optimizing the matching point positions. The result obtained by using the optimized registration parameters in the fusion experiment is shown in fig. 3; compared with the initial-parameter fusion in fig. 2, the overlapping part of the image matched by this method is smoothly connected, which visually verifies the high precision of the method.
Thus, the present application discloses an image matching method, comprising: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters; constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image. Therefore, the initial parameters are obtained through the image imaging parameters and the imaging characteristics, and the problem that the rough matching is difficult to realize due to insufficient matching characteristic points of the infrared and visible light images is solved; the similarity of the matching regions is measured by utilizing the mutual information of the image matching regions and combining the visual fidelity of the fused images, so that the subjective and objective evaluation of the matching results is more consistent, and the matching precision is higher.
Referring to fig. 4, the embodiment of the present invention discloses a specific image matching method, and compared with the previous embodiment, the present embodiment further describes and optimizes the technical solution. Specifically, the method comprises the following steps:
step S21: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters.
Step S22: and determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
In this embodiment, the center position information of the rectangular overlapping area is determined based on the image information of the visible light image and the infrared light image, and the rectangular overlapping area between the visible light image and the infrared light image is determined based on the center position information and the offset. It can be understood that the upper-left and lower-right corner coordinates (x_L, y_L), (x_R, y_R) of the rectangular overlapping area are first determined from the length and width of the visible light image, the length and width of the infrared light image, and the offset:

x_L = max(0, c1), y_L = max(0, c2),
x_R = min(W_r, s·W_s + c1), y_R = min(H_r, s·H_s + c2)

wherein W_r is the length of the visible light image, W_s is the length of the infrared light image, H_r is the width of the visible light image, H_s is the width of the infrared light image, s is the scale ratio, and the offset is given by the translation components (c1, c2) of the matching parameters. Then, the center position information of the overlapping area is determined from the upper-left and lower-right corner coordinates, the range of the overlapping area is determined based on the center position information and the offset, and the first overlapping area and the second overlapping area are determined in the visible light image and the infrared light image respectively according to the determined range.
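One plausible reading of the corner computation (the exact formula is not fully specified in the text) is the intersection of the reference frame with the affinely mapped infrared frame; the small cross terms b1, a2 are ignored here as an approximation:

```python
def overlap_rectangle(W_r, H_r, W_s, H_s, params):
    """Upper-left / lower-right corners and centre of the rectangular
    overlap between the reference frame [0, W_r) x [0, H_r) and the
    scaled-and-shifted infrared frame (cross terms b1, a2 ignored)."""
    a1, b1, c1, a2, b2, c2 = params
    xL, yL = max(0.0, c1), max(0.0, c2)
    xR = min(float(W_r), a1 * W_s + c1)
    yR = min(float(H_r), b2 * H_s + c2)
    centre = ((xL + xR) / 2.0, (yL + yR) / 2.0)
    return (xL, yL), (xR, yR), centre
```

With the initial parameters from Table 2 and the resolutions from Table 1, the scaled infrared frame lies entirely inside the visible frame, so the overlap is the mapped infrared rectangle itself.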
Step S23: determining visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region; and constructing an image optimization function by using the mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
In this embodiment, wavelet transformation is respectively performed on the first sub-image corresponding to the first overlapping region and the second sub-image corresponding to the second overlapping region, so as to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors; the covariance matrix and the likelihood estimate of the wavelet coefficient vectors are calculated; respective sample vectors are constructed based on window samples in the middle of the coefficient blocks to determine respective variances; and the visual fidelity of the first overlapping region and the second overlapping region is calculated based on the likelihood estimate, the variance, and a visual noise variance. It can be understood that s-level wavelet transformation is respectively performed on the two sub-images corresponding to the overlapping regions of the visible light image and the matching fusion image, each wavelet sub-band is divided into N non-overlapping coefficient blocks, and the wavelet coefficient vector sets c_l = {c_l1, c_l2, ⋯, c_lN} and d_l = {d_l1, d_l2, ⋯, d_lN}, l = 1, 2, ⋯, s, are extracted; the covariance matrix Σ_l is computed. Let c_lj be a random vector in a Gaussian scale mixture model; its likelihood is estimated as

s_lj^2 = (c_lj^T Σ_l^(-1) c_lj) / M_l

wherein M_l is the dimension of the wavelet coefficient vector c_lj. Let v denote independent stationary zero-mean white Gaussian noise with variance σ_v^2. From the distortion model d = g·c + v, the B×B window samples in the middle of the jth coefficient block of the two sub-images are respectively recorded to form vectors C and D, and the fusion gain scalar g_lj and the variance σ_v,lj^2 are estimated as:

g_lj = ρ_CD · σ_D / σ_C, σ_v,lj^2 = (1 - ρ_CD^2) · σ_D^2

wherein ρ_CD represents the correlation coefficient of C and D.

The visual fidelity of the overlapping area of the visible light image and the matching fusion image is then calculated as:

VIF = [ Σ_l Σ_j Σ_k log2(1 + g_lj^2 · s_lj^2 · λ_k / (σ_v,lj^2 + σ_n^2)) ] / [ Σ_l Σ_j Σ_k log2(1 + s_lj^2 · λ_k / σ_n^2) ]

wherein λ_k are the eigenvalues of the covariance matrix Σ_l, and σ_n^2 represents the visual noise variance, which may take the value 0.1; its value has little influence on the result.
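A greatly simplified, single-scale, pixel-domain sketch of this fidelity measure follows: each B×B pixel block plays the role of a coefficient block, standing in for the s-level wavelet sub-bands and the Gaussian scale mixture of the full method, so treat it as an illustration of the ratio structure only:

```python
import numpy as np

def vif_single_scale(ref, dist, block=8, sigma_n2=0.1):
    """Simplified visual-fidelity ratio: per block, fit the distortion
    model D = g*C + V and accumulate log2 information terms.
    Returns exactly 1.0 for identical images."""
    ref = ref.astype(float)
    dist = dist.astype(float)
    num = den = 0.0
    H, W = ref.shape
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            C = ref[i:i + block, j:j + block].ravel()
            D = dist[i:i + block, j:j + block].ravel()
            sc2 = C.var()
            if sc2 < 1e-10:          # skip flat blocks
                continue
            cov = np.mean((C - C.mean()) * (D - D.mean()))
            g = cov / sc2                        # gain of D = g*C + V
            sv2 = max(D.var() - g * cov, 0.0)    # variance of the noise V
            num += np.log2(1.0 + g * g * sc2 / (sv2 + sigma_n2))
            den += np.log2(1.0 + sc2 / sigma_n2)
    return num / den if den > 0 else 1.0
```

The full method would replace the pixel blocks with wavelet coefficient blocks and the scalar sc2 with the eigenvalue decomposition of Σ_l.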
An image optimization function is constructed by using the mutual information of the first overlapping area and the second overlapping area together with the visual fidelity. It can be understood that, after the visual fidelity metric is obtained, the image optimization function is constructed from the mutual information and the visual fidelity, for example as their sum:

F_i(t) = MI(I_r^Ω, I'_si^Ω(t)) + VIF(I_r^Ω, I_fi^Ω(t))

wherein the visible light image is I_r, the infrared image is I_s, a group of matching parameters is taken as one population member, I'_si(t) is the corrected image to be matched obtained by affine transformation of the ith population member after the tth iteration, I_fi(t) is the corresponding fusion result image, F_i(t) (i = 1, ⋯, n) represents the similarity function value obtained for image matching after the ith population member is iterated for the tth time, the superscript Ω denotes restriction to the overlapping rectangular area, MI(I_r^Ω, I'_si^Ω(t)) is the mutual information of the overlapping rectangular areas of the visible light image I_r and the corrected image to be matched, and VIF(I_r^Ω, I_fi^Ω(t)) is the visual fidelity of the overlapping rectangular areas of the visible light image and the matching fused image.

When the normalized mutual information is adopted as the mutual information, its specific formula is expressed as:

NMI(A, B) = (H(A) + H(B)) / H(A, B)

wherein H(·) represents the entropy of an image and H(A, B) is the joint entropy of the images.
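The normalized mutual information can be estimated from a joint grey-level histogram, for example:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint
    grey-level histogram; equals 2.0 for identical images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

Because H(A, A) = H(A), identical images score exactly 2.0, while independent images score close to 1.0; the bin count is a tuning choice.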
Step S24: Performing iterative computation on the image optimization function based on the matching parameters, and determining target matching parameters.

In this embodiment, the initial global optimal position is first determined: the initial particle is taken as the matching parameters, namely the initial parameters are taken as the target matching parameters, and the corresponding coordinate position information is determined from the initial particle by the calculation formulas for the upper-left and lower-right corner coordinates of the rectangular overlapping area. The grey-level statistical similarity adopts the normalized mutual information to calculate the similarity function value of the matched images, and the position of the particle with the largest function value is taken as the initial global optimal position G(0).

The position of each particle is then updated and the current optimal position of the ith particle is updated: the optimization function value of the matched images is calculated and compared with the function value at the particle's current optimal position; if the current optimization function value is larger, the current position of the particle is taken as its current optimal position, otherwise the previous optimal position is retained, thereby determining the current optimal position of the ith particle. Specifically, for each particle, the current position X_i(t) is used as the matching parameters to correct the infrared light image I_s and obtain the corrected infrared light image I'_si(t); the upper-left and lower-right corner coordinates of the overlapping area are determined according to the foregoing formulas, where Ω denotes the image area of the rectangular overlapping region; the visual fidelity of the visible light image and the fusion image is calculated by performing s-level wavelet transformation on the sub-images corresponding to the overlapping region of the two images and using the wavelet-transform parameters together with the likelihood estimate, variance and other parameters of the Gaussian mixture model; the optimization function value of the matched images is then determined from this visual fidelity and the normalized mutual information of the overlapping area of the visible light image and the corrected infrared light image. If F(X_i(t)) > F(P_i(t-1)), then P_i(t) = X_i(t); otherwise P_i(t) = P_i(t-1).

The optimal position of the population is then updated: the personal optimal position with the largest optimization function value is used as the matching parameters to correct the infrared image and obtain the corrected infrared image; the upper-left and lower-right corner coordinates of the overlapping region are determined; the optimization function value is calculated; and if it is larger than the function value at the current global optimal position, the global optimal position G(t) is updated to that position.

The processes of confirming the particle positions and the optimal position of the population are repeated until the number of iterations is greater than the preset number of iterations, or the difference between the similarity function values of two successive optima is smaller than the given error, and the global optimal position of the population is output as the target matching parameters.
Step S25: and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
Therefore, the initial matching parameters are obtained from the image imaging parameters and imaging characteristics, which solves the problem that rough matching is difficult to realize due to insufficient matching feature points between the infrared and visible light images; by utilizing the coaxial imaging characteristic of the unmanned aerial vehicle and estimating the overlapping area of the matched images through scaling and translation, the computational complexity of the overlap-area calculation can be reduced.
Referring to fig. 5, an embodiment of the present invention further discloses an image matching apparatus, which includes:
a parameter determining module 11, configured to determine initial parameters based on image imaging parameters of the visible light image and the infrared light image, and construct matching parameters using the initial parameters;
a function constructing module 12, configured to construct an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
a target parameter determining module 13, configured to perform iterative computation on the image optimization function based on the matching parameters, and determine target matching parameters;
and the image matching module 14 is configured to perform affine transformation on the infrared light image by using the target matching parameter, and output a target infrared light matching image.
Thus, the present application discloses an image matching method, comprising: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters; constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image. Therefore, the initial parameters are obtained through the image imaging parameters and the imaging characteristics, and the problem that the rough matching is difficult to realize due to insufficient matching characteristic points of the infrared and visible light images is solved; the similarity of the matching regions is measured by utilizing the mutual information of the image matching regions and combining the visual fidelity of the fused images, so that the subjective and objective evaluation of the matching results is more consistent, and the matching precision is higher.
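The normalized mutual information that enters the optimization function can be computed from a joint histogram of the two overlap regions. A minimal sketch, assuming a histogram-based entropy estimate with an illustrative bin count:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b), estimated from a joint histogram.
    Ranges from 1 (independent) up to 2 (identical up to a bijection)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability estimate
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals

    def entropy(p):
        p = p[p > 0]                           # drop empty cells (0 log 0 = 0)
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy)
```

The regional and rotation-invariant regional variants mentioned earlier would replace the plain joint histogram with region-based statistics, but the entropy ratio has the same form.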
Further, an electronic device is disclosed in the embodiments of the present application, and fig. 6 is a block diagram of an electronic device 20 according to an exemplary embodiment, which should not be construed as limiting the scope of the application.
Fig. 6 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. Wherein, the memory 22 is used for storing a computer program, and the computer program is loaded and executed by the processor 21 to implement the relevant steps in the image matching method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in this embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to acquire external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
The processor 21 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor: the main processor processes data in the awake state and is also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 21 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may further include an AI (Artificial Intelligence) processor for handling computation related to machine learning.
In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon may include an operating system 221, a computer program 222, and the like, and the storage manner may be transient or permanent.
The operating system 221 manages and controls the hardware devices and the computer program 222 on the electronic device 20, enabling the processor 21 to operate on and process the mass data 223 in the memory 22; it may be Windows Server, NetWare, Unix, Linux, or the like. In addition to the computer program that can be executed by the electronic device 20 to perform the image matching method disclosed in any of the foregoing embodiments, the computer program 222 may further include computer programs for performing other specific tasks. The data 223 may include data received by the electronic device 20 from external devices, or data collected by its own input/output interface 25.
Further, the present application also discloses a computer readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the image matching method disclosed in the foregoing. For the specific steps of the method, reference may be made to the corresponding contents disclosed in the foregoing embodiments, which are not described herein again.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same or similar parts between the embodiments are referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The image matching method, apparatus, device, and medium provided by the present invention have been described in detail above. Specific examples have been applied herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for those skilled in the art, the specific implementation and application scope may be varied according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.