Image sharpness matching method and system
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an image sharpness matching method and system.
Background
Image stitching techniques are currently widely used.
At present, the quality of image stitching depends greatly on the consistency of sharpness among the stitched images. When the sharpness of the images to be stitched differs greatly, the stitching effect is poor; this has become a technical problem to be solved urgently in the field of computer vision.
Disclosure of Invention
In view of the defects or improvement requirements of the prior art, the present invention provides an image sharpness matching method and an image sharpness matching system. Under the guidance of image sharpness evaluation algorithms, a classification model automatically selects a suitable blur algorithm, which is then applied to the sharper image; this solves the technical problem in the prior art that the stitching effect is poor when the sharpness of the images to be stitched differs greatly.
To achieve the above object, according to one aspect of the present invention, there is provided an image sharpness matching method, including the steps of:
(1) obtaining two face images, and calculating the two face images by using a plurality of sharpness evaluation algorithms to respectively obtain sharpness value sets S_P = (S_1,P, S_2,P, …, S_n,P) and S_Q = (S_1,Q, S_2,Q, …, S_n,Q) of the two face images, where P denotes one image, Q denotes the other image, n denotes the number of sharpness evaluation algorithms used, S_n,P denotes the sharpness value of face image P calculated using the nth sharpness evaluation algorithm, and S_n,Q denotes the sharpness value of face image Q calculated using the nth sharpness evaluation algorithm;
(2) acquiring the sharpness difference vector of the two face images according to the sharpness value sets of the two face images obtained in step (1), the sharpness difference vector being represented by the following equation:
SD_P,Q = S_P − S_Q = (S_1,P − S_1,Q, S_2,P − S_2,Q, …, S_n,P − S_n,Q)
where SD_P,Q denotes the sharpness difference vector of the two face images;
(3) inputting the sharpness difference vector of the two face images obtained in step (2) into a trained classification model to obtain a blur algorithm corresponding to the sharpness difference vector;
(4) processing the face image with the higher sharpness value of the two face images using the blur algorithm obtained in step (3).
Preferably, the sharpness evaluation algorithm includes a gradient-function-based evaluation algorithm, an image-frequency-domain-based evaluation algorithm, an entropy-function-based evaluation algorithm, and an evaluation algorithm based on the human visual system.
Preferably, the classification model is a decision tree model, a support vector machine model, a naive Bayes model, or a random forest model.
Preferably, the blur algorithm is a mean filtering algorithm, a Gaussian blur algorithm, a median filtering algorithm, a bilateral filtering algorithm, or a Gaussian low-pass filtering algorithm.
Preferably, the classification model is trained by first constructing a training set and then training the classification model with the constructed training set.
Preferably, constructing the training set of the classification model comprises:
a) acquiring a face image from a face data set;
b) processing the acquired face image with different blur algorithms to obtain m blurred images, where m denotes the number of blur algorithms used;
c) calculating the sharpness difference vector between the face image and each blurred image, each sharpness difference vector together with the sequence number of the corresponding blur algorithm forming one training sample;
d) repeating steps a) to c) for the remaining face images in the face data set, thereby obtaining a training set of k × m training samples for the classification model, where k denotes the total number of face images in the face data set.
According to another aspect of the present invention, there is provided an image sharpness matching system including:
a first module for obtaining two face images and calculating the two face images by using a plurality of sharpness evaluation algorithms to respectively obtain sharpness value sets S_P = (S_1,P, S_2,P, …, S_n,P) and S_Q = (S_1,Q, S_2,Q, …, S_n,Q) of the two face images, where P denotes one image, Q denotes the other image, n denotes the number of sharpness evaluation algorithms used, S_n,P denotes the sharpness value of face image P calculated using the nth sharpness evaluation algorithm, and S_n,Q denotes the sharpness value of face image Q calculated using the nth sharpness evaluation algorithm;
a second module for acquiring the sharpness difference vector of the two face images according to the sharpness value sets obtained by the first module, the sharpness difference vector being represented by the following equation:
SD_P,Q = S_P − S_Q = (S_1,P − S_1,Q, S_2,P − S_2,Q, …, S_n,P − S_n,Q)
where SD_P,Q denotes the sharpness difference vector of the two face images;
a third module for inputting the sharpness difference vector of the two face images obtained by the second module into a trained classification model to obtain a blur algorithm corresponding to the sharpness difference vector; and
a fourth module for processing the face image with the higher sharpness value of the two face images using the blur algorithm obtained by the third module.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) By adopting image sharpness evaluation algorithms and a classification model to automatically select a suitable blur algorithm and process the sharper of the images to be stitched, the invention solves the technical problem of poor stitching effect caused by a large sharpness difference between the images to be stitched in the existing image stitching process.
(2) Because the invention adopts multiple sharpness evaluation algorithms and multiple blur algorithms, a precise sharpness matching effect can be achieved.
(3) The method has short processing time and high efficiency.
Drawings
Fig. 1(a) and (b) are examples of two face images acquired in step (1) of the image sharpness matching method of the present invention, respectively.
Fig. 2(a) and (b) are examples of two face images processed by the image sharpness matching method of the present invention, respectively.
Fig. 3 is a flowchart of an image sharpness matching method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 3, the image sharpness matching method of the present invention includes the steps of:
(1) obtaining two face images, and calculating the two face images by using a plurality of sharpness evaluation algorithms to respectively obtain sharpness value sets S_P = (S_1,P, S_2,P, …, S_n,P) and S_Q = (S_1,Q, S_2,Q, …, S_n,Q) of the two face images, where P denotes one image, Q denotes the other image, n denotes the number of sharpness evaluation algorithms used, S_n,P denotes the sharpness value of face image P calculated using the nth sharpness evaluation algorithm, and S_n,Q denotes the sharpness value of face image Q calculated using the nth sharpness evaluation algorithm.
Specifically, the sharpness evaluation algorithms used in this step include a gradient-function-based evaluation algorithm, an image-frequency-domain-based evaluation algorithm, an entropy-function-based evaluation algorithm, and an evaluation algorithm based on the human visual system. It should be appreciated that although only four sharpness evaluation algorithms are listed here for exemplary purposes, the invention is not limited thereto, and any number of sharpness evaluation algorithms can be used.
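As an illustrative sketch (not the exact algorithms of the embodiment), three of the four families just listed can be implemented with NumPy alone. The concrete formulas below — squared finite differences for the gradient family, the share of spectral energy away from the lowest frequencies for the frequency-domain family, and grey-level histogram entropy for the entropy family — are common representatives chosen for illustration, and the human-visual-system family is omitted:

```python
import numpy as np

def tenengrad(img):
    # gradient-function family: mean squared finite-difference gradient energy
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return float((gx ** 2).mean() + (gy ** 2).mean())

def freq_energy(img):
    # frequency-domain family: share of spectral energy outside a small
    # low-frequency block around the centred DC component
    f = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))
    h, w = f.shape
    r = max(1, min(h, w) // 8)
    low = f[h // 2 - r:h // 2 + r, w // 2 - r:w // 2 + r].sum()
    return float(1.0 - low / f.sum())

def entropy(img):
    # entropy-function family: Shannon entropy of the grey-level histogram
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def sharpness_vector(img):
    # the sharpness value set of step (1), here with n = 3 algorithms
    return np.array([tenengrad(img), freq_energy(img), entropy(img)])
```

Each of the gradient and frequency-domain scores is larger for a sharper image, so blurring an image lowers those components of its sharpness vector.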
Referring to fig. 1(a) and (b), which show an example of the two input images, it can be seen that the two images differ greatly in sharpness.
(2) Acquiring the sharpness difference vector of the two face images according to the sharpness value sets of the two face images obtained in step (1), the sharpness difference vector being represented by the following equation:
SD_P,Q = S_P − S_Q = (S_1,P − S_1,Q, S_2,P − S_2,Q, …, S_n,P − S_n,Q)
where SD_P,Q denotes the sharpness difference vector of the two face images.
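Concretely, the sharpness difference vector is an elementwise subtraction of the two sharpness value sets S_P and S_Q; a toy numerical example, with hypothetical sharpness values not taken from the embodiment:

```python
import numpy as np

# hypothetical sharpness value sets for images P and Q (n = 3 algorithms)
S_P = np.array([0.80, 0.65, 7.2])
S_Q = np.array([0.30, 0.40, 6.1])

# SD_P,Q = S_P - S_Q, computed elementwise
SD_PQ = S_P - S_Q
```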
(3) Inputting the sharpness difference vector of the two face images obtained in step (2) into a trained classification model to obtain a blur algorithm corresponding to the sharpness difference vector;
specifically, the classification model used in this step may be a decision tree model, a support vector machine model, a naive bayes model, a random forest model, or the like.
The blur algorithm used in this step may be a mean filtering algorithm, a Gaussian blur algorithm, a median filtering algorithm, a bilateral filtering algorithm, a Gaussian low-pass filtering algorithm, or the like. Different blurring effects can be obtained by varying the window size of each of the first four algorithms (for example, three window sizes), and by varying the cut-off frequency of the Gaussian low-pass filtering algorithm (for example, four cut-off frequencies).
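The resulting label space can be enumerated explicitly. In the sketch below the window sizes and cut-off frequencies are hypothetical placeholders — the text only fixes their counts (three window sizes, four cut-off frequencies), giving 4 × 3 + 4 = 16 configurations:

```python
# four spatial filters, each at three window sizes, plus a Gaussian
# low-pass filter at four cut-off frequencies: 4 * 3 + 4 = 16 classes
SPATIAL_FILTERS = ["mean", "gaussian", "median", "bilateral"]
WINDOW_SIZES = [3, 5, 7]           # hypothetical window sizes
CUTOFF_FREQS = [10, 20, 40, 80]    # hypothetical cut-off frequencies

BLUR_CONFIGS = (
    [(name, ("ksize", k)) for name in SPATIAL_FILTERS for k in WINDOW_SIZES]
    + [("gaussian_lowpass", ("cutoff", d0)) for d0 in CUTOFF_FREQS]
)
```

The classification model of step (3) then effectively predicts an index into this list of configurations.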
The classification model used in this step is trained as follows:
First, a training set for the classification model is constructed, which specifically comprises the following steps:
a) acquiring a face image from a face data set (the hull data set is used in the present embodiment);
b) processing the acquired face image with different blur algorithms (specifically, the blur algorithms listed above) to obtain m blurred images, where m denotes the number of blur algorithms used (for the above case there are 4 × 3 + 4 = 16 blur configurations, that is, m = 16);
c) calculating the sharpness difference vector between the face image and each blurred image, each sharpness difference vector together with the sequence number of the corresponding blur algorithm forming one training sample;
Specifically, the calculation of the sharpness difference vector in this step is identical to that in step (2) and is not repeated here.
d) repeating steps a) to c) for the remaining face images in the face data set, thereby obtaining a training set of k × m training samples for the classification model, where k denotes the total number of face images in the face data set.
Then, the classification model is trained using the created training set.
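Steps a) to d) above can be sketched as follows. For self-containedness the sketch substitutes a single two-component sharpness vector and a crude shift-and-average mean filter for the full sets of evaluation and blur algorithms, so the helper names and parameters are illustrative only:

```python
import numpy as np

def sharpness_vec(img):
    # stand-in sharpness vector (gradient energy, histogram entropy);
    # the full method would use the n evaluation algorithms of step (1)
    g = np.diff(img, axis=0) ** 2
    hist, _ = np.histogram(img, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    return np.array([g.mean(), -(p * np.log2(p)).sum()])

def mean_blur(img, ksize):
    # crude ksize x ksize mean filter built from shifted copies
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-(ksize // 2), ksize // 2 + 1):
        for dx in range(-(ksize // 2), ksize // 2 + 1):
            out += np.roll(np.roll(img, dy, 0), dx, 1)
    return out / (ksize * ksize)

def build_training_set(faces, blur_params=(3, 5, 7)):
    # steps a)-d): one sample per (face image, blur algorithm) pair,
    # giving k * m samples labelled by the blur algorithm's index
    X, y = [], []
    for img in faces:                               # k face images
        s_orig = sharpness_vec(img)
        for label, ksize in enumerate(blur_params):  # m blur algorithms
            X.append(s_orig - sharpness_vec(mean_blur(img, ksize)))
            y.append(label)
    return np.array(X), np.array(y)
```

The arrays X and y can then be passed to any standard classifier (decision tree, SVM, naive Bayes, random forest) for the training described above.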
(4) Processing the face image with the higher sharpness value of the two face images using the blur algorithm obtained in step (3); the result obtained is shown in fig. 2(a).
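A minimal sketch of this selection step, assuming the overall sharpness of an image is summarised by the sum of its sharpness vector (the text does not specify how the vector is reduced to a single comparable value):

```python
import numpy as np

def match_sharpness(img_p, img_q, blur_fn, s_p, s_q):
    # step (4): blur whichever image is sharper overall, using the blur
    # algorithm blur_fn selected by the classifier in step (3)
    if np.sum(s_p) >= np.sum(s_q):
        return blur_fn(img_p), img_q
    return img_p, blur_fn(img_q)
```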
Results of the experiment
The 16 blur algorithms and 4 sharpness evaluation algorithms mentioned above were implemented and tested in Python 3 on the basis of the OpenCV image processing library; the experimental statistics obtained are shown in Table 1 below.
TABLE 1 Sharpness matching experimental data statistics
The percent reduction in sharpness difference in the table above is calculated as: reduction (%) = (D_before − D_after) / D_before × 100, where D_before and D_after denote the sharpness difference between the two images before and after processing, respectively.
The closer this percentage is to 100%, the better the sharpness matching effect and the more natural the stitched image. As can be seen from the test data in the table, the sharpness matching method reduces the sharpness difference between the two original images by about 90%, and the naturalness of the stitched image is greatly improved. In terms of time cost, each pair of images requires only 0.08 s on average.
After sharpness matching with the method of the invention, the results are shown in fig. 2(a) and (b), where fig. 2(b) is identical to fig. 1(b). It can be seen that the sharpness difference between the two images is greatly reduced, so that the stitching effect of the two images can be ensured.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.