CN108682024B - Image definition matching method and system - Google Patents


Info

Publication number
CN108682024B
CN108682024B (application CN201810360215.2A)
Authority
CN
China
Prior art keywords
definition
image
face images
algorithm
face
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810360215.2A
Other languages
Chinese (zh)
Other versions
CN108682024A (en
Inventor
李方敏
沈逸
马小林
杨志邦
栾悉道
王雷
Current Assignee
Hunan Cloud Archive Information Technology Co ltd
Original Assignee
Changsha University
Priority date
Filing date
Publication date
Application filed by Changsha University filed Critical Changsha University
Priority to CN201810360215.2A priority Critical patent/CN108682024B/en
Publication of CN108682024A publication Critical patent/CN108682024A/en
Application granted granted Critical
Publication of CN108682024B publication Critical patent/CN108682024B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image definition (sharpness) matching method comprising the following steps: obtain two face images; compute, for each, a definition value set using a plurality of definition evaluation algorithms; from the two definition value sets, obtain the definition difference vector of the two face images; input this difference vector into a trained classification model to obtain the corresponding fuzzy (blurring) algorithm; and process the face image with the higher definition value using that algorithm. Under the guidance of image definition evaluation algorithms, the invention uses a classification model to automatically select a suitable blurring algorithm and apply it to the sharper image, thereby solving the technical problem that, in existing image stitching, a large definition difference between the images to be stitched leads to poor stitching results.

Description

Image definition matching method and system
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an image definition matching method and system.
Background
Image stitching techniques are currently widely used.
At present, the quality of image stitching depends heavily on the consistency of definition (sharpness) among the images being stitched: when the definition of the images to be stitched differs greatly, the stitching result is poor. This has become a technical problem to be solved urgently in the field of computer vision.
Disclosure of Invention
In view of the above defects of, or improvement requirements for, the prior art, the invention provides an image definition matching method and system. Under the guidance of image definition evaluation algorithms, a classification model automatically selects a suitable fuzzy (blurring) algorithm and processes the image with the higher definition, with the aim of solving the technical problem that, in the existing image stitching process, a large definition difference between the images to be stitched leads to a poor stitching result.
To achieve the above object, according to one aspect of the present invention, there is provided an image sharpness matching method, including the steps of:
(1) obtaining two face images, and computing them with a plurality of definition evaluation algorithms to respectively obtain the definition value sets S_P = (S_{1,P}, S_{2,P}, …, S_{n,P}) and S_Q = (S_{1,Q}, S_{2,Q}, …, S_{n,Q}) of the two face images, where P denotes one image, Q denotes the other image, n denotes the number of definition evaluation algorithms used, and S_{i,P} and S_{i,Q} denote the definition values of face images P and Q computed with the i-th definition evaluation algorithm.
(2) Acquiring the definition difference vector of the two face images according to the definition value sets obtained in step (1), the difference vector being given by the following equation:
[equation available only as an image in the source]
where SD_{P,Q} denotes the definition difference vector of the two face images.
(3) Inputting the definition difference vector of the two face images obtained in step (2) into a trained classification model to obtain the fuzzy algorithm corresponding to that difference vector;
(4) Processing the face image with the higher definition value of the two according to the fuzzy algorithm obtained in step (3).
Preferably, the sharpness evaluation algorithm includes a gradient function-based evaluation algorithm, an image frequency domain-based evaluation algorithm, an entropy function-based evaluation algorithm, and an evaluation algorithm in combination with a human visual system.
Preferably, the classification model is a decision tree model, a support vector machine model, a naive Bayes model, or a random forest model.
Preferably, the fuzzy algorithm is a mean filtering algorithm, a Gaussian blur algorithm, a median filtering algorithm, a bilateral filtering algorithm, or a Gaussian low-pass filtering algorithm.
Preferably, the training process of the classification model is realized by making a training set of the classification model and then training the classification model by using the made training set.
Preferably, the making of the training set of classification models comprises:
a) acquiring a face image from a face data set;
b) processing the acquired face image with different fuzzy algorithms to obtain m blurred images, wherein m represents the number of fuzzy algorithms used;
c) calculating the definition difference vector between the face image and each blurred image, wherein each definition difference vector together with the sequence number of the corresponding fuzzy algorithm forms one training sample;
d) repeating steps a) to c) for the remaining face images in the face data set, thereby obtaining a training set of k × m training samples for the classification model, wherein k represents the total number of face images in the face data set.
According to another aspect of the present invention, there is provided an image sharpness matching system including:
a first module for obtaining two face images and computing them with a plurality of definition evaluation algorithms to respectively obtain the definition value sets S_P = (S_{1,P}, S_{2,P}, …, S_{n,P}) and S_Q = (S_{1,Q}, S_{2,Q}, …, S_{n,Q}) of the two face images, where P denotes one image, Q denotes the other image, n denotes the number of definition evaluation algorithms used, and S_{i,P} and S_{i,Q} denote the definition values of face images P and Q computed with the i-th definition evaluation algorithm.
a second module for acquiring the definition difference vector of the two face images according to the definition value sets obtained by the first module, the difference vector being given by the following equation:
[equation available only as an image in the source]
where SD_{P,Q} denotes the definition difference vector of the two face images.
a third module for inputting the definition difference vector of the two face images obtained by the second module into a trained classification model to obtain the fuzzy algorithm corresponding to that difference vector;
and a fourth module for processing the face image with the higher definition value of the two according to the fuzzy algorithm obtained by the third module.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) according to the invention, a proper fuzzy algorithm is automatically selected by adopting an image definition evaluation algorithm and a classification model, and an image with high definition in images to be spliced is processed, so that the technical problem of poor splicing effect caused by large difference in definition of the images to be spliced in the existing image splicing process can be solved.
(2) Because the invention adopts multiple definition evaluation algorithms and multiple fuzzy algorithms, a precise definition matching effect can be achieved.
(3) The method has short processing time and high efficiency.
Drawings
Fig. 1(a) and (b) are examples of two face images acquired in step (1) of the image sharpness matching method of the present invention, respectively.
Fig. 2(a) and (b) are examples of two face images processed by the image sharpness matching method of the present invention, respectively.
Fig. 3 is a flowchart of an image sharpness matching method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 3, the image sharpness matching method of the present invention includes the steps of:
(1) Obtain two face images and compute them with a plurality of definition (sharpness) evaluation algorithms to respectively obtain the definition value sets S_P = (S_{1,P}, S_{2,P}, …, S_{n,P}) and S_Q = (S_{1,Q}, S_{2,Q}, …, S_{n,Q}) of the two face images, where P denotes one image, Q denotes the other image, n denotes the number of definition evaluation algorithms used, and S_{i,P} and S_{i,Q} denote the definition values of face images P and Q computed with the i-th definition evaluation algorithm.
Specifically, the definition evaluation algorithms used in this step include an evaluation algorithm based on a gradient function, one based on the image frequency domain, one based on an entropy function, and one based on the human visual system (HVS). It should be appreciated that, although only four definition evaluation algorithms are listed here for exemplary purposes, the invention is not so limited, and any number of definition evaluation algorithms can be used.
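The first three algorithm families can be illustrated with simple stand-ins. The NumPy functions below are illustrative approximations, not the patent's actual algorithms: a mean-squared-gradient measure, a high-frequency spectral-energy share, and grey-level histogram entropy (the HVS-based measure is omitted). All function names and band/bin choices are assumptions.

```python
import numpy as np

def grad_sharpness(img):
    # Gradient-function-based measure: mean squared gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def freq_sharpness(img):
    # Frequency-domain measure: share of spectral energy outside a low band.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))
    h, w = img.shape
    ch, cw, bh, bw = h // 2, w // 2, max(h // 8, 1), max(w // 8, 1)
    low = mag[ch - bh:ch + bh, cw - bw:cw + bw].sum()
    return float(1.0 - low / mag.sum())

def entropy_sharpness(img):
    # Entropy-function-based measure: Shannon entropy of the grey histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def sharpness_set(img):
    # Definition value set S_P = (S_1,P, ..., S_n,P) for one image (n = 3 here).
    return np.array([grad_sharpness(img), freq_sharpness(img), entropy_sharpness(img)])
```

A blurred image should score lower than its sharp original on the gradient and frequency measures, which is what the definition value sets are compared on in step (2).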
Referring to figs. 1(a) and (b), which show an example pair of input images, it can be seen that the two images differ greatly in definition.
(2) Acquire the definition difference vector of the two face images from the definition value sets obtained in step (1); the difference vector is given by the following equation:
[equation available only as an image in the source]
where SD_{P,Q} denotes the definition difference vector of the two face images.
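The patent's difference-vector equation is embedded only as an image, so its exact form is not recoverable here. As an assumption, a normalized element-wise difference over the two definition value sets could serve; the function below is a hypothetical sketch, not the patented formula.

```python
import numpy as np

def sharpness_difference(s_p, s_q, eps=1e-12):
    # SD_P,Q: one entry per definition evaluation algorithm. The exact
    # patented equation is only available as an embedded image; this
    # normalized element-wise difference is an assumption.
    s_p = np.asarray(s_p, dtype=float)
    s_q = np.asarray(s_q, dtype=float)
    return (s_p - s_q) / (np.maximum(np.abs(s_p), np.abs(s_q)) + eps)
```

Each entry is positive when P is sharper than Q under the corresponding algorithm, negative otherwise, and scale-free, which is a convenient property for a classifier input.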
(3) Inputting the definition difference vectors of the two human face images obtained in the step (2) into a trained classification model to obtain a fuzzy algorithm corresponding to the definition difference vectors;
specifically, the classification model used in this step may be a decision tree model, a support vector machine model, a naive bayes model, a random forest model, or the like.
The fuzzy (blurring) algorithm used in this step may be a mean filtering algorithm, a Gaussian blur algorithm, a median filtering algorithm, a bilateral filtering algorithm, or a Gaussian low-pass filtering algorithm. Different blurring effects can be obtained by varying the window size of the first four algorithms (for example, three window sizes each) and by varying the cut-off frequency of the Gaussian low-pass filtering algorithm (for example, four cut-off frequencies).
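Such a bank of blurring variants (four spatial filters at three window sizes plus four low-pass cut-offs, 16 in all) can be sketched as follows. The minimal NumPy version below implements only the mean and Gaussian families; the names, window sizes, and sigma choice are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def mean_filter(img, k):
    # Mean (box) filter with a k x k window, edge-padded to keep the shape.
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gaussian_blur(img, k, sigma):
    # Separable Gaussian blur with a k-tap kernel (rows, then columns).
    ax = np.arange(k) - k // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()
    pad = k // 2

    def conv_rows(a):
        p = np.pad(a, ((0, 0), (pad, pad)), mode='edge')
        return np.stack([np.convolve(row, g, mode='valid') for row in p])

    return conv_rows(conv_rows(img.astype(float)).T).T

# A bank in the spirit of the patent's 16 variants; only two filter
# families are sketched here, with assumed window sizes.
BLUR_BANK = (
    [('mean', k, lambda im, k=k: mean_filter(im, k)) for k in (3, 5, 7)] +
    [('gauss', k, lambda im, k=k: gaussian_blur(im, k, k / 3.0)) for k in (3, 5, 7)]
)
```

Each bank entry keeps the image shape, so the classifier's chosen variant can be applied to the sharper image without further resizing.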
The classification model used in this step is trained by the following procedures:
firstly, a training set of a classification model is manufactured, and the method specifically comprises the following steps:
a) acquiring a face image from a face data set (the hull data set is used in the present embodiment);
b) processing the acquired face image with different fuzzy algorithms (specifically, those listed above) to obtain m blurred images, where m denotes the number of fuzzy algorithms used (for the configuration above there are 4 × 3 + 4 = 16 fuzzy algorithms, i.e., m = 16);
c) calculating definition difference vectors between the face image and each fuzzy image, wherein each definition difference vector and the sequence number of the corresponding fuzzy algorithm form a training sample;
specifically, the process of calculating the sharpness difference vector in this step is completely the same as that in step (2), and is not described herein again.
d) Repeating steps a) to c) for the remaining face images in the face data set, thereby obtaining a training set of k × m training samples for the classification model, where k denotes the total number of face images in the face data set.
Then, the classification model is trained using the created training set.
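The training-set construction and training above can be sketched end to end. This is a hypothetical, reduced-scale version: two stand-in definition measures instead of four, three box blurs instead of sixteen fuzzy algorithms, a plain element-wise difference in place of the image-only difference formula, and a nearest-centroid classifier standing in for the decision tree/SVM/naive Bayes/random forest named in the text.

```python
import numpy as np

def sharpness_set(img):
    # Stand-in definition value set (gradient energy, grey entropy).
    gy, gx = np.gradient(img.astype(float))
    hist, _ = np.histogram(img, bins=64, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return np.array([np.mean(gx ** 2 + gy ** 2), -(p * np.log2(p)).sum()])

def box_blur(img, k):
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# m = 3 blurring algorithms here (the patent uses m = 16).
BLURS = [lambda im, k=k: box_blur(im, k) for k in (3, 5, 9)]

def make_training_set(faces):
    # Steps a)-c): one sample per (face image, blurring algorithm) pair,
    # giving k * m samples labelled with the algorithm's sequence number.
    X, y = [], []
    for face in faces:
        s_orig = sharpness_set(face)
        for label, blur in enumerate(BLURS):
            X.append(s_orig - sharpness_set(blur(face)))
            y.append(label)
    return np.array(X), np.array(y)

def train_nearest_centroid(X, y):
    # Minimal stand-in for the patent's classifier.
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(model, x):
    classes, centroids = model
    return int(classes[np.argmin(np.linalg.norm(centroids - x, axis=1))])
```

At prediction time, the difference vector of a new image pair is fed to `predict`, and the returned sequence number selects which blur to apply to the sharper image.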
(4) Process the face image with the higher definition value of the two according to the fuzzy algorithm obtained in step (3); the result is shown in fig. 2(a).
Results of the experiment
The 16 fuzzy algorithms and 4 definition evaluation algorithms mentioned above were implemented and tested in Python 3 on top of the OpenCV image processing library; the resulting experimental statistics are shown in Table 1 below.
TABLE 1 clarity matching experimental data statistics table
[Table 1 is available only as an image in the source.]
The percentage reduction in definition difference in the table above is computed by a formula that appears in the source only as an image; by context, it expresses the relative reduction of the definition difference between the two images after matching.
the closer the percentage is to 100%, the better the effect of the definition matching method is, and the more natural the spliced image is. As can be seen from the test data in the table, the definition difference between the original two images can be reduced by about 90% by using the definition matching method, and the natural degree of the images after splicing is greatly improved. In terms of time loss, each pair of images requires only 0.08s on average.
After definition matching with the method, the results are shown in figs. 2(a) and (b), where fig. 2(b) is identical to fig. 1(b). It can be seen that the definition difference between the two images is greatly reduced, which ensures a good stitching result for the pair.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. An image definition matching method is characterized by comprising the following steps:
(1) obtaining two face images, and computing them with a plurality of definition evaluation algorithms to respectively obtain the definition value sets S_P = (S_{1,P}, S_{2,P}, ..., S_{n,P}) and S_Q = (S_{1,Q}, S_{2,Q}, ..., S_{n,Q}) of the two face images, where P denotes one image, Q denotes the other image, n denotes the number of definition evaluation algorithms used, and S_{i,P} and S_{i,Q} denote the definition values of face images P and Q computed with the i-th definition evaluation algorithm;
(2) acquiring the definition difference vector of the two face images according to the definition value sets obtained in step (1), the difference vector being given by the following equation:
[equation available only as an image in the source]
where SD_{P,Q} denotes the definition difference vector of the two face images;
(3) inputting the definition difference vector of the two face images obtained in step (2) into a trained classification model to obtain a fuzzy algorithm corresponding to the difference vector;
(4) processing the face image with the higher definition value of the two according to the fuzzy algorithm obtained in step (3).
2. An image sharpness matching method according to claim 1, wherein the sharpness evaluation algorithm includes a gradient function-based evaluation algorithm, an image frequency domain-based evaluation algorithm, an entropy function-based evaluation algorithm, and an evaluation algorithm in combination with a human visual system.
3. A method of image sharpness matching according to claim 1 or 2, wherein the classification model is a decision tree model, a support vector machine model, a naive Bayes model, or a random forest model.
4. A method of image sharpness matching according to any one of claims 1 to 3, characterized in that the blurring algorithm is a mean filtering algorithm, a Gaussian blur algorithm, a median filtering algorithm, a bilateral filtering algorithm, or a Gaussian low-pass filtering algorithm.
5. A method for image sharpness matching according to any one of claims 1 to 4, wherein the training process of the classification model is performed by making a training set of the classification model and then training the classification model using the made training set.
6. A method for image sharpness matching according to any one of claims 1 to 5, wherein making a training set of classification models includes:
a) acquiring a face image from a face data set;
b) processing the acquired face image with different fuzzy algorithms to obtain m blurred images, wherein m represents the number of fuzzy algorithms used;
c) calculating the definition difference vector between the face image and each blurred image, wherein each definition difference vector together with the sequence number of the corresponding fuzzy algorithm forms one training sample;
d) repeating steps a) to c) for the remaining face images in the face data set, thereby obtaining a training set of k × m training samples for the classification model, wherein k represents the total number of face images in the face data set.
7. An image sharpness matching system, comprising:
a first module for obtaining two face images and computing them with a plurality of definition evaluation algorithms to respectively obtain the definition value sets S_P = (S_{1,P}, S_{2,P}, ..., S_{n,P}) and S_Q = (S_{1,Q}, S_{2,Q}, ..., S_{n,Q}) of the two face images, where P denotes one image, Q denotes the other image, n denotes the number of definition evaluation algorithms used, and S_{i,P} and S_{i,Q} denote the definition values of face images P and Q computed with the i-th definition evaluation algorithm;
a second module for acquiring the definition difference vector of the two face images according to the definition value sets obtained by the first module, the difference vector being given by the following equation:
[equation available only as an image in the source]
where SD_{P,Q} denotes the definition difference vector of the two face images;
the third module is used for inputting the definition difference vectors of the two human face images obtained in the second module into a trained classification model so as to obtain a fuzzy algorithm corresponding to the definition difference vectors;
and the fourth module is used for processing the face image with higher definition value in the two face images according to the fuzzy algorithm obtained by the third module.
CN201810360215.2A 2018-04-20 2018-04-20 Image definition matching method and system Expired - Fee Related CN108682024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810360215.2A CN108682024B (en) 2018-04-20 2018-04-20 Image definition matching method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810360215.2A CN108682024B (en) 2018-04-20 2018-04-20 Image definition matching method and system

Publications (2)

Publication Number Publication Date
CN108682024A CN108682024A (en) 2018-10-19
CN108682024B true CN108682024B (en) 2021-05-18

Family

ID=63801524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810360215.2A Expired - Fee Related CN108682024B (en) 2018-04-20 2018-04-20 Image definition matching method and system

Country Status (1)

Country Link
CN (1) CN108682024B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179259B (en) * 2019-12-31 2023-09-26 北京灵犀微光科技有限公司 Optical definition testing method and device
CN111414842B (en) * 2020-03-17 2021-04-13 腾讯科技(深圳)有限公司 Video comparison method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101093580A (en) * 2007-08-29 2007-12-26 华中科技大学 Image fusion method based on non-subsampled contourlet transform
CN101872424A (en) * 2010-07-01 2010-10-27 重庆大学 Facial expression recognizing method based on Gabor transform optimal channel blur fusion
CN104486555A (en) * 2014-10-28 2015-04-01 北京智谷睿拓技术服务有限公司 Image acquisition control method and device
CN106780326A (en) * 2016-11-30 2017-05-31 长沙全度影像科技有限公司 A kind of fusion method for improving panoramic picture definition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741239B (en) * 2014-12-11 2018-11-30 合肥美亚光电技术股份有限公司 Generation method, device and the panorama machine for shooting tooth of tooth panoramic picture
US10026163B2 (en) * 2015-02-25 2018-07-17 Cale Fallgatter Hydrometeor identification methods and systems

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101093580A (en) * 2007-08-29 2007-12-26 华中科技大学 Image fusion method based on non-subsampled contourlet transform
CN101872424A (en) * 2010-07-01 2010-10-27 重庆大学 Facial expression recognizing method based on Gabor transform optimal channel blur fusion
CN104486555A (en) * 2014-10-28 2015-04-01 北京智谷睿拓技术服务有限公司 Image acquisition control method and device
CN106780326A (en) * 2016-11-30 2017-05-31 长沙全度影像科技有限公司 A kind of fusion method for improving panoramic picture definition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Scale-Based Fuzzy Connected Image Segmentation: Theory, Algorithms, and Validation;Punam K. Saha等;《Computer Vision and Image Understanding》;20000229;第77卷(第2期);全文 *
Research on wavelet transform image fusion algorithms based on regional sharpness; Ye Ming et al.; Journal of Electronic Measurement and Instrumentation; 2015-09-30; vol. 29, no. 9; full text *
Research on image inpainting algorithms based on pixel filtering, registration and interpolation; Pan Duo et al.; Computer Simulation; 2012-03-31; vol. 29, no. 3; full text *

Also Published As

Publication number Publication date
CN108682024A (en) 2018-10-19


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220228

Address after: 410000 401, 4th floor, complex building, 1318 Kaiyuan East Road, Xingsha industrial base, Changsha Economic and Technological Development Zone, Changsha City, Hunan Province

Patentee after: HUNAN CLOUD ARCHIVE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 410003 science and Technology Office of Changsha University, 98 Hongshan Road, Kaifu District, Changsha City, Hunan Province

Patentee before: CHANGSHA University

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210518