WO2018035794A1 - System and method for measuring image resolution value - Google Patents

System and method for measuring image resolution value

Info

Publication number
WO2018035794A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
value
sharpness
feature extractor
predictor
Prior art date
Application number
PCT/CN2016/096658
Other languages
French (fr)
Chinese (zh)
Inventor
余绍德
江帆
陈璐明
姬治华
伍世宾
谢耀钦
Original Assignee
中国科学院深圳先进技术研究院
Priority date
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Publication of WO2018035794A1 publication Critical patent/WO2018035794A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Definitions

  • the invention belongs to the technical field of image processing, and in particular to a system for measuring image sharpness values.
  • an object of the present invention is to provide a system and method for measuring image sharpness values that uses few parameters, has a small computational load, is easy to train, and is both accurate and fast.
  • the present invention provides a method of measuring image sharpness values, the method comprising:
  • the image feature extractor is used to perform convolution processing on the image to extract a first feature image from a local region of the image
  • a predictor is used to calculate a sharpness value from the single-column feature vector to obtain the sharpness value of the image.
  • X i is the received image to be tested, l denotes the layer index, k is the convolution kernel, M j is the receptive field of the input layer, B is the bias term, and f is the activation function.
  • the downsampling process is used to reduce the spatial resolution of the model and eliminate the offset and image distortion.
  • the calculation formula of the downsampling process is: W j = f(β j p(y j )), where p is the sampling function and β is the weight coefficient.
  • X i , Y are sample observations
  • σ is the smoothing factor
  • n is the number of samples.
  • the method further includes:
  • the image feature extractor is trained to enable the image feature extractor to perform convolution and downsampling processing on the image to obtain a single column feature vector
  • the predictor is trained to enable the predictor to perform an arithmetic process on the single-column feature vector to obtain a sharpness value of the image.
  • the method of verifying the sharpness value measured by the image feature extractor in conjunction with the predictor includes:
  • inputting a reference image into the trained image feature extractor; the image feature extractor extracts features from the reference image and obtains a single-column reference feature vector;
  • the trained predictor takes the single-column reference feature vector as input and performs an operation to obtain a sharpness reference value of the reference image
  • n is the number of samples
  • x̄ and ȳ are the means of {x 1 , x 2 , …, x n } and {y 1 , y 2 , …, y n }, σ x and σ y are their standard deviations, and r xi and r yi are the rank positions of x i and y i in their respective data sequences.
  • the method for training the image feature extractor includes:
  • m is the number of samples per batch
  • y is the sample label
  • nl denotes the output layer
  • a is the output value
  • f is the activation function
  • z is the weighted input to the neurons of the output layer.
  • δ (l) = ((W (l) ) T δ (l+1) ) · f′(z (l) ),
  • W (l) is the weight of the l-th layer.
  • b (l) is the bias term of the l-th layer
  • α is the learning rate
  • the method of training the predictor includes:
  • the regression value of the dependent variable to the independent variable is obtained.
  • the present invention also provides a system for measuring image sharpness values, which uses the method described above to measure the sharpness value of an image, the system comprising:
  • the image feature extractor is configured to: receive an image; extract a first feature image from the local image; and perform blur processing on the extracted first feature image to obtain a second feature image with a lower resolution;
  • the processed second feature image is transformed into a single column feature vector;
  • the predictor is configured to perform a scoring calculation on the single-column feature vector to obtain a sharpness value of the image.
  • system further includes a trainer configured to train the image feature extractor to enable the image feature extractor to perform convolution and downsampling processing on the image to obtain a single column feature vector.
  • the trainer is configured to:
  • the training ends if the error converges; if not, the residual is calculated; the residual is propagated backwards layer by layer to obtain the residual value of each layer; and the parameter values of the weights and bias terms are updated.
  • the trainer is further configured to train the predictor to enable the predictor to perform an operation process on the single-column feature vector to obtain a sharpness value of the image.
  • the trainer is configured to:
  • the regression value of the dependent variable to the independent variable is obtained.
  • system further includes a verifier configured to verify a sharpness value measured by the image feature extractor in conjunction with the predictor.
  • the verifier is configured to:
  • Inputting the reference image into the trained image feature extractor, where the image feature extractor performs feature extraction on the reference image and obtains a single-column reference feature vector; the trained predictor takes the single-column reference feature vector as input and performs an operation to obtain a sharpness reference value of the reference image;
  • the system and method for measuring image sharpness values provided by the invention use the image feature extractor to perform fast convolution and downsampling on the image to be tested to obtain a single-column feature vector, and use the predictor to process the single-column feature vector to obtain the sharpness value of the image.
  • after training, the system for measuring image sharpness values of the invention can measure the sharpness value of an image quickly; it is easy to train, uses few parameters, measures sharpness values accurately and in real time, and can be widely used in optical imaging systems and medical imaging systems.
  • FIG. 1 is a block diagram of a preferred embodiment of a system for measuring image sharpness values in accordance with an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a system for measuring image sharpness values in an operating state according to an embodiment of the present invention
  • FIG. 3 is a flow chart of a method for measuring image sharpness values according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for training an image feature extractor according to an embodiment of the present invention
  • FIG. 5 is a flowchart of a method for training a predictor according to an embodiment of the present invention
  • FIG. 6 is a flow chart of a method for verifying whether a measured sharpness value is accurate according to an embodiment of the present invention.
  • current methods of measuring image sharpness values fall into three categories: full-reference (the perceptually optimal image quality is available for comparison), reduced-reference (partial information about the perceptually optimal image is available for comparison), and no-reference (no direct or indirect information about the image being evaluated is available).
  • Embodiments of the present invention are based on the principle of no reference image sharpness measurement to achieve measurement of image sharpness values.
  • the present invention also implements measurement of image sharpness values based on the principle of deep learning.
  • Deep learning applies layer-by-layer feature transformations to the sample image, mapping the sample's representation from the original space into a new feature space, and automatically learns hierarchical features. This reduces manual parameter and feature selection, makes the result better suited to classification or feature visualization, avoids the limitations of hand-crafted features, and greatly improves the accuracy and efficiency of image sharpness measurement.
  • 1 is a block diagram of a preferred embodiment of a system for measuring image sharpness values in accordance with an embodiment of the present invention.
  • 2 is a schematic diagram of a system for measuring image sharpness values in an operating state according to an embodiment of the present invention.
  • a system for measuring image sharpness values includes an image feature extractor 10, a predictor 20, a trainer 30, and a verifier 40.
  • the image feature extractor 10 includes an image receiving module 11, a convolution module 12, a downsampling module 13, and a transform processing module 14.
  • the image receiving module 11 is configured to receive an image.
  • the convolution module 12 is configured to extract a first feature image from a local region of the image;
  • the downsampling module 13 is configured to perform downsampling on the first feature image obtained by convolution to obtain a second feature image of lower resolution;
  • the transform processing module 14 is configured to transform the second feature image processed by the downsampling module 13 into a single column feature vector.
  • the single column feature vector is specifically a 200-dimensional vector.
  • the invention is not limited thereto.
  • the image feature extractor 10 is preferably formed by a convolutional neural network configuration, and the convolutional neural network is simply referred to as CNN.
  • the image feature extractor 10 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
  • the predictor 20 includes an arithmetic processing module 21.
  • the operation processing module 21 is configured to score the single-column feature vector to obtain the sharpness value of the image.
  • the operation processing module 21 specifically includes a summation layer 211 and an output layer 212.
  • the predictor 20 further includes an input layer 22 and a mode layer 23; the input layer 22 is configured to pass the single-column feature vector obtained by the image feature extractor 10 to the mode layer 23; the input layer 22 includes a plurality of first neurons, and the number of first neurons is equal to the dimension of the single-column feature vector extracted from the image by the image feature extractor 10.
  • the mode layer 23 corresponds one-to-one with each sample datum in the single-column feature vector; the mode layer 23 also includes a plurality of second neurons, the number of second neurons being equal to the number of first neurons.
  • the summation layer 211 includes only two third neurons and is fully connected to the mode layer 23; an operation is performed between the summation layer 211 and the mode layer 23 (as shown in the formula below), and the output layer 212 divides the two outputs of the summation layer 211 to obtain the sharpness value of the final image.
  • the formula for the predictor 20 to calculate the sharpness value of the image is:
  • X i , Y are sample observations
  • σ is the smoothing factor
  • n is the number of samples.
  • the predictor 20 is preferably formed by a generalized regression neural network configuration.
  • the predictor 20 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
  • the generalized regression neural network is abbreviated as GRNN (general regression neural network).
  • a generalized regression neural network is a variant of artificial neural networks with strong nonlinear mapping and generalization ability, and is well suited to small-sample data.
  • the present invention is not limited thereto; for example, in another embodiment of the present invention, the predictor 20 may also be a support vector regression (SVR) model.
  • the trainer 30 is configured to train the image feature extractor 10 and the predictor 20 so that they can automatically learn hierarchical features, thereby implementing deep-learning-based no-reference measurement of image sharpness values and obtaining an accurate sharpness value of the image to be tested.
  • the trainer is configured to train the image feature extractor to enable the image feature extractor to perform convolution and downsampling processing on the image to obtain a single column feature vector.
  • the trainer is further configured to train the predictor to enable the predictor to perform an arithmetic process on the single column feature vector to obtain a sharpness value of the image.
  • the trainer 30 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
  • the verifier 40 is configured to verify the sharpness value obtained by the image feature extractor 10 in conjunction with the predictor 20. More specifically, the verifier 40 is configured to verify whether that sharpness value is accurate.
  • the verifier 40 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
  • the system for measuring image sharpness values of embodiments of the present invention may be integrated into a machine system to score the acquired images to determine imaging capabilities of the machine system, including but not limited to optical imaging systems, and may also include medical imaging systems.
  • the system for measuring image sharpness values can be applied to an image pickup apparatus.
  • the image capturing apparatus is capable of continuously taking a plurality of photographs; the system for measuring image sharpness values measures the sharpness value of each photograph in the burst, and the image capturing apparatus compares the sharpness values and outputs the photograph corresponding to the maximum sharpness value (that is, the highest-quality photograph). Applying the system for measuring image sharpness values to an image capturing apparatus therefore allows higher-quality photographs to be output for the user.
  • the system for measuring image sharpness values can also be applied to a device for judging image quality.
  • the system for measuring the image sharpness value measures the sharpness value of the image to be tested.
  • the device for judging image quality is configured to compare the sharpness value of the image to be tested with a reference threshold; when the sharpness value is greater than the reference threshold, the image is judged to be clear, and when the sharpness value is smaller than the reference threshold, the image is judged to be unclear.
  • the present invention also provides a method by which the system measures image sharpness values.
  • the method for measuring image sharpness values of the embodiment of the invention can be integrated into image enhancement algorithms, for example to optimize their parameters.
  • FIG. 3 is a flow chart of a method of measuring image sharpness values in accordance with an embodiment of the present invention. Specifically, referring to FIG. 1 , FIG. 2 and FIG. 3 , in combination with the above system for measuring image sharpness value, the method for measuring image sharpness value specifically includes:
  • the image feature extractor 10 is trained to enable the image feature extractor 10 to sequentially perform convolution and downsampling processing on the image, thereby obtaining a single column feature vector.
  • the image feature extractor 10 is trained by the trainer 30. It should be noted that in the system framework of this embodiment, there is one and only one feature layer.
  • FIG. 4 is a flow chart of a method of training an image feature extractor in accordance with an embodiment of the present invention. Specifically, referring to FIG. 4, the method for training the image feature extractor 10 specifically includes the following operations:
  • initialization is performed and a training set is input. Specifically, all convolution kernel weights and bias terms are initialized while the sample images of the training set are input to the image feature extractor 10.
  • the training set includes sample images with precise sharpness values.
  • the mean squared error of the image feature extractor 10 is calculated. Specifically, the output value O is computed for the sample image, and the output value O is compared with the sample label y to obtain the model error value E. Whether the image feature extractor 10 model has converged is judged from this error value; if it has converged, the training ends; if it has not converged, the residual of the output layer is calculated.
  • the specific formulas for calculating the mean squared error and the residual of the image feature extractor 10 model are:
  • m is the number of samples per batch
  • y is the sample label
  • nl denotes the output layer
  • a is the output value
  • f is the activation function
  • z is the weighted input to the neurons of the output layer.
  • the residuals are reversed layer by layer to obtain residual values for each layer.
  • the residual value of each layer indicates how much each node contributes to the residual of the final output value.
  • the formula for calculating the residual value of each layer is:
  • δ (l) = ((W (l) ) T δ (l+1) ) · f′(z (l) ),
  • W (l) is the weight of the l-th layer.
  • the parameter values of the weights and bias terms are updated according to the residual of each layer.
  • the calculation formulas for updating the parameter values of the weights and bias terms are:
  • b (l) is the bias term of the l-th layer
  • α is the learning rate
  • the predictor 20 is trained to enable the predictor 20 to perform an arithmetic process on the single column feature vector to obtain a sharpness value of the image.
  • the predictor 20 is trained using the trainer 30.
  • FIG. 5 is a flow chart of a method for training a predictor according to an embodiment of the present invention. Specifically, referring to FIG. 5, the method for training the predictor 20 includes the following operations:
  • the training set is input to the trained image feature extractor 10 to obtain a single column feature vector of the training set.
  • the single column feature vector is a 200-dimensional vector.
  • a regression value of the dependent variable to the independent variable is calculated according to the single column feature vector and the label of the training set.
  • in operation 230, after the image feature extractor 10 and the predictor 20 have been trained, the method further includes verifying whether the sharpness value measured by the image feature extractor 10 in conjunction with the predictor 20 is accurate.
  • the verifier 40 is used to verify whether the sharpness value measured by the image feature extractor 10 in conjunction with the predictor 20 is accurate.
  • FIG. 6 is a flow chart of a method for verifying whether a measured sharpness value is accurate according to an embodiment of the present invention. Specifically, referring to FIG. 6, the method for verifying whether the sharpness value measured by the image feature extractor 10 in conjunction with the predictor 20 is accurate includes the following operations:
  • the reference image is input to the trained image feature extractor 10, which performs feature extraction on the reference image and obtains a single column reference feature vector.
  • the trained predictor 20 inputs the single column reference feature vector and performs an operation to obtain a sharpness reference value of the reference image.
  • a Pearson linear correlation coefficient (LCC) and a Spearman rank-order correlation coefficient (SROCC) are calculated from the computed sharpness reference value and the subjective sharpness value of the reference image; the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) are calculated as:
  • n is the number of samples
  • x̄ and ȳ are the means of {x 1 , x 2 , …, x n } and {y 1 , y 2 , …, y n }, σ x and σ y are their standard deviations, and r xi and r yi are the rank positions of x i and y i in their respective data sequences.
  • the Pearson linear correlation coefficient (LCC) is used to measure the accuracy of the prediction results.
  • the Spearman rank-order correlation coefficient (SROCC) is used to measure the monotonicity of the prediction results.
  • the first threshold value may be 0.8, 0.9, 0.91, 0.92.
  • the first threshold value is preferably 0.9.
  • the second threshold value may also be 0.8, 0.9, 0.91, 0.92.
  • the second threshold value is preferably 0.9.
  • the present invention is not limited thereto, and the first threshold value and the second threshold value may be appropriately changed according to actual conditions.
  • the above operations need to be performed only when the system for measuring image sharpness values is constructed; it is not necessary to repeat them each time an image sharpness value is measured.
  • once the system for measuring image sharpness values has been trained and verified, its training and learning process is complete. Therefore, when the sharpness value of an image to be tested is subsequently measured, no-reference sharpness measurement can be performed directly, which greatly improves measurement speed and accuracy.
  • the operation of measuring the image sharpness value is as follows:
  • the image is convolved using the image feature extractor 10 to extract a first feature image from a local region of the image.
  • the image to be tested is received by the image receiving module 11, and the image is convoluted by the convolution module 12.
  • the image receiving module 11 directly receives the image to be tested, and does not require excessive preprocessing of the image to be measured, thereby improving work efficiency.
  • the image is convoluted using a plurality of convolution kernels k, and the calculation formula is as follows:
  • X i is the received image to be tested
  • l denotes the layer index
  • k is the convolution kernel
  • M j is the receptive field of the input layer
  • B is the bias term
  • f is the activation function.
  • the number of convolution kernels is preferably eight, but the present invention is not limited thereto.
  • the local feature image obtained by the convolution process is downsampled by the image feature extractor 10 to obtain a second feature image.
  • the downsampling module 13 performs downsampling processing on the local first feature image obtained by the convolution process.
  • the downsampling process is used to reduce the spatial resolution of the model and eliminate offset and image distortion.
  • the calculation formula for downsampling is: W j = f(β j p(y j )).
  • the downsampled second feature image is transformed by the image feature extractor 10 to obtain a single column feature vector.
  • after multiple convolution and downsampling operations, the image feature extractor 10 obtains several feature images (feature vectors), and all feature vectors are transformed into a single-column feature vector.
  • the second feature image after the downsampling process is transformed by the transform processing module 14.
  • the single column feature vector is input using the predictor 20, and the single column feature vector is calculated to obtain a sharpness value of the image.
  • the calculation of the image sharpness value is performed on the single-column feature vector by the arithmetic processing module 21.
  • a single-column feature vector is input to the input layer 22; the input layer 22 passes the single-column feature vector obtained by the image feature extractor 10 to the mode layer 23; the mode layer 23 corresponds one-to-one with each sample datum in the single-column feature vector; the summation layer 211 is fully connected to the mode layer 23, and an operation is performed between the summation layer 211 and the mode layer 23 (as shown below in "the formula for the predictor 20 to calculate the sharpness value of the image"); the output layer 212 divides the two outputs of the summation layer 211 to give the final image sharpness value.
  • the formula for the predictor 20 to calculate the sharpness value of the image is:
  • X i , Y are sample observations
  • σ is the smoothing factor
  • n is the number of samples.
  • the parameters of the system and method for measuring image sharpness values are: (1) the number of image blocks to be tested is 200; (2) the size of each image block to be tested is 16×16; (3) the size of each convolution kernel is 7×7; (4) the number of convolution kernels is 8; (5) the number of iterations is 120; (6) the generalization parameter of the generalized regression neural network is 0.01; (7) the verification parameter is 1.8, which is used for selecting the better of the learned networks. A minimal sketch using these parameters follows this list.
  • the invention is not limited thereto.
  • the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) range from 0 to 1; the larger their values, the higher the accuracy and performance of the system for measuring image sharpness values.
  • when tested on the LIVE and CSIQ libraries, as well as the TID2008 and TID2013 libraries, the system proposed in this embodiment, in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a generalized regression neural network (GRNN), is able to effectively predict the sharpness of the image.
  • the system proposed in another embodiment of the present invention, in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a support vector regression (SVR) (CNN-SVR), can achieve the same technical effect.
  • the system based on the generalized regression neural network (CNN-GRNN) and the system based on support vector regression (CNN-SVR) are more accurate and reliable (by approximately 0.05 to 0.16) than the systems composed of the other three algorithms, and are more efficient.
  • the solutions in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is either a generalized regression neural network (GRNN) (CNN-GRNN) or a support vector regression (SVR) (CNN-SVR) are superior to the other solutions.
  • the CNN-GRNN and CNN-SVR systems are also more accurate and effective on the TID2008 and TID2013 libraries.
  • the method for measuring image sharpness values uses the feature extractor to perform fast convolution and downsampling on the image to be tested to obtain a single-column feature vector, and uses the predictor to perform a fast computation on the single-column feature vector to obtain the sharpness value of the image.
  • after training, the system for measuring image sharpness values of the invention can measure the sharpness value of an image quickly; it is easy to train, uses few parameters, measures sharpness values accurately and in real time, and can be widely used in optical imaging systems and medical imaging systems.
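The concrete parameter values listed above suggest a compact configuration object. The sketch below (Python/NumPy; all names are illustrative and not taken from the patent) collects those values and shows one plausible way to crop the 200 test blocks of 16×16 pixels from an image; the random-crop strategy is an assumption, since the patent only states the number and size of the blocks.

```python
import numpy as np

# Hyper-parameters as listed in the embodiment; the key names are illustrative only.
CONFIG = {
    "num_blocks": 200,       # number of image blocks to be tested
    "block_size": (16, 16),  # size of each image block
    "kernel_size": (7, 7),   # size of each convolution kernel
    "num_kernels": 8,        # number of convolution kernels
    "num_iterations": 120,   # number of training iterations
    "grnn_sigma": 0.01,      # generalization (smoothing) parameter of the GRNN
    "verify_param": 1.8,     # verification parameter used to select the learned network
}

def sample_blocks(image: np.ndarray, cfg: dict = CONFIG, seed: int = 0) -> np.ndarray:
    """Crop `num_blocks` random blocks of `block_size` from a grayscale image (assumed strategy)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    bh, bw = cfg["block_size"]
    rows = rng.integers(0, h - bh + 1, size=cfg["num_blocks"])
    cols = rng.integers(0, w - bw + 1, size=cfg["num_blocks"])
    return np.stack([image[r:r + bh, c:c + bw] for r, c in zip(rows, cols)])

blocks = sample_blocks(np.random.rand(256, 256))
print(blocks.shape)  # (200, 16, 16)
```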

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A system and method for measuring an image resolution value. The method for measuring an image resolution value comprises: convolving an image with an image feature extractor to extract a first feature image from a local region of the image (240); downsampling, with the image feature extractor, the first feature image obtained by convolution to obtain a second feature image of lower resolution (250); performing a column transformation on the downsampled second feature image with the image feature extractor to obtain a single-column feature vector (260); and computing a resolution value from the single-column feature vector with a predictor to obtain the resolution value of the image (270). The method measures the resolution value of an image quickly, is easy to train, uses few parameters, and measures the resolution value accurately and in real time. It is widely applicable to optical imaging systems and medical imaging systems.

Description

System and Method for Measuring Image Sharpness Values

Technical Field

The invention belongs to the technical field of image processing, and in particular relates to a system for measuring image sharpness values.

Background Art

With the accelerating pace of life and the proliferation of wireless networks and mobile phones, images have become an important means of acquiring and exchanging information. Camera shake, relative motion of the target, and the quality of the camera itself cause the sharpness of captured images to vary. Image sharpness is the most intuitive aspect of the user's experience and determines how much information the user can extract from an image and how the scene is interpreted; it is therefore a key factor in image quality. Current no-reference image sharpness measurement methods have the following problems: (1) the accuracy of the sharpness measurement is not high; (2) a large amount of experimental data is needed for parameter selection, and the resulting computational cost may make the methods inapplicable to real images; (3) experiments are performed only on the LIVE database, so scalability is poor; (4) the measurement methods are complicated and time-consuming.

Therefore, the prior art still needs to be improved and developed.
Summary of the Invention

In order to solve the above problems in the prior art, an object of the present invention is to provide a system and method for measuring image sharpness values that uses few parameters, has a small computational load, is easy to train, and is both accurate and fast.

The present invention provides a method of measuring image sharpness values, the method comprising:

performing convolution processing on an image with an image feature extractor to extract a first feature image from a local region of the image;

performing downsampling, with the image feature extractor, on the first feature image obtained by convolution to obtain a second feature image of lower resolution;

performing a column transformation, with the image feature extractor, on the downsampled second feature image to obtain a single-column feature vector;

calculating a sharpness value from the single-column feature vector with a predictor to obtain the sharpness value of the image.
Further, the image is convolved with a plurality of convolution kernels according to the following formula:
$$X_j^{l}=f\Big(\sum_{i\in M_j} X_i^{l-1} * k_{ij}^{l}+B_j^{l}\Big),$$
where X_i is the received image to be tested, l denotes the layer index, k is the convolution kernel, M_j is the receptive field of the input layer, B is the bias term, and f is the activation function.
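As a concrete illustration of this convolution step, the following minimal Python/NumPy sketch applies eight 7×7 kernels to a single 16×16 image block, matching the kernel count and sizes reported in the embodiment. The tanh activation, zero biases, random kernel values and single input channel are assumptions made only for the sketch.

```python
import numpy as np

def conv2d_valid(x: np.ndarray, k: np.ndarray, b: float = 0.0) -> np.ndarray:
    """'Valid' 2-D convolution of a single-channel image x with kernel k, plus bias b."""
    kh, kw = k.shape
    h, w = x.shape
    kf = k[::-1, ::-1]  # flip the kernel so this is a true convolution, not correlation
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kf) + b
    return out

def conv_layer(x: np.ndarray, kernels: np.ndarray, biases: np.ndarray, f=np.tanh) -> list:
    """First feature images: X_j = f(X * k_j + B_j) for every kernel k_j."""
    return [f(conv2d_valid(x, k, b)) for k, b in zip(kernels, biases)]

rng = np.random.default_rng(0)
patch = rng.random((16, 16))                    # one 16x16 image block to be tested
kernels = rng.standard_normal((8, 7, 7)) * 0.1  # eight 7x7 convolution kernels
feature_maps = conv_layer(patch, kernels, np.zeros(8))
print(len(feature_maps), feature_maps[0].shape)  # 8 feature maps of size (10, 10)
```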
Further, the downsampling process is used to reduce the spatial resolution of the model and to eliminate offset and image distortion. The downsampling is computed as:
$$W_j=f\big(\beta_j\,p(y_j)\big),$$
where p is the sampling function and β is the weight coefficient.
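A minimal sketch of the downsampling step W_j = f(β_j p(y_j)) follows; taking p as non-overlapping 2×2 mean pooling and f as tanh are assumptions of the sketch, since the patent does not fix the sampling function or window.

```python
import numpy as np

def downsample(feature_map: np.ndarray, beta: float = 1.0, pool: int = 2, f=np.tanh) -> np.ndarray:
    """W_j = f(beta_j * p(y_j)), with p taken here as non-overlapping mean pooling."""
    h, w = feature_map.shape
    h, w = h - h % pool, w - w % pool  # trim so the map divides evenly into pooling windows
    pooled = feature_map[:h, :w].reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    return f(beta * pooled)

fm = np.random.rand(10, 10)   # one feature map produced by the convolution step
print(downsample(fm).shape)   # (5, 5): spatial resolution is halved
```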
Further, the formula used by the predictor to calculate the sharpness value of the image is:
$$\hat{Y}(X)=\frac{\sum_{i=1}^{n} Y_i \exp\!\big(-\tfrac{(X-X_i)^{T}(X-X_i)}{2\sigma^{2}}\big)}{\sum_{i=1}^{n} \exp\!\big(-\tfrac{(X-X_i)^{T}(X-X_i)}{2\sigma^{2}}\big)},$$
where X_i and Y are the sample observations, σ is the smoothing factor, and n is the number of samples.

Further, before the image is convolved with the image feature extractor, the method further comprises:

training the image feature extractor so that it can perform convolution and downsampling on the image to obtain a single-column feature vector;

training the predictor so that it can process the single-column feature vector to obtain the sharpness value of the image.

Further, after the image feature extractor and the predictor have been trained, the method further comprises verifying the sharpness value measured by the image feature extractor in conjunction with the predictor. The verification comprises:

inputting a reference image into the trained image feature extractor, which extracts features from the reference image and obtains a single-column reference feature vector;

inputting the single-column reference feature vector into the trained predictor and computing a sharpness reference value of the reference image;

calculating the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) from the computed sharpness reference value and the subjective sharpness value of the reference image;

determining whether the LCC value is greater than or equal to a first threshold and the SROCC value is greater than or equal to a second threshold; if so, the image feature extractor and the predictor are considered trained; if not, training of the image feature extractor and the predictor continues.
Further, the formulas for the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) are:
$$LCC=\frac{\tfrac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sigma_x\,\sigma_y},\qquad
SROCC=1-\frac{6\sum_{i=1}^{n}\big(r_{x_i}-r_{y_i}\big)^{2}}{n\,(n^{2}-1)},$$
where n is the number of samples, x̄ and ȳ are the means of {x_1, x_2, …, x_n} and {y_1, y_2, …, y_n}, σ_x and σ_y are their standard deviations, and r_{x_i} and r_{y_i} are the rank positions of x_i and y_i in their respective data sequences.
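Both correlation measures can be computed directly from paired predicted and subjective scores, as in the short sketch below. The simplified SROCC formula assumes there are no tied ranks; the closing comment mirrors the threshold check used in the verification step.

```python
import numpy as np

def lcc(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson linear correlation coefficient between predicted and subjective scores."""
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

def srocc(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank-order correlation: 1 - 6*sum(d^2)/(n*(n^2-1)), assuming no tied ranks."""
    r_x = np.argsort(np.argsort(x)) + 1  # rank positions r_xi
    r_y = np.argsort(np.argsort(y)) + 1  # rank positions r_yi
    d = r_x - r_y
    n = len(x)
    return float(1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1)))

pred = np.array([0.81, 0.42, 0.95, 0.33, 0.67])  # sharpness reference values from the predictor
subj = np.array([0.78, 0.40, 0.99, 0.35, 0.60])  # subjective sharpness values
print(lcc(pred, subj), srocc(pred, subj))
# training is considered finished when both values reach their thresholds (e.g. 0.9)
```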
Further, the method of training the image feature extractor comprises:

initializing and inputting a training set;

calculating the mean squared error of the image feature extractor;

determining whether the mean squared error has converged; if it has converged, the training ends; if not, the residual is calculated;

propagating the residual backwards layer by layer to obtain the residual value of each layer;

updating the parameter values of the weights and bias terms.
Further, the formulas for calculating the mean squared error of the image feature extractor model and the residual of its output layer are:
$$E=\frac{1}{2m}\sum_{i=1}^{m}\big\|y^{(i)}-a^{(i)}\big\|^{2},\qquad
\delta^{(n_l)}=-(y-a)\cdot f'\big(z^{(n_l)}\big),$$
where m is the number of samples per batch, y is the sample label, n_l denotes the output layer, a is the output value, f is the activation function, and z is the weighted input to the neurons of the output layer.
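A small numerical sketch of the error and output-layer residual computations follows; the tanh activation and its derivative are used only as an example and are an assumption of the sketch.

```python
import numpy as np

def mean_squared_error(a: np.ndarray, y: np.ndarray) -> float:
    """E = (1/2m) * sum ||y - a||^2 over a batch of m samples."""
    m = a.shape[0]
    return float(np.sum((y - a) ** 2) / (2 * m))

def output_layer_residual(a: np.ndarray, y: np.ndarray, z: np.ndarray, f_prime) -> np.ndarray:
    """delta^(nl) = -(y - a) * f'(z^(nl)) for the output layer."""
    return -(y - a) * f_prime(z)

tanh_prime = lambda z: 1.0 - np.tanh(z) ** 2  # derivative of the example activation
z = np.array([[0.2], [0.5]])                  # weighted inputs of the output layer (2 samples)
a = np.tanh(z)                                # network output values
y = np.array([[0.3], [0.4]])                  # sample labels
print(mean_squared_error(a, y), output_layer_residual(a, y, z, tanh_prime))
```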
Further, the formula for calculating the residual value of each layer is:

$$\delta^{(l)}=\big((W^{(l)})^{T}\delta^{(l+1)}\big)\cdot f'\big(z^{(l)}\big),$$

where W^{(l)} is the weight of the l-th layer.
Further, the formulas for updating the parameter values of the weights and bias terms are:
$$W^{(l)}=W^{(l)}-\alpha\,\frac{\partial E}{\partial W^{(l)}},\qquad
b^{(l)}=b^{(l)}-\alpha\,\frac{\partial E}{\partial b^{(l)}},$$
where b^{(l)} is the bias term of the l-th layer and α is the learning rate.
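The layer-wise residual propagation and the parameter update can be sketched as below. The outer-product gradient is the standard form for a fully connected layer and is an assumption of the sketch; the patent itself only states that W^(l) and b^(l) are updated with learning rate α.

```python
import numpy as np

def backprop_residual(delta_next: np.ndarray, W: np.ndarray, z: np.ndarray, f_prime) -> np.ndarray:
    """delta^(l) = ((W^(l))^T delta^(l+1)) * f'(z^(l))."""
    return (W.T @ delta_next) * f_prime(z)

def update_parameters(W: np.ndarray, b: np.ndarray, dW: np.ndarray, db: np.ndarray, alpha: float):
    """Gradient-descent update of the weights and bias terms with learning rate alpha."""
    return W - alpha * dW, b - alpha * db

tanh_prime = lambda z: 1.0 - np.tanh(z) ** 2
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4)) * 0.1   # weights of layer l (maps 4 inputs to 3 outputs)
b = np.zeros(3)                         # bias terms of layer l
z = rng.standard_normal(4)              # weighted inputs of layer l
delta_next = rng.standard_normal(3)     # residual passed back from layer l+1
delta = backprop_residual(delta_next, W, z, tanh_prime)
dW = np.outer(delta_next, np.tanh(z))   # gradient of E w.r.t. W^(l) (assumed outer-product form)
W, b = update_parameters(W, b, dW, delta_next, alpha=0.01)
print(delta.shape, W.shape)             # (4,) (3, 4)
```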
Further, the method of training the predictor comprises:

inputting the training set into the trained image feature extractor to obtain the single-column feature vectors of the training set;

obtaining, from the single-column feature vectors and the labels of the training set, the regression values of the dependent variable with respect to the independent variables.

The present invention also provides a system for measuring image sharpness values, which uses the method described above to measure the sharpness value of an image. The system comprises:

an image feature extractor configured to: receive an image; extract a first feature image from a local region of the image; blur the extracted first feature image to obtain a second feature image of lower resolution; and transform the resulting second feature image into a single-column feature vector;

a predictor configured to score the single-column feature vector to obtain the sharpness value of the image.
Further, the system further comprises a trainer configured to train the image feature extractor so that it can perform convolution and downsampling on the image to obtain a single-column feature vector.

Further, the trainer is configured to: initialize and input the training set to the image feature extractor; calculate the mean squared error of the image feature extractor and determine whether it has converged; if the mean squared error has converged, end the training; if not, calculate the residual, propagate the residual backwards layer by layer to obtain the residual value of each layer, and update the parameter values of the weights and bias terms.

Further, the trainer is also configured to train the predictor so that it can process the single-column feature vector to obtain the sharpness value of the image.

Further, the trainer is configured to: input the training set into the trained image feature extractor to obtain the single-column feature vectors of the training set; and obtain, from the single-column feature vectors and the labels of the training set, the regression values of the dependent variable with respect to the independent variables.

Further, the system further comprises a verifier configured to verify the sharpness value measured by the image feature extractor in conjunction with the predictor.

Further, the verifier is configured to: input a reference image into the trained image feature extractor, which extracts features from the reference image and obtains a single-column reference feature vector; input the single-column reference feature vector into the trained predictor to compute a sharpness reference value of the reference image; calculate the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) from the computed sharpness reference value and the subjective sharpness value of the reference image; and determine whether the LCC value is greater than or equal to a first threshold and the SROCC value is greater than or equal to a second threshold; if so, the image feature extractor and the predictor are considered trained; if not, continue training the image feature extractor and the predictor.

Beneficial effects of the invention:

The system and method for measuring image sharpness values provided by the invention use the image feature extractor to perform fast convolution and downsampling on the image to be tested to obtain a single-column feature vector, and use the predictor to process the single-column feature vector to obtain the sharpness value of the image. After training, the system can measure the sharpness value of an image quickly; it is easy to train, uses few parameters, measures sharpness values accurately and in real time, and can be widely used in optical imaging systems and medical imaging systems.
Brief Description of the Drawings

The above and other aspects, features and advantages of embodiments of the present invention will become clearer from the following description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a block diagram of a preferred embodiment of a system for measuring image sharpness values according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of a system for measuring image sharpness values in an operating state according to an embodiment of the present invention;

FIG. 3 is a flowchart of a method of measuring image sharpness values according to an embodiment of the present invention;

FIG. 4 is a flowchart of a method of training an image feature extractor according to an embodiment of the present invention;

FIG. 5 is a flowchart of a method of training a predictor according to an embodiment of the present invention;

FIG. 6 is a flowchart of a method of verifying whether a measured sharpness value is accurate according to an embodiment of the present invention.
Detailed Description

Embodiments of the present invention will now be described in detail with reference to the accompanying drawings. The invention may, however, be embodied in many different forms and should not be construed as limited to the specific embodiments set forth herein; rather, these embodiments are provided to explain the principles of the invention and their practical application, so that others skilled in the art can understand the various embodiments of the invention and the modifications suited to particular intended uses. The same reference numerals are used throughout the specification and drawings to denote the same elements.

Current methods of measuring image sharpness values fall into three categories: full-reference (the perceptually optimal image quality is available for comparison), reduced-reference (partial information about the perceptually optimal image is available for comparison), and no-reference (no direct or indirect information about the image being evaluated is available). Embodiments of the present invention measure image sharpness values based on the principle of no-reference image sharpness measurement.

In addition, the present invention measures image sharpness values based on the principle of deep learning. Deep learning applies layer-by-layer feature transformations to the sample image, mapping the sample's representation from the original space into a new feature space, and automatically learns hierarchical features. This reduces manual parameter and feature selection, makes the result better suited to classification or feature visualization, avoids the limitations of hand-crafted features, and greatly improves the accuracy and efficiency of image sharpness measurement.

FIG. 1 is a block diagram of a preferred embodiment of a system for measuring image sharpness values according to an embodiment of the present invention. FIG. 2 is a schematic diagram of the system in an operating state.

Referring to FIG. 1 and FIG. 2, a system for measuring image sharpness values according to an embodiment of the present invention includes an image feature extractor 10, a predictor 20, a trainer 30, and a verifier 40.

The image feature extractor 10 includes an image receiving module 11, a convolution module 12, a downsampling module 13, and a transform processing module 14. The image receiving module 11 is configured to receive an image. The convolution module 12 is configured to extract a first feature image from a local region of the image; the downsampling module 13 is configured to downsample the first feature image obtained by convolution to obtain a second feature image of lower resolution; and the transform processing module 14 is configured to transform the second feature image produced by the downsampling module 13 into a single-column feature vector. In this embodiment, the single-column feature vector is a 200-dimensional vector, but the invention is not limited thereto.

In this embodiment, the image feature extractor 10 is preferably configured as a convolutional neural network (CNN). The image feature extractor 10 may be implemented as an FPGA circuit or a chip, but the invention is not limited thereto.

The predictor 20 includes an arithmetic processing module 21, which is configured to score the single-column feature vector to obtain the sharpness value of the image.
The arithmetic processing module 21 specifically includes a summation layer 211 and an output layer 212. The predictor 20 further includes an input layer 22 and a mode layer 23. The input layer 22 is configured to pass the single-column feature vector obtained by the image feature extractor 10 to the mode layer 23; it contains a number of first neurons equal to the dimension of the single-column feature vector extracted from the image. The mode layer 23 corresponds one-to-one with each sample datum in the single-column feature vector and contains a number of second neurons equal to the number of first neurons. The summation layer 211 contains only two third neurons and is fully connected to the mode layer 23; an operation is performed between the summation layer 211 and the mode layer 23 (as shown in the formula below), and the output layer 212 divides the two outputs of the summation layer 211 to obtain the sharpness value of the final image. The formula used by the predictor 20 to calculate the sharpness value of the image is:
$$\hat{Y}(X)=\frac{\sum_{i=1}^{n} Y_i \exp\!\big(-\tfrac{(X-X_i)^{T}(X-X_i)}{2\sigma^{2}}\big)}{\sum_{i=1}^{n} \exp\!\big(-\tfrac{(X-X_i)^{T}(X-X_i)}{2\sigma^{2}}\big)},$$
where X_i and Y are the sample observations, σ is the smoothing factor, and n is the number of samples.
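The input/mode/summation/output structure described above amounts to a Gaussian-weighted average of the stored training labels. The NumPy sketch below is a minimal rendering of that computation under the formula above; the smoothing factor in the demo call is enlarged so the random example data give non-zero weights (the embodiment reports 0.01 for its own features).

```python
import numpy as np

def grnn_predict(x: np.ndarray, X_train: np.ndarray, Y_train: np.ndarray, sigma: float) -> float:
    """Generalized regression: each mode-layer neuron holds one training vector X_i;
    the summation layer forms the two sums below and the output layer returns their quotient."""
    d2 = np.sum((X_train - x) ** 2, axis=1)  # squared distances to every stored sample
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # mode-layer activations
    return float(np.sum(w * Y_train) / (np.sum(w) + 1e-12))

rng = np.random.default_rng(0)
X_train = rng.random((50, 200))  # stored 200-dimensional single-column feature vectors
Y_train = rng.random(50)         # their sharpness labels
x_new = rng.random(200)          # single-column feature vector of the image under test
print(grnn_predict(x_new, X_train, Y_train, sigma=5.0))
```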
To reduce training difficulty, speed up computation, and measure image sharpness values quickly, the predictor 20 is preferably configured as a generalized regression neural network (GRNN). The predictor 20 may be implemented as an FPGA circuit or a chip, but the invention is not limited thereto. A generalized regression neural network is a variant of artificial neural networks with strong nonlinear mapping and generalization ability and is well suited to small-sample data. The invention is not limited thereto; for example, in another embodiment of the present invention, the predictor 20 may instead be a support vector regression (SVR) model.
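For the SVR variant, scikit-learn's SVR is one off-the-shelf way to realize the predictor; the kernel and regularization settings below are illustrative only and are not taken from the patent.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.random((50, 200))              # single-column feature vectors from the extractor
y_train = rng.random(50)                     # sharpness labels of the training set
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)  # hyper-parameters chosen only for illustration
svr.fit(X_train, y_train)
print(svr.predict(rng.random((1, 200))))     # predicted sharpness value for a new feature vector
```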
所述训练器30被构造为对图像特征提取器10和预测器20训练,以使图像特征提取器10和预测器20能够自动学习得到层次化的特征,从而实现基于深度学习的无参考图像清晰度值的测量,得到待测图像精确的清晰度值。具体地,所述训练器被构造为对图像特征提取器进行训练,以使图像特征提取器能够对图像进行卷积、降采样处理,从而得到单列特征向量。所述训练器还被构造为对预测器进行训练,以使所述预测器能够对所述单列特征向量进行运算处理,从而得到图像的清晰度值。训练器30可具体被配置为FPGA电路、芯片,但本发明并不限制于此。The trainer 30 is configured to train the image feature extractor 10 and the predictor 20 to enable the image feature extractor 10 and the predictor 20 to automatically learn hierarchical features, thereby achieving depth-based reference-free image clarity. The measurement of the degree value obtains an accurate sharpness value of the image to be tested. Specifically, the trainer is configured to train the image feature extractor to enable the image feature extractor to perform convolution and downsampling processing on the image to obtain a single column feature vector. The trainer is further configured to train the predictor to enable the predictor to perform an arithmetic process on the single column feature vector to obtain a sharpness value of the image. The trainer 30 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
所述验证器40被构造为验证所述图像特征提取器10配合所述预测器测量20得到的清晰度值。更具体地,验证器40被构造为验证所述图像特征提取器10配合所述预测器测量20得到的清晰度值是否精确或准确。验证器40可具体被配置为FPGA电路、芯片,但本发明并不限制于此。The verifier 40 is configured to verify the sharpness value obtained by the image feature extractor 10 in conjunction with the predictor measurement 20. More specifically, the verifier 40 is configured to verify whether the sharpness value obtained by the image feature extractor 10 in conjunction with the predictor measurement 20 is accurate or accurate. The verifier 40 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
The system for measuring image sharpness values according to embodiments of the present invention can be integrated into a machine system to score the acquired images and assess the imaging capability of the machine system, including but not limited to optical imaging systems and possibly also medical imaging systems.
Specifically, as another embodiment of the present invention, the system for measuring image sharpness values may be applied in an image capturing apparatus. In this embodiment, the image capturing apparatus can capture a burst of photographs, the system for measuring image sharpness values measures the sharpness value of each photograph of the burst, and the image capturing apparatus is configured to compare the plurality of sharpness values and output the photograph corresponding to the maximum sharpness value (that is, the photograph of highest quality). Applying the system for measuring image sharpness values in an image capturing apparatus therefore allows photographs of higher quality to be output for the user.
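A minimal sketch of this best-shot selection, assuming a hypothetical measure_sharpness(image) callable that stands for the trained extractor-plus-predictor pipeline:

    def pick_sharpest(burst_photos, measure_sharpness):
        # Score every photo of the burst with the sharpness-measurement system
        scores = [measure_sharpness(photo) for photo in burst_photos]
        best = max(range(len(scores)), key=lambda i: scores[i])  # index of the maximum sharpness value
        return burst_photos[best], scores[best]                  # the highest-quality photo and its score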
Further, as yet another embodiment of the present invention, the system for measuring image sharpness values may also be applied in an apparatus for judging image quality. In this embodiment, the system for measuring image sharpness values measures the sharpness value of the image to be tested, and a reference threshold is set. The apparatus for judging image quality is configured to compare the sharpness value of the image to be tested with the reference threshold: when the sharpness value is greater than the reference threshold, the image is judged to be clear; when the sharpness value is less than the reference threshold, the image is judged to be unclear.
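As an assumed sketch (the helper measure_sharpness and the threshold value are placeholders, not part of the disclosure), the clear/unclear decision reduces to a single comparison:

    def judge_quality(image, measure_sharpness, reference_threshold):
        score = measure_sharpness(image)              # sharpness value of the image under test
        return "clear" if score > reference_threshold else "unclear"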
The present invention also provides a method by which the system measures image sharpness values. The method of measuring image sharpness values according to embodiments of the present invention can be integrated into image enhancement algorithms for applications such as parameter optimization of those algorithms.
FIG. 3 is a flowchart of a method of measuring image sharpness values according to an embodiment of the present invention. Specifically, referring to FIG. 1, FIG. 2 and FIG. 3, and in combination with the above system for measuring image sharpness values, the method of measuring image sharpness values specifically includes:
In operation 210, the image feature extractor 10 is trained so that the image feature extractor 10 can sequentially perform convolution and downsampling processing on the image to obtain a single-column feature vector. Here, the image feature extractor 10 is trained by the trainer 30. It should be noted that in the system framework of this embodiment there is one and only one feature layer.
FIG. 4 is a flowchart of a method of training the image feature extractor according to an embodiment of the present invention. Specifically, referring to FIG. 4, the method of training the image feature extractor 10 specifically includes the following operations:
In operation 211, initialization is performed and a training set is input. Specifically, all convolution kernel weights and bias terms are initialized, and the sample images of the training set are input into the image feature extractor 10. Here, the training set includes sample images with accurate sharpness values.
In operation 212, the mean squared error of the image feature extractor 10 is calculated. Specifically, an output value O is computed from the sample image, and the output value O is compared with the sample label y to obtain the model error value E. Whether the model of the image feature extractor 10 has converged is judged from the error value; if it has converged, the training ends; if it has not converged, the residual of the output layer is calculated. The specific formulas for calculating the mean squared error and the residual of the image feature extractor 10 model are:
E = \frac{1}{2m} \sum_{i=1}^{m} \left( y^{(i)} - a^{(i)} \right)^2,

\delta^{(n_l)} = -\left( y - a^{(n_l)} \right) \cdot f'\left( z^{(n_l)} \right),
where m is the number of samples per batch, y is the sample label, n_l denotes the output layer, a is the output value, f is the activation function, and z denotes the neurons of the layer feeding the output layer.
In operation 213, the residual is back-propagated layer by layer to obtain the residual value of each layer. The residual value of each layer indicates the contribution of that node to the residual of the final output value. The formula for calculating the residual value of each layer is:
δ^{(l)} = ((W^{(l)})^T δ^{(l+1)}) · f′(z^{(l)}),
where W^{(l)} is the weight of layer l.
In operation 214, the parameter values of the weights and bias terms are updated according to the residual calculation formula of each layer. The formulas for updating the parameter values of the weights and bias terms are:
W^{(l)} = W^{(l)} - \alpha \frac{\partial E}{\partial W^{(l)}},

b^{(l)} = b^{(l)} - \alpha \frac{\partial E}{\partial b^{(l)}},
where b^{(l)} is the bias term of layer l and α is the learning rate.
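The error, residual and update rules of operations 212 to 214 can be illustrated with the following sketch; it assumes a single fully connected hidden layer with a sigmoid activation (the patented extractor is convolutional, so this is only a schematic of the update rules, not the disclosed network):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_batch(X, y, W1, b1, W2, b2, alpha=0.1):
        # Forward pass through one hidden layer and the output layer
        z1 = X @ W1 + b1
        a1 = sigmoid(z1)
        z2 = a1 @ W2 + b2
        a = sigmoid(z2)                                # output value a
        m = X.shape[0]                                 # number of samples per batch
        E = np.sum((y - a) ** 2) / (2.0 * m)           # mean squared error of the batch
        d2 = -(y - a) * a * (1.0 - a)                  # output-layer residual: -(y - a) * f'(z)
        d1 = (d2 @ W2.T) * a1 * (1.0 - a1)             # residual passed back one layer: (W^T d) * f'(z)
        W2 -= alpha * (a1.T @ d2) / m                  # update weights with learning rate alpha
        b2 -= alpha * d2.mean(axis=0)
        W1 -= alpha * (X.T @ d1) / m
        b1 -= alpha * d1.mean(axis=0)
        return E                                       # the caller checks E for convergence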
In operation 220, the predictor 20 is trained so that the predictor 20 can perform arithmetic processing on the single-column feature vector to obtain the sharpness value of the image. Here, the predictor 20 is trained by the trainer 30.
FIG. 5 is a flowchart of a method of training the predictor according to an embodiment of the present invention. Specifically, referring to FIG. 5, the method of training the predictor 20 includes the following operations:
In operation 221, the training set is input into the trained image feature extractor 10 to obtain the single-column feature vectors of the training set. Specifically, the single-column feature vector is a 200-dimensional vector.
In operation 222, the regression value of the dependent variable with respect to the independent variable is calculated according to the single-column feature vectors and labels of the training set.
In operation 230, after the image feature extractor 10 and the predictor 20 have been trained, the method further includes verifying whether the sharpness value measured by the image feature extractor 10 in cooperation with the predictor 20 is accurate. Here, the verifier 40 is used to verify whether the sharpness value measured by the image feature extractor 10 in cooperation with the predictor 20 is accurate.
FIG. 6 is a flowchart of a method of verifying whether the measured sharpness value is accurate according to an embodiment of the present invention. Specifically, referring to FIG. 6, the method of verifying whether the sharpness value measured by the image feature extractor 10 in cooperation with the predictor 20 is accurate specifically includes the following operations:
In operation 231, a reference image is input into the trained image feature extractor 10, and the image feature extractor 10 performs feature extraction on the reference image to obtain a single-column reference feature vector.
In operation 232, the trained predictor 20 receives the single-column reference feature vector and performs calculation to obtain a sharpness reference value of the reference image.
In operation 233, the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) are calculated from the computed sharpness reference value and the subjective sharpness value of the reference image. The formulas for calculating the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) are:
\mathrm{LCC} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n \, \sigma_x \, \sigma_y},

\mathrm{SROCC} = 1 - \frac{6 \sum_{i=1}^{n} \left( r_{x_i} - r_{y_i} \right)^2}{n (n^2 - 1)},
where n is the number of samples, \bar{x} and \bar{y} are the means of {x_1, x_2, …, x_n} and {y_1, y_2, …, y_n} respectively, σ_x and σ_y are their standard deviations, and r_{x_i} and r_{y_i} are the rank positions of x_i and y_i in their respective data sequences.
It should be noted that the Pearson linear correlation coefficient (LCC) is used to measure the accuracy of the prediction results, and the Spearman rank-order correlation coefficient (SROCC) is used to measure the monotonicity of the prediction results.
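A sketch of how the two verification indices can be computed from predicted scores x and subjective scores y (NumPy-based; ties in the ranking are not averaged here, and the acceptance helper is an illustrative assumption):

    import numpy as np

    def lcc(x, y):
        # Pearson linear correlation coefficient: accuracy of the predictions
        x, y = np.asarray(x, float), np.asarray(y, float)
        return np.sum((x - x.mean()) * (y - y.mean())) / (len(x) * x.std() * y.std())

    def srocc(x, y):
        # Spearman rank-order correlation coefficient: monotonicity of the predictions
        rx = np.argsort(np.argsort(x)).astype(float)   # rank positions r_x
        ry = np.argsort(np.argsort(y)).astype(float)   # rank positions r_y
        n = len(x)
        return 1.0 - 6.0 * np.sum((rx - ry) ** 2) / (n * (n ** 2 - 1))

    # Acceptance test of operation 234 (0.9 is the preferred threshold mentioned below)
    def training_accepted(pred, subj, first_threshold=0.9, second_threshold=0.9):
        return lcc(pred, subj) >= first_threshold and srocc(pred, subj) >= second_threshold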
In operation 234, it is judged whether the value of the Pearson linear correlation coefficient (LCC) is greater than or equal to a first threshold and whether the value of the Spearman rank-order correlation coefficient (SROCC) is greater than or equal to a second threshold. If so, the image feature extractor and the predictor have been trained and can produce accurate sharpness values; if not, the image feature extractor and the predictor have not been trained well enough, and training of the image feature extractor and the predictor continues.
The first threshold may be 0.8, 0.9, 0.91 or 0.92; preferably, in this embodiment, the first threshold is 0.9. The second threshold may likewise be 0.8, 0.9, 0.91 or 0.92; preferably, in this embodiment, the second threshold is 0.9. Of course, the present invention is not limited thereto, and the first threshold and the second threshold may be changed appropriately according to actual conditions.
It should be noted that the above operations of "training the image feature extractor 10", "training the predictor 20" and "verifying whether the sharpness value measured by the image feature extractor 10 in cooperation with the predictor 20 is accurate" are operations that need to be performed when the system for measuring image sharpness values is constructed; they do not have to be performed every time an image sharpness value is measured. Once the system for measuring image sharpness values has been trained and verified, the training and learning process of the system is complete. Therefore, in subsequent measurements of image sharpness values, no-reference measurement of the sharpness value can be realized, and the speed and accuracy of measurement can be greatly improved. The operations of measuring the image sharpness value are specifically as follows:
With continued reference to FIG. 1, FIG. 2 and FIG. 3, in operation 240, the image is convolved by the image feature extractor 10 to locally extract a first feature image from the image. Specifically, the image to be tested is received by the image receiving module 11, and the image is convolved by the convolution module 12. Here, the image receiving module 11 receives the image to be tested directly, without excessive preprocessing of the image to be tested, which improves working efficiency. Specifically, the image is convolved with a plurality of convolution kernels k, and the calculation formula is as follows:
X_j^{l} = f\left( \sum_{i \in M_j} X_i^{l-1} * k_{ij}^{l} + B_j^{l} \right),
where X_i is the received image to be tested, l denotes the layer index, k is the convolution kernel, M_j is the receptive field of the input layer, B is the bias term, and f is the activation function. Here, the number of convolution kernels is preferably eight, but the present invention is not limited thereto.
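As a sketch only (plain NumPy, implemented as cross-correlation as is usual for CNNs, kernel and bias values assumed to be already trained), one convolution layer of the extractor could be written as:

    import numpy as np

    def conv2d_valid(x, k):
        # 'valid' 2-D sliding-window correlation of map x with kernel k
        kh, kw = k.shape
        oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for r in range(oh):
            for c in range(ow):
                out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k)
        return out

    def conv_layer(in_maps, kernels, biases, f=np.tanh):
        # in_maps: maps X_i of the previous layer; kernels[j][i]: kernel k_ij; biases[j]: B_j
        out_maps = []
        for j, bank in enumerate(kernels):
            acc = sum(conv2d_valid(in_maps[i], k) for i, k in enumerate(bank))  # sum over the receptive field M_j
            out_maps.append(f(acc + biases[j]))                                 # X_j = f(sum + B_j)
        return out_maps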
In operation 250, the local first feature image obtained by the convolution processing is downsampled by the image feature extractor 10 to obtain a second feature image. Specifically, the downsampling module 13 performs downsampling processing on the local first feature image obtained by the convolution processing. The downsampling processing is used to reduce the spatial resolution of the model and eliminate offset and image distortion. The calculation formula of the downsampling processing is:
W_j = f(β_j p(y_j)),
where p is the sampling function and β is the weight coefficient.
In operation 260, the downsampled second feature image is transformed by the image feature extractor 10 to obtain a single-column feature vector. Here, after multiple convolution and downsampling operations, a number of feature images (feature vectors) are obtained, and all the feature vectors are transformed into one single-column feature vector. Specifically, the downsampled second feature image is transformed by the transform processing module 14.
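A possible sketch of the downsampling and the final column transformation, assuming 2x2 mean pooling for the sampling function p (the concrete choice of p is not stated above):

    import numpy as np

    def downsample(feature_map, beta=1.0, pool=2, f=np.tanh):
        # W_j = f(beta_j * p(y_j)); p is taken here as non-overlapping mean pooling
        h, w = feature_map.shape
        h, w = h - h % pool, w - w % pool                     # crop so the map tiles evenly
        blocks = feature_map[:h, :w].reshape(h // pool, pool, w // pool, pool)
        return f(beta * blocks.mean(axis=(1, 3)))

    def to_single_column(feature_maps):
        # Concatenate every downsampled feature map into one single-column feature vector
        return np.concatenate([fm.ravel() for fm in feature_maps]).reshape(-1, 1)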
In operation 270, the single-column feature vector is input into the predictor 20 and calculated to obtain the sharpness value of the image. Here, the operation processing module 21 calculates the image sharpness value from the single-column feature vector. Specifically, the single-column feature vector is input to the input layer 22, the input layer 22 passes the single-column feature vector obtained by the image feature extractor 10 to the pattern layer 23, the pattern layer 23 corresponds one-to-one with each sample datum in the single-column feature vector, the summation layer 211 is fully connected to the pattern layer 23, the operation between the summation layer 211 and the pattern layer 23 is performed as shown in the formula below, and the output layer 212 obtains the final sharpness value of the image by computing the quotient of the two outputs of the summation layer 211. The formula by which the predictor 20 calculates the sharpness value of the image is:
\hat{Y}(X) = \frac{\sum_{i=1}^{n} Y_i \exp\left( -\frac{(X - X_i)^T (X - X_i)}{2\sigma^2} \right)}{\sum_{i=1}^{n} \exp\left( -\frac{(X - X_i)^T (X - X_i)}{2\sigma^2} \right)},
where X_i and Y are the sample observations, σ is the smoothing factor, and n is the number of samples.
Preferably, in this embodiment of the present invention, the parameters of the system and method for measuring image sharpness values are specifically: (1) the number of image blocks extracted from the image to be tested is 200; (2) the size of each image block is [16 16]; (3) the size of the convolution kernel is [7 7]; (4) the number of convolution kernels is 8; (5) the number of iterations is 120; (6) the generalization parameter of the generalized regression neural network is 0.01; (7) the verification parameter is 1.8, the verification parameter being used to select the better-learned network. Of course, the present invention is not limited thereto.
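Collected as a configuration block for reference (the key names are illustrative; the values are those listed above):

    config = {
        "patches_per_image": 200,   # image blocks extracted from the image to be tested
        "patch_size": (16, 16),     # size of each image block
        "kernel_size": (7, 7),      # size of the convolution kernel
        "num_kernels": 8,           # number of convolution kernels
        "iterations": 120,          # number of training iterations
        "grnn_spread": 0.01,        # generalization parameter of the GRNN
        "verification_param": 1.8,  # parameter used to select the better-learned network
    }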
Since the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) take values in the range 0 to 1, the larger their values, the higher the accuracy and the better the performance of the system for measuring image sharpness values.
The comparison results for the Pearson linear correlation coefficient LCC are shown in Table 1. Referring to Table 1, whether on the LIVE and CSIQ databases or on the TID2008 and TID2013 databases, the system proposed in this embodiment, in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a generalized regression neural network (GRNN) (CNN-GRNN), can effectively predict the sharpness of images. Likewise, the system of another embodiment of the present invention, in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a support vector regression (SVR) (CNN-SVR), achieves the same technical effect. In particular, on the CSIQ, TID2008 and TID2013 databases, the sharpness values measured by the system formed with the generalized regression neural network (CNN-GRNN) and the system formed with support vector regression (CNN-SVR) are more reliable (roughly 0.05 to 0.16 higher), more effective and more accurate than those measured by the systems formed with the other three algorithms.
The comparison results for the Spearman rank-order correlation coefficient SROCC are shown in Table 2. Referring to Table 2, whether on the LIVE and CSIQ databases or on the TID2008 and TID2013 databases, the scheme in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a generalized regression neural network (GRNN) (CNN-GRNN), and the scheme in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a support vector regression (SVR) (CNN-SVR), are both superior to the other schemes. Furthermore, the system formed with the generalized regression neural network (CNN-GRNN) and the system formed with support vector regression (CNN-SVR) show higher accuracy and effectiveness on the TID2008 and TID2013 databases.
Table 1. Pearson linear correlation coefficient (LCC) test results
Table 2. Spearman rank-order correlation coefficient (SROCC) test results
In summary, according to embodiments of the present invention, the method of measuring image sharpness values can use the feature extractor to perform fast convolution and downsampling processing on the image to be tested to obtain a single-column feature vector, and use the predictor to perform fast calculation on the single-column feature vector to obtain the sharpness value of the image. After training is completed, the system for measuring image sharpness values of the present invention can measure the sharpness value of an image quickly; it is easy to train, uses few parameters, yields accurate sharpness values with strong real-time performance, and can be widely applied in optical imaging systems and medical imaging systems.
Although the present invention has been shown and described with reference to specific embodiments, those skilled in the art will understand that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the claims and their equivalents.

Claims (19)

  1. A method of measuring image sharpness values, comprising:
    performing convolution processing on an image by using an image feature extractor, so as to locally extract a first feature image from the image;
    performing downsampling processing on the first feature image obtained by the convolution processing by using the image feature extractor, so as to obtain a second feature image of lower resolution;
    performing a column transformation on the downsampled second feature image by using the image feature extractor, so as to obtain a single-column feature vector;
    calculating a sharpness value from the single-column feature vector by using a predictor, so as to obtain the sharpness value of the image.
  2. The method of measuring image sharpness values according to claim 1, wherein the image is subjected to convolution processing using a plurality of convolution kernels, and the calculation formula is as follows:
    X_j^{l} = f\left( \sum_{i \in M_j} X_i^{l-1} * k_{ij}^{l} + B_j^{l} \right),
    wherein X_i is the received image, l denotes the layer index, k is the convolution kernel, M_j is the receptive field of the input layer, B is the bias term, and f is the activation function.
  3. The method of measuring image sharpness values according to claim 1, wherein the downsampling processing is used to reduce the spatial resolution of the model and eliminate offset and image distortion, and the calculation formula of the downsampling processing is:
    W_j = f(β_j p(y_j)),
    wherein p is the sampling function and β is the weight coefficient.
  4. The method of measuring image sharpness values according to claim 1, wherein the formula by which the predictor calculates the sharpness value of the image is:
    \hat{Y}(X) = \frac{\sum_{i=1}^{n} Y_i \exp\left( -\frac{(X - X_i)^T (X - X_i)}{2\sigma^2} \right)}{\sum_{i=1}^{n} \exp\left( -\frac{(X - X_i)^T (X - X_i)}{2\sigma^2} \right)},
    wherein X_i and Y are the sample observations, σ is the smoothing factor, and n is the number of samples.
  5. The method of measuring image sharpness values according to claim 1, wherein before the convolution processing is performed on the image by using the image feature extractor, the method further comprises:
    training the image feature extractor so that the image feature extractor can perform convolution and downsampling processing on the image to obtain a single-column feature vector;
    training the predictor so that the predictor can perform arithmetic processing on the single-column feature vector to obtain the sharpness value of the image.
  6. The method of measuring image sharpness values according to claim 5, wherein after the image feature extractor and the predictor are trained, the method further comprises verifying the sharpness value measured by the image feature extractor in cooperation with the predictor, and the method of verifying the sharpness value measured by the image feature extractor in cooperation with the predictor comprises:
    inputting a reference image into the trained image feature extractor, the image feature extractor performing feature extraction on the reference image to obtain a single-column reference feature vector;
    inputting the single-column reference feature vector into the trained predictor and performing calculation to obtain a sharpness reference value of the reference image;
    calculating a Pearson linear correlation coefficient (LCC) and a Spearman rank-order correlation coefficient (SROCC) according to the calculated sharpness reference value and the subjective sharpness value of the reference image;
    judging whether the value of the Pearson linear correlation coefficient (LCC) is greater than or equal to a first threshold and whether the value of the Spearman rank-order correlation coefficient (SROCC) is greater than or equal to a second threshold; if so, the image feature extractor and the predictor have been trained; if not, the image feature extractor and the predictor have not been trained, and training of the image feature extractor and the predictor is continued.
  7. The method of measuring image sharpness values according to claim 6, wherein the formulas of the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) are:
    \mathrm{LCC} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n \, \sigma_x \, \sigma_y},
    \mathrm{SROCC} = 1 - \frac{6 \sum_{i=1}^{n} \left( r_{x_i} - r_{y_i} \right)^2}{n (n^2 - 1)},
    wherein n is the number of samples, \bar{x} and \bar{y} are the means of {x_1, x_2, …, x_n} and {y_1, y_2, …, y_n} respectively, σ_x and σ_y are their standard deviations, and r_{x_i} and r_{y_i} are the rank positions of x_i and y_i in their respective data sequences.
  8. The method of measuring image sharpness values according to claim 5, wherein the method of training the image feature extractor comprises:
    initializing and inputting a training set;
    calculating the mean squared error of the image feature extractor;
    judging whether the mean squared error converges; if the mean squared error converges, ending the training; if not, calculating a residual;
    back-propagating the residual layer by layer to obtain the residual value of each layer;
    updating the parameter values of the weights and bias terms.
  9. The method of measuring image sharpness values according to claim 8, wherein the formulas for calculating the mean squared error and the residual of the image feature extractor model are:
    E = \frac{1}{2m} \sum_{i=1}^{m} \left( y^{(i)} - a^{(i)} \right)^2,
    \delta^{(n_l)} = -\left( y - a^{(n_l)} \right) \cdot f'\left( z^{(n_l)} \right),
    wherein m is the number of samples per batch, y is the sample label, n_l denotes the output layer, a is the output value, f is the activation function, and z denotes the neurons of the layer feeding the output layer.
  10. The method of measuring image sharpness values according to claim 8, wherein the formula for calculating the residual value of each layer is:
    δ^{(l)} = ((W^{(l)})^T δ^{(l+1)}) · f′(z^{(l)}),
    wherein W^{(l)} is the weight of layer l.
  11. The method of measuring image sharpness values according to claim 8, wherein the calculation formulas for "updating the parameter values of the weights and bias terms" are:
    W^{(l)} = W^{(l)} - \alpha \frac{\partial E}{\partial W^{(l)}},
    b^{(l)} = b^{(l)} - \alpha \frac{\partial E}{\partial b^{(l)}},
    wherein b^{(l)} is the bias term of layer l and α is the learning rate.
  12. The method of measuring image sharpness values according to claim 5, wherein the method of training the predictor comprises:
    inputting the training set into the trained image feature extractor to obtain the local single-column feature vectors of the training set;
    obtaining the regression value of the dependent variable with respect to the independent variable according to the single-column feature vectors and labels of the training set.
  13. A system for measuring image sharpness values, wherein the system comprises:
    an image feature extractor configured to: receive an image; locally extract a first feature image from the image; perform blur processing on the extracted first feature image to obtain a second feature image of lower resolution; and transform the processed second feature image into a single-column feature vector;
    a predictor configured to calculate a score from the single-column feature vector to obtain the sharpness value of the image.
  14. The system for measuring image sharpness values according to claim 13, wherein the system further comprises a trainer configured to train the image feature extractor so that the image feature extractor can perform convolution and downsampling processing on the image to obtain a single-column feature vector.
  15. The system for measuring image sharpness values according to claim 14, wherein the trainer is configured to:
    initialize and input a training set to the image feature extractor;
    calculate the mean squared error of the image feature extractor and judge whether the mean squared error converges;
    if the mean squared error converges, end the training; if not, calculate a residual, back-propagate the residual layer by layer to obtain the residual value of each layer, and update the parameter values of the weights and bias terms.
  16. The system for measuring image sharpness values according to claim 14, wherein the trainer is further configured to train the predictor so that the predictor can perform arithmetic processing on the single-column feature vector to obtain the sharpness value of the image.
  17. The system for measuring image sharpness values according to claim 16, wherein the trainer is configured to:
    input the training set into the trained image feature extractor to obtain the local single-column feature vectors of the training set;
    obtain the regression value of the dependent variable with respect to the independent variable according to the single-column feature vectors and labels of the training set.
  18. The system for measuring image sharpness values according to claim 13, wherein the system further comprises a verifier configured to verify the sharpness value measured by the image feature extractor in cooperation with the predictor.
  19. The system for measuring image sharpness values according to claim 18, wherein the verifier is configured to:
    input a reference image into the trained image feature extractor, the image feature extractor performing feature extraction on the reference image to obtain a single-column reference feature vector, and the trained predictor receiving the single-column reference feature vector and performing calculation to obtain a sharpness reference value of the reference image;
    calculate a Pearson linear correlation coefficient (LCC) and a Spearman rank-order correlation coefficient (SROCC) according to the calculated sharpness reference value and the subjective sharpness value of the reference image;
    judge whether the value of the Pearson linear correlation coefficient (LCC) is greater than or equal to a first threshold and whether the value of the Spearman rank-order correlation coefficient (SROCC) is greater than or equal to a second threshold; if so, the image feature extractor and the predictor have been trained; if not, the image feature extractor and the predictor have not been trained, and training of the image feature extractor and the predictor is continued.
PCT/CN2016/096658 2016-08-22 2016-08-25 System and method for measuring image resolution value WO2018035794A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610702576.1A CN106355195B (en) 2016-08-22 2016-08-22 System and method for measuring image definition value
CN201610702576.1 2016-08-22

Publications (1)

Publication Number Publication Date
WO2018035794A1 true WO2018035794A1 (en) 2018-03-01

Family

ID=57844657

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096658 WO2018035794A1 (en) 2016-08-22 2016-08-25 System and method for measuring image resolution value

Country Status (2)

Country Link
CN (1) CN106355195B (en)
WO (1) WO2018035794A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443881A (en) * 2019-05-29 2019-11-12 重庆交通大学 The CNN-GRNN method of bridge floor metamorphosis identification Bridge Structural Damage
CN111191629A (en) * 2020-01-07 2020-05-22 中国人民解放军国防科技大学 Multi-target-based image visibility detection method
CN111242911A (en) * 2020-01-08 2020-06-05 来康科技有限责任公司 Method and system for determining image definition based on deep learning algorithm
CN111368875A (en) * 2020-02-11 2020-07-03 西安工程大学 Method for evaluating quality of super-resolution image based on stacking no-reference type
CN111885297A (en) * 2020-06-16 2020-11-03 北京迈格威科技有限公司 Image definition determining method, image focusing method and device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874957A (en) * 2017-02-27 2017-06-20 苏州大学 A kind of Fault Diagnosis of Roller Bearings
CN111798414A (en) * 2020-06-12 2020-10-20 北京阅视智能技术有限责任公司 Method, device and equipment for determining definition of microscopic image and storage medium
CN112330666B (en) * 2020-11-26 2022-04-29 成都数之联科技股份有限公司 Image processing method, system, device and medium based on improved twin network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100202668A1 (en) * 2007-08-15 2010-08-12 Indiana Research & Technology Corporation System And Method For Measuring Clarity Of Images Used In An Iris Recognition System
CN101872424A (en) * 2010-07-01 2010-10-27 重庆大学 Facial expression recognizing method based on Gabor transform optimal channel blur fusion
CN102881010A (en) * 2012-08-28 2013-01-16 北京理工大学 Method for evaluating perception sharpness of fused image based on human visual characteristics
CN104134204A (en) * 2014-07-09 2014-11-05 中国矿业大学 Image definition evaluation method and image definition evaluation device based on sparse representation
CN105740894A (en) * 2016-01-28 2016-07-06 北京航空航天大学 Semantic annotation method for hyperspectral remote sensing image
CN105809704A (en) * 2016-03-30 2016-07-27 北京小米移动软件有限公司 Method and device for identifying image definition

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101426134A (en) * 2007-11-01 2009-05-06 上海杰得微电子有限公司 Hardware device and method for video encoding and decoding
CN101996406A (en) * 2010-11-03 2011-03-30 中国科学院光电技术研究所 No-reference structural sharpness image quality evaluation method
CN102393960A (en) * 2011-06-29 2012-03-28 南京大学 Method for describing local characteristic of image
US9325985B2 (en) * 2013-05-28 2016-04-26 Apple Inc. Reference and non-reference video quality evaluation
CN103310486B (en) * 2013-06-04 2016-04-06 西北工业大学 Atmospheric turbulence degraded image method for reconstructing
CN103761521A (en) * 2014-01-09 2014-04-30 浙江大学宁波理工学院 LBP-based microscopic image definition measuring method
US9384422B2 (en) * 2014-04-04 2016-07-05 Ebay Inc. Image evaluation
CN104902267B (en) * 2015-06-08 2017-02-01 浙江科技学院 No-reference image quality evaluation method based on gradient information

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100202668A1 (en) * 2007-08-15 2010-08-12 Indiana Research & Technology Corporation System And Method For Measuring Clarity Of Images Used In An Iris Recognition System
CN101872424A (en) * 2010-07-01 2010-10-27 重庆大学 Facial expression recognizing method based on Gabor transform optimal channel blur fusion
CN102881010A (en) * 2012-08-28 2013-01-16 北京理工大学 Method for evaluating perception sharpness of fused image based on human visual characteristics
CN104134204A (en) * 2014-07-09 2014-11-05 中国矿业大学 Image definition evaluation method and image definition evaluation device based on sparse representation
CN105740894A (en) * 2016-01-28 2016-07-06 北京航空航天大学 Semantic annotation method for hyperspectral remote sensing image
CN105809704A (en) * 2016-03-30 2016-07-27 北京小米移动软件有限公司 Method and device for identifying image definition

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443881A (en) * 2019-05-29 2019-11-12 重庆交通大学 The CNN-GRNN method of bridge floor metamorphosis identification Bridge Structural Damage
CN110443881B (en) * 2019-05-29 2023-07-07 重庆交通大学 Bridge deck morphological change recognition bridge structure damage CNN-GRNN method
CN111191629A (en) * 2020-01-07 2020-05-22 中国人民解放军国防科技大学 Multi-target-based image visibility detection method
CN111191629B (en) * 2020-01-07 2023-12-15 中国人民解放军国防科技大学 Image visibility detection method based on multiple targets
CN111242911A (en) * 2020-01-08 2020-06-05 来康科技有限责任公司 Method and system for determining image definition based on deep learning algorithm
CN111368875A (en) * 2020-02-11 2020-07-03 西安工程大学 Method for evaluating quality of super-resolution image based on stacking no-reference type
CN111368875B (en) * 2020-02-11 2023-08-08 浙江昕微电子科技有限公司 Method for evaluating quality of non-reference super-resolution image based on stacking
CN111885297A (en) * 2020-06-16 2020-11-03 北京迈格威科技有限公司 Image definition determining method, image focusing method and device

Also Published As

Publication number Publication date
CN106355195B (en) 2021-04-23
CN106355195A (en) 2017-01-25

Similar Documents

Publication Publication Date Title
WO2018035794A1 (en) System and method for measuring image resolution value
WO2022002150A1 (en) Method and device for constructing visual point cloud map
CN106920215B (en) Method for detecting registration effect of panoramic image
CN108108764B (en) Visual SLAM loop detection method based on random forest
CN109190446A (en) Pedestrian's recognition methods again based on triple focused lost function
CN109086675B (en) Face recognition and attack detection method and device based on light field imaging technology
TWI823084B (en) Image repair method and device, storage medium, terminal
CN108960404B (en) Image-based crowd counting method and device
JP2021515927A (en) Lighting condition setting method, devices, systems and programs, and storage media
CN110879982A (en) Crowd counting system and method
CN113361542A (en) Local feature extraction method based on deep learning
CN111127435A (en) No-reference image quality evaluation method based on double-current convolutional neural network
CN111626927A (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN113888501A (en) Non-reference image quality evaluation method based on attention positioning network
CN107644203B (en) Feature point detection method for shape adaptive classification
CN109978897B (en) Registration method and device for heterogeneous remote sensing images of multi-scale generation countermeasure network
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN111696090B (en) Method for evaluating quality of face image in unconstrained environment
TWI817896B (en) Machine learning method and device
CN114913086B (en) Face image quality enhancement method based on generation countermeasure network
CN116740487A (en) Target object recognition model construction method and device and computer equipment
CN116681742A (en) Visible light and infrared thermal imaging image registration method based on graph neural network
CN114419716B (en) Calibration method for face image face key point calibration
CN113628261B (en) Infrared and visible light image registration method in electric power inspection scene
CN112734798B (en) On-line self-adaptive system and method for neural network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16913817

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/07/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16913817

Country of ref document: EP

Kind code of ref document: A1