WO2018035794A1 - System and method for measuring an image sharpness value - Google Patents

System and method for measuring an image sharpness value

Info

Publication number
WO2018035794A1
WO2018035794A1 (application PCT/CN2016/096658)
Authority
WO
WIPO (PCT)
Prior art keywords
image
value
sharpness
feature extractor
predictor
Prior art date
Application number
PCT/CN2016/096658
Other languages
English (en)
Chinese (zh)
Inventor
余绍德
江帆
陈璐明
姬治华
伍世宾
谢耀钦
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院
Publication of WO2018035794A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Definitions

  • The invention belongs to the technical field of image processing, and in particular relates to a system for measuring image sharpness values.
  • An object of the present invention is to provide a system and method for measuring image sharpness values with few parameters, a small amount of computation, easy training, high precision, and high speed.
  • the present invention provides a method of measuring image sharpness values, the method comprising:
  • The image feature extractor performs convolution processing on the image to extract a first feature image from a local region of the image.
  • A predictor then performs a sharpness calculation on the single-column feature vector to obtain the sharpness value of the image.
  • X i is the i-th input feature map (at the first layer, the received image to be tested), l denotes the layer index, k is the convolution kernel, M j is the receptive field of the input layer, b is the bias term, and f is the activation function.
  • The downsampling process is used to reduce the spatial resolution of the feature maps and to suppress offset and image distortion.
  • the calculation formula of the downsampling process is:
  • X i and Y are sample observations,
  • σ is the smoothing factor,
  • n is the number of samples.
  • the method further includes:
  • the image feature extractor is trained to enable the image feature extractor to perform convolution and downsampling processing on the image to obtain a single column feature vector
  • the predictor is trained to enable the predictor to perform an arithmetic process on the single-column feature vector to obtain a sharpness value of the image.
  • the method of obtaining the sharpness value includes:
  • A reference image is input into the trained image feature extractor, which extracts features of the reference image and obtains a single-column reference feature vector;
  • the trained predictor receives the single-column reference feature vector and performs an operation to obtain a sharpness reference value of the reference image.
  • n is the number of samples
  • x̄ and ȳ are the means of {x 1, x 2, …, x n} and {y 1, y 2, …, y n}, σ x and σ y are their standard deviations, and r xi and r yi are the rank positions of x i and y i in their respective data sequences.
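The two correlation measures defined above can be sketched in Python. This is an illustrative NumPy implementation, not code from the patent; the rank computation assumes no tied values.

```python
import numpy as np

def lcc(x, y):
    """Pearson linear correlation coefficient (LCC):
    sum of products of deviations, normalized by n and both standard deviations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum((x - x.mean()) * (y - y.mean())) / (len(x) * x.std() * y.std())

def srocc(x, y):
    """Spearman rank-order correlation coefficient (SROCC):
    the Pearson correlation applied to the rank positions r_xi, r_yi."""
    rx = np.argsort(np.argsort(x)) + 1  # rank of each x_i in its sequence (no ties)
    ry = np.argsort(np.argsort(y)) + 1
    return lcc(rx, ry)
```

LCC measures the accuracy of the predicted sharpness values against the subjective scores, while SROCC measures only their monotonic agreement, which is why the verifier computes both.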
  • the method for training the image feature extractor includes:
  • m is the number of samples per batch,
  • y is the sample label,
  • n l denotes the output layer,
  • a is the output value,
  • f is the activation function,
  • z is the weighted input of the output layer (from the upper-layer neurons).
  • ⁇ (l) ((W (l) ) T ⁇ (l+1) ) ⁇ f'(z (l) ),
  • W (l) is the weight of the first layer.
  • I the bias term of the first layer
  • the learning rate
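The layer-by-layer residual propagation and the weight/bias update described above can be sketched as follows. This is an illustrative NumPy sketch: the sigmoid activation and the plain gradient-descent update rule are assumptions, since the patent text does not fix them.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_residual(W, delta_next, z):
    """Residual of layer l from the layer above:
    delta^(l) = ((W^(l)).T @ delta^(l+1)) * f'(z^(l)), with sigmoid f assumed."""
    s = sigmoid(z)
    return (W.T @ delta_next) * s * (1.0 - s)

def update_params(W, b, delta_next, a_prev, alpha):
    """Gradient-descent update of layer l's weights and bias term with
    learning rate alpha (assumed standard update rule)."""
    W_new = W - alpha * np.outer(delta_next, a_prev)
    b_new = b - alpha * delta_next
    return W_new, b_new
```

Training repeats these two steps from the output layer backward until the average squared error E converges.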
  • the method of training the predictor includes:
  • the regression value of the dependent variable to the independent variable is obtained.
  • The present invention also provides a system for measuring image sharpness values, which uses the method of measuring image sharpness values described above to measure the sharpness value of an image, the system comprising:
  • the image feature extractor is configured to: receive an image; extract a first feature image from a local region of the image; and perform downsampling processing on the extracted first feature image to obtain a second feature image with a lower resolution;
  • the processed second feature image is transformed into a single column feature vector;
  • the predictor is configured to perform a scoring calculation on the single-column feature vector to obtain a sharpness value of the image.
  • system further includes a trainer configured to train the image feature extractor to enable the image feature extractor to perform convolution and downsampling processing on the image to obtain a single column feature vector.
  • the trainer is configured to:
  • if the model converges, the training ends; if not, the residual is calculated, the residual is propagated backward layer by layer to obtain the residual value of each layer, and the parameter values of the weights and bias terms are updated.
  • the trainer is further configured to train the predictor to enable the predictor to perform an operation process on the single-column feature vector to obtain a sharpness value of the image.
  • the trainer is configured to:
  • the regression value of the dependent variable to the independent variable is obtained.
  • system further includes a verifier configured to verify a sharpness value measured by the image feature extractor in conjunction with the predictor.
  • the verifier is configured to:
  • The reference image is input into the trained image feature extractor, which performs feature extraction on the reference image and obtains a single-column reference feature vector; the trained predictor receives the single-column reference feature vector and performs an operation to obtain a sharpness reference value of the reference image;
  • The system and method for measuring image sharpness values provided by the invention can perform fast convolution and downsampling processing on the image to be measured with the image feature extractor to obtain a single-column feature vector, and use a predictor to perform arithmetic processing on the single-column feature vector to get the sharpness value of the image.
  • The system for measuring image sharpness values of the invention can quickly measure the sharpness value of an image once training is finished: the training difficulty is low, few parameters are used, the measured sharpness value is highly accurate, and the real-time performance is strong. It can be widely used in optical imaging systems and medical imaging systems.
  • FIG. 1 is a block diagram of a preferred embodiment of a system for measuring image sharpness values in accordance with an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a system for measuring image sharpness values in an operating state according to an embodiment of the present invention
  • FIG. 3 is a flow chart of a method for measuring image sharpness values according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for training an image feature extractor according to an embodiment of the present invention
  • FIG. 5 is a flowchart of a method for training a predictor according to an embodiment of the present invention
  • FIG. 6 is a flow chart of a method for verifying whether a measured sharpness value is accurate according to an embodiment of the present invention.
  • Current methods for measuring image sharpness values are mainly divided into full-reference (using the perceptually optimal image as a comparison), reduced-reference (using partial information of the perceptually optimal image as a comparison), and no-reference (evaluating the image without any direct or indirect reference information).
  • Embodiments of the present invention are based on the principle of no reference image sharpness measurement to achieve measurement of image sharpness values.
  • the present invention also implements measurement of image sharpness values based on the principle of deep learning.
  • Deep learning transforms the feature representation of a sample from its original space into a new feature space by transforming the sample image layer by layer. It automatically learns hierarchical features, reduces manual parameter and feature selection, and is more conducive to classification or feature visualization; at the same time, it avoids the limitations of hand-crafted features, greatly improving the accuracy and efficiency of image sharpness measurement.
  • 1 is a block diagram of a preferred embodiment of a system for measuring image sharpness values in accordance with an embodiment of the present invention.
  • 2 is a schematic diagram of a system for measuring image sharpness values in an operating state according to an embodiment of the present invention.
  • a system for measuring image sharpness values includes an image feature extractor 10, a predictor 20, a trainer 30, and a verifier 40.
  • the image feature extractor 10 includes an image receiving module 11, a convolution module 12, a downsampling module 13, and a transform processing module 14.
  • The image receiving module 11 is configured to receive an image.
  • the convolution module 12 is configured to extract a first feature image in a local portion of the image;
  • the downsampling module 13 is configured to perform a downsampling process on the first feature image obtained by the convolution process to obtain a resolution a lower second feature image;
  • the transform processing module 14 is configured to transform the second feature image processed by the downsampling module 13 into a single column feature vector.
  • the single column feature vector is specifically a 200-dimensional vector.
  • the invention is not limited thereto.
  • The image feature extractor 10 is preferably implemented as a convolutional neural network (CNN).
  • the image feature extractor 10 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
  • the predictor 20 includes an arithmetic processing module 21.
  • The operation processing module 21 is configured to perform a scoring calculation on the single-column feature vector to obtain the sharpness value of the image.
  • the operation processing module 21 specifically includes a summation layer 211 and an output layer 212.
  • The predictor 20 further includes an input layer 22 and a mode layer 23. The input layer 22 is configured to pass the single-column feature vector obtained by the image feature extractor 10 to the mode layer 23; the input layer 22 includes a plurality of first neurons, and the number of first neurons equals the dimension of the single-column feature vector extracted from the image by the image feature extractor 10.
  • The mode layer 23 corresponds one-to-one to each sample datum in the single-column feature vector; the mode layer 23 also includes a plurality of second neurons, the number of second neurons being equal to the number of first neurons.
  • The summation layer 211 includes only two third neurons and is fully connected to the mode layer 23; an operation is performed between the summation layer 211 and the mode layer 23 (as shown in the formula below), and the output layer 212 computes the quotient of the two outputs of the summation layer 211 to obtain the sharpness value of the final image.
  • the formula for the predictor 20 to calculate the sharpness value of the image is:
  • X i , Y are sample observations
  • σ is the smoothing factor
  • n is the number of samples.
  • the predictor 20 is preferably formed by a generalized regression neural network configuration.
  • the predictor 20 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
  • The generalized regression neural network is abbreviated GRNN (General Regression Neural Network).
  • The generalized regression neural network is a variant of artificial neural networks with strong nonlinear mapping and generalization capabilities for small-sample data.
  • The present invention is not limited thereto; for example, as another embodiment of the present invention, the predictor 20 may also be a support vector regression (SVR) model.
  • The trainer 30 is configured to train the image feature extractor 10 and the predictor 20 to enable them to automatically learn hierarchical features, thereby achieving deep-learning-based no-reference image sharpness measurement and obtaining an accurate sharpness value of the image to be tested.
  • the trainer is configured to train the image feature extractor to enable the image feature extractor to perform convolution and downsampling processing on the image to obtain a single column feature vector.
  • the trainer is further configured to train the predictor to enable the predictor to perform an arithmetic process on the single column feature vector to obtain a sharpness value of the image.
  • the trainer 30 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
  • The verifier 40 is configured to verify the sharpness value obtained by the image feature extractor 10 in conjunction with the predictor 20. More specifically, the verifier 40 is configured to verify whether that sharpness value is accurate.
  • the verifier 40 may be specifically configured as an FPGA circuit or a chip, but the present invention is not limited thereto.
  • the system for measuring image sharpness values of embodiments of the present invention may be integrated into a machine system to score the acquired images to determine imaging capabilities of the machine system, including but not limited to optical imaging systems, and may also include medical imaging systems.
  • the system for measuring image sharpness values can be applied to an image pickup apparatus.
  • The image capturing apparatus is capable of continuously taking a plurality of photographs, and the system for measuring the image sharpness value measures the sharpness value of each photograph in the burst; the image capturing apparatus then compares the plurality of sharpness values and outputs the photograph corresponding to the maximum sharpness value (that is, the highest-quality photograph). It can be seen that, applied to an image pickup device, the system can output the higher-quality photograph for the user.
  • the system for measuring image sharpness values can also be applied to a device for judging image quality.
  • the system for measuring the image sharpness value measures the sharpness value of the image to be tested.
  • The means for judging image quality is configured to compare the sharpness value of the image to be tested with a reference threshold: when the sharpness value is greater than the reference threshold, the image is determined to be clear; when the sharpness value is less than the reference threshold, the image is determined to be unclear.
  • The present invention also provides a method for measuring image sharpness values using the above system.
  • The method for measuring image sharpness values of the embodiment of the invention can also be integrated into image enhancement algorithms to optimize the parameters of those algorithms.
  • FIG. 3 is a flow chart of a method of measuring image sharpness values in accordance with an embodiment of the present invention. Specifically, referring to FIG. 1 , FIG. 2 and FIG. 3 , in combination with the above system for measuring image sharpness value, the method for measuring image sharpness value specifically includes:
  • the image feature extractor 10 is trained to enable the image feature extractor 10 to sequentially perform convolution and downsampling processing on the image, thereby obtaining a single column feature vector.
  • the image feature extractor 10 is trained by the trainer 30. It should be noted that in the system framework of this embodiment, there is one and only one feature layer.
  • FIG. 4 is a flow chart of a method of training an image feature extractor in accordance with an embodiment of the present invention. Specifically, referring to FIG. 4, the method for training the image feature extractor 10 specifically includes the following operations:
  • Initialization is performed and a training set is input. Specifically, all convolution kernel weights and bias terms are initialized while the training-set sample images are input to the image feature extractor 10.
  • The training set includes sample images with precise sharpness values.
  • The average squared error of the image feature extractor 10 is calculated. Specifically, the output value O is computed for the sample image, and the model error value E is obtained from the output value O and the sample label y. Whether the image feature extractor 10 model converges is judged from the error value: if it converges, the training ends; if it does not converge, the residual of the output layer is calculated.
  • the specific formula for calculating the average squared error and residual of the image feature extractor 10 model is:
  • m is the number of samples per batch,
  • y is the sample label,
  • n l denotes the output layer,
  • a is the output value,
  • f is the activation function,
  • z is the weighted input of the output layer (from the upper-layer neurons).
  • the residuals are reversed layer by layer to obtain residual values for each layer.
  • The residual value of each layer indicates the contribution of each node to the residual of the final output value.
  • the formula for calculating the residual value of each layer is:
  • ⁇ (l) ((W (l) ) T ⁇ (l+1) ) ⁇ f'(z (l) ),
  • W (l) is the weight of the first layer.
  • the parameter values of the weights and offsets are updated according to each layer residual calculation formula.
  • the calculation formula of the parameter value of the update weight and offset term is:
  • b (l) is the bias term of the l-th layer,
  • α is the learning rate.
  • the predictor 20 is trained to enable the predictor 20 to perform an arithmetic process on the single column feature vector to obtain a sharpness value of the image.
  • the predictor 20 is trained using the trainer 30.
  • FIG. 5 is a flow chart of a method for training a predictor according to an embodiment of the present invention. Specifically, referring to FIG. 5, the method for training the predictor 20 includes the following operations:
  • the training set is input to the trained image feature extractor 10 to obtain a single column feature vector of the training set.
  • the single column feature vector is a 200-dimensional vector.
  • a regression value of the dependent variable to the independent variable is calculated according to the single column feature vector and the label of the training set.
  • In operation 230, after training the image feature extractor 10 and the predictor 20, the method further includes verifying whether the sharpness value measured by the image feature extractor 10 in conjunction with the predictor 20 is accurate.
  • the verifier 40 is used to verify whether the sharpness value measured by the image feature extractor 10 in conjunction with the predictor 20 is accurate.
  • FIG. 6 is a flow chart of a method for verifying whether a measured sharpness value is accurate according to an embodiment of the present invention. Specifically, referring to FIG. 6, the method for verifying whether the sharpness value measured by the image feature extractor 10 in conjunction with the predictor 20 is accurate includes the following operations:
  • the reference image is input to the trained image feature extractor 10, which performs feature extraction on the reference image and obtains a single column reference feature vector.
  • the trained predictor 20 inputs the single column reference feature vector and performs an operation to obtain a sharpness reference value of the reference image.
  • A Pearson linear correlation coefficient (LCC) and a Spearman rank-order correlation coefficient (SROCC) are calculated from the computed sharpness reference value and the subjective sharpness value of the reference image; the Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) are calculated as:
  • n is the number of samples
  • x̄ and ȳ are the means of {x 1, x 2, …, x n} and {y 1, y 2, …, y n}, σ x and σ y are their standard deviations, and r xi and r yi are the rank positions of x i and y i in their respective data sequences.
  • the Pearson linear correlation coefficient (LCC) is used to measure the accuracy of the prediction results.
  • The Spearman rank-order correlation coefficient (SROCC) is used to measure the monotonicity of the prediction results.
  • the first threshold value may be 0.8, 0.9, 0.91, 0.92.
  • the first threshold value is preferably 0.9.
  • the second threshold value may also be 0.8, 0.9, 0.91, 0.92.
  • the second threshold value is preferably 0.9.
  • the present invention is not limited thereto, and the first threshold value and the second threshold value may be appropriately changed according to actual conditions.
  • The method operations above need to be performed when constructing the system for measuring image sharpness values; it is not necessary to perform them each time an image sharpness value is measured.
  • Once the system for measuring image sharpness values has been trained and verified, the training and learning process of the system is complete. Thereafter, when the sharpness value of an image to be tested is measured, no-reference sharpness measurement can be realized, and the measurement speed and accuracy are greatly improved.
  • the operation of measuring the image sharpness value is as follows:
  • The image is subjected to convolution processing using the image feature extractor 10 to extract a first feature image from a local region of the image.
  • the image to be tested is received by the image receiving module 11, and the image is convoluted by the convolution module 12.
  • the image receiving module 11 directly receives the image to be tested, and does not require excessive preprocessing of the image to be measured, thereby improving work efficiency.
  • the image is convoluted using a plurality of convolution kernels k, and the calculation formula is as follows:
  • X i is the received image to be tested
  • l denotes the layer index
  • k is the convolution kernel
  • M j is the receptive field of the input layer
  • B is the bias term
  • f is the activation function.
  • The number of convolution kernels is preferably eight, but the present invention is not limited thereto.
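The convolution step described by the variables above can be sketched as follows. This is an illustrative NumPy implementation of the formula X_j^l = f(Σ_{i∈M_j} X_i^(l-1) * k_ij^l + b_j^l); the loop-based valid cross-correlation is written for clarity, not speed, and the tanh default activation is an assumption.

```python
import numpy as np

def conv_layer(X_prev, kernels, biases, f=np.tanh):
    """One convolution layer: for each output map j, sum the valid
    cross-correlations of every input map X_i with kernel k_ij over the
    receptive field M_j, add the bias b_j, and apply the activation f."""
    kh, kw = kernels[0][0].shape
    H, W = X_prev[0].shape
    out = []
    for k_j, b_j in zip(kernels, biases):        # one output map per kernel set
        acc = np.zeros((H - kh + 1, W - kw + 1))
        for X_i, k_ij in zip(X_prev, k_j):       # sum over the receptive field M_j
            for r in range(acc.shape[0]):
                for c in range(acc.shape[1]):
                    acc[r, c] += np.sum(X_i[r:r + kh, c:c + kw] * k_ij)
        out.append(f(acc + b_j))
    return out
```

With the patent's preferred parameters, `kernels` would hold eight 7x7 kernel sets applied to 16x16 image blocks.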
  • the local feature image obtained by the convolution process is downsampled by the image feature extractor 10 to obtain a second feature image.
  • the downsampling module 13 performs downsampling processing on the local first feature image obtained by the convolution process.
  • the downsampling process is used to reduce the spatial resolution of the model and eliminate offset and image distortion.
  • the calculation formula for downsampling is:
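The downsampling formula itself is not reproduced in the text; a common choice for the down(·) operator in such networks is non-overlapping mean pooling, sketched here purely as an assumption.

```python
import numpy as np

def mean_pool(X, n=2):
    """Non-overlapping n x n mean pooling, an assumed form of the down(.)
    operator: each n x n block of the feature image is replaced by its mean,
    reducing the spatial resolution by a factor of n in each dimension."""
    H, W = X.shape
    X = X[:H - H % n, :W - W % n]                    # crop to a multiple of n
    return X.reshape(X.shape[0] // n, n, X.shape[1] // n, n).mean(axis=(1, 3))
```

Averaging over each window is what gives the layer its tolerance to small offsets and distortions mentioned above.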
  • the downsampled second feature image is transformed by the image feature extractor 10 to obtain a single column feature vector.
  • After multiple convolution and downsampling operations, the image feature extractor 10 obtains several feature images (feature vectors), and all feature vectors are transformed into a single-column feature vector.
  • the second feature image after the downsampling process is transformed by the transform processing module 14.
  • the single column feature vector is input using the predictor 20, and the single column feature vector is calculated to obtain a sharpness value of the image.
  • the calculation of the image sharpness value is performed on the single-column feature vector by the arithmetic processing module 21.
  • A single-column feature vector is input to the input layer 22, and the input layer 22 passes the single-column feature vector obtained by the image feature extractor 10 to the mode layer 23; the mode layer 23 corresponds one-to-one with each sample datum in the single-column feature vector; the summation layer 211 is fully connected to the mode layer 23, an operation is performed between the summation layer 211 and the mode layer 23 (as shown in the formula below), and the output layer 212 computes the quotient of the two outputs of the summation layer 211 to give the final image sharpness value.
  • the formula for the predictor 20 to calculate the sharpness value of the image is:
  • X i , Y are sample observations
  • σ is the smoothing factor
  • n is the number of samples.
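The prediction step can be sketched by taking the standard GRNN formulation as an assumed form of this calculation (the exact formula is not reproduced in the text): the mode (pattern) layer computes one Gaussian activation per training sample, the summation layer forms the two sums, and the output layer takes their quotient.

```python
import numpy as np

def grnn_predict(x, X_train, Y_train, sigma=0.01):
    """Standard GRNN output for one query vector x (assumed formulation):
    Y_hat(x) = sum_i Y_i * exp(-D_i^2 / (2 sigma^2)) / sum_i exp(-D_i^2 / (2 sigma^2)),
    where D_i^2 is the squared distance from x to training sample X_i and
    sigma is the smoothing factor."""
    D2 = np.sum((X_train - x) ** 2, axis=1)          # squared distances D_i^2
    w = np.exp(-D2 / (2.0 * sigma ** 2))             # mode/pattern-layer activations
    return float(np.sum(w * Y_train) / np.sum(w))    # quotient of the two sums
```

The default `sigma=0.01` mirrors the smoothing parameter listed among the system parameters below, though that pairing is an assumption.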
  • The parameters of the system and method for measuring the image sharpness value are: (1) the number of image blocks to be tested is 200; (2) the size of each image block to be tested is [16 16]; (3) the size of the convolution kernel is [7 7]; (4) the number of convolution kernels is 8; (5) the number of iterations is 120; (6) the smoothing parameter of the generalized regression neural network is 0.01; (7) the verification parameter is 1.8, and the verification parameter is used for selecting among the learned networks.
  • the invention is not limited thereto.
  • The Pearson linear correlation coefficient (LCC) and the Spearman rank-order correlation coefficient (SROCC) range from 0 to 1; the larger their values, the higher the accuracy and performance of the system for measuring image sharpness values.
  • In this embodiment, the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a generalized regression neural network (GRNN); on the LIVE and CSIQ libraries, as well as the TID2008 and TID2013 libraries, this system (CNN-GRNN) is able to effectively predict the sharpness of the image.
  • In another embodiment of the present invention, a system (CNN-SVR) in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a support vector regression (SVR) can achieve the same technical effect.
  • The systems built with a generalized regression neural network (CNN-GRNN) and with support vector regression (CNN-SVR) are more accurate and reliable (by approximately 0.05 to 0.16) than the systems composed of the other three algorithms, and more efficient.
  • The solution in which the image feature extractor 10 is a convolutional neural network (CNN) and the predictor 20 is a generalized regression neural network (CNN-GRNN), and the solution in which the image feature extractor 10 is a CNN and the predictor 20 is a support vector regression (CNN-SVR), are superior to the other solutions.
  • the system consisting of a generalized regression neural network (CNN-GRNN) and a support vector regression (CNN-SVR) is more accurate and effective on the TID2008 and TID2013 libraries.
  • The method for measuring the image sharpness value can perform fast convolution and downsampling processing on the image to be measured with the feature extractor to obtain a single-column feature vector, and use the predictor to perform a quick calculation on the single-column feature vector to get the sharpness value of the image.
  • The system for measuring the image sharpness value of the invention can quickly measure the sharpness value of an image once training is finished: the training difficulty is low, few parameters are used, the measured sharpness value is highly accurate, and the real-time performance is strong. It can be widely used in optical imaging systems and medical imaging systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A system and method for measuring an image sharpness value are provided. The method of measuring an image sharpness value comprises: performing convolution on an image using an image feature extractor to extract a first feature image from a local region of the image (240); downsampling, using the image feature extractor, the first feature image obtained by convolution to obtain a second feature image with a lower resolution (250); performing a column transformation on the downsampled second feature image using the image feature extractor to obtain a single-column feature vector (260); and performing a sharpness-value calculation on the single-column feature vector using a predictor to obtain a sharpness value of the image (270). The method can quickly measure the sharpness value of an image, is easy to train, uses few parameters, and can measure the sharpness value accurately in real time. The method is widely applicable to optical imaging systems and medical imaging systems.
PCT/CN2016/096658 2016-08-22 2016-08-25 System and method for measuring image sharpness value WO2018035794A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610702576.1 2016-08-22
CN201610702576.1A CN106355195B (zh) 2016-08-22 2016-08-22 System for measuring image sharpness value and method therefor

Publications (1)

Publication Number Publication Date
WO2018035794A1 (fr) 2018-03-01

Family

ID=57844657

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096658 WO2018035794A1 (fr) 2016-08-25 2016-08-22 System and method for measuring image sharpness value

Country Status (2)

Country Link
CN (1) CN106355195B (fr)
WO (1) WO2018035794A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443881A (zh) * 2019-05-29 2019-11-12 Chongqing Jiaotong University CNN-GRNN method for identifying bridge structural damage from bridge deck shape changes
CN111191629A (zh) * 2020-01-07 2020-05-22 National University of Defense Technology Multi-target-based image visibility detection method
CN111242911A (zh) * 2020-01-08 2020-06-05 Laikang Technology Co., Ltd. Method and system for determining image sharpness based on a deep learning algorithm
CN111368875A (zh) * 2020-02-11 2020-07-03 Xi'an Polytechnic University Stacking-based no-reference super-resolution image quality evaluation method
CN111885297A (zh) * 2020-06-16 2020-11-03 Beijing Megvii Technology Co., Ltd. Image sharpness determination method, image focusing method and apparatus

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874957A (zh) * 2017-02-27 2017-06-20 Soochow University Rolling bearing fault diagnosis method
CN111798414A (zh) * 2020-06-12 2020-10-20 Beijing Yueshi Intelligent Technology Co., Ltd. Method, apparatus, device and storage medium for determining the sharpness of microscopic images
CN112330666B (zh) * 2020-11-26 2022-04-29 Chengdu Shuzhilian Technology Co., Ltd. Image processing method, system, apparatus and medium based on an improved Siamese network
CN113011408A (zh) * 2021-02-09 2021-06-22 Bank of China Co., Ltd., Suzhou Branch Character recognition and vehicle identification number recognition method and system for multi-frame image sequences

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100202668A1 (en) * 2007-08-15 2010-08-12 Indiana Research & Technology Corporation System And Method For Measuring Clarity Of Images Used In An Iris Recognition System
CN101872424A (zh) * 2010-07-01 2010-10-27 Chongqing University Facial expression recognition method based on optimal-channel fuzzy fusion of Gabor transforms
CN102881010A (zh) * 2012-08-28 2013-01-16 Beijing Institute of Technology Fused-image perceptual sharpness evaluation method based on human visual characteristics
CN104134204A (zh) * 2014-07-09 2014-11-05 China University of Mining and Technology Sparse-representation-based image sharpness evaluation method and apparatus
CN105740894A (zh) * 2016-01-28 2016-07-06 Beihang University Semantic annotation method for hyperspectral remote sensing images
CN105809704A (zh) * 2016-03-30 2016-07-27 Beijing Xiaomi Mobile Software Co., Ltd. Method and apparatus for recognizing image sharpness

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101426134A (zh) * 2007-11-01 2009-05-06 Shanghai Jade Microelectronics Co., Ltd. Hardware apparatus and method for video encoding and decoding
CN101996406A (zh) * 2010-11-03 2011-03-30 Institute of Optics and Electronics, Chinese Academy of Sciences No-reference structural sharpness image quality evaluation method
CN102393960A (zh) * 2011-06-29 2012-03-28 Nanjing University Local feature description method for images
US9325985B2 (en) * 2013-05-28 2016-04-26 Apple Inc. Reference and non-reference video quality evaluation
CN103310486B (zh) * 2013-06-04 2016-04-06 Northwestern Polytechnical University Reconstruction method for images degraded by atmospheric turbulence
CN103761521A (zh) * 2014-01-09 2014-04-30 Ningbo Institute of Technology, Zhejiang University Microscopic image sharpness measurement method based on local binary patterns
US9384422B2 (en) * 2014-04-04 2016-07-05 Ebay Inc. Image evaluation
CN104902267B (zh) * 2015-06-08 2017-02-01 Zhejiang University of Science and Technology No-reference image quality evaluation method based on gradient information


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443881A (zh) * 2019-05-29 2019-11-12 Chongqing Jiaotong University CNN-GRNN method for identifying bridge structural damage from bridge deck shape changes
CN110443881B (zh) * 2019-05-29 2023-07-07 Chongqing Jiaotong University CNN-GRNN method for identifying bridge structural damage from bridge deck shape changes
CN111191629A (zh) * 2020-01-07 2020-05-22 National University of Defense Technology Multi-target-based image visibility detection method
CN111191629B (zh) * 2020-01-07 2023-12-15 National University of Defense Technology Multi-target-based image visibility detection method
CN111242911A (zh) * 2020-01-08 2020-06-05 Laikang Technology Co., Ltd. Method and system for determining image sharpness based on a deep learning algorithm
CN111368875A (zh) * 2020-02-11 2020-07-03 Xi'an Polytechnic University Stacking-based no-reference super-resolution image quality evaluation method
CN111368875B (zh) * 2020-02-11 2023-08-08 Zhejiang Xinwei Electronic Technology Co., Ltd. Stacking-based no-reference super-resolution image quality evaluation method
CN111885297A (zh) * 2020-06-16 2020-11-03 Beijing Megvii Technology Co., Ltd. Image sharpness determination method, image focusing method and apparatus

Also Published As

Publication number Publication date
CN106355195B (zh) 2021-04-23
CN106355195A (zh) 2017-01-25

Similar Documents

Publication Publication Date Title
WO2018035794A1 (fr) System and method for measuring image sharpness value
WO2022002150A1 (fr) Method and device for constructing a visual point cloud map
CN106920224 (zh) Method for evaluating the sharpness of stitched images
CN106920215 (zh) Method for detecting panoramic image registration quality
TWI823084 (zh) Image inpainting method and apparatus, storage medium, and terminal
CN109190446 (zh) Pedestrian re-identification method based on a triplet focal loss function
CN109086675 (zh) Face recognition and attack detection method and apparatus based on light-field imaging technology
CN106127741 (zh) No-reference image quality evaluation method based on an improved natural scene statistics model
CN109753891 (zh) Football player posture calibration method and system based on human keypoint detection
CN108960404 (zh) Image-based crowd counting method and device
CN111127435 (zh) No-reference image quality assessment method based on a two-stream convolutional neural network
JP2021515927 (ja) Lighting condition setting method, apparatus, system, program, and storage medium
CN114972085 (zh) Fine-grained noise estimation method and system based on contrastive learning
CN113628261 (zh) Infrared and visible-light image registration method for power inspection scenarios
WO2014201971A1 (fr) Object detection method and device for online training
CN106127234 (zh) No-reference image quality evaluation method based on a feature dictionary
CN105550649 (zh) Very-low-resolution face recognition method and system based on fully coupled locally-constrained representation
CN113361542 (zh) Local feature extraction method based on deep learning
CN109978897 (zh) Heterogeneous remote sensing image registration method and apparatus using a multi-scale generative adversarial network
CN112329662 (zh) Multi-view saliency estimation method based on unsupervised learning
TWI817896 (zh) Machine learning method and apparatus
CN111626927 (zh) Binocular image super-resolution method, system and apparatus using disparity constraints
CN116681742 (zh) Visible-light and infrared thermal image registration method based on graph neural networks
CN111696090 (zh) Face image quality assessment method in unconstrained environments
CN113111850 (zh) Human keypoint detection method, apparatus and system based on region-of-interest transformation

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16913817

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/07/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16913817

Country of ref document: EP

Kind code of ref document: A1