CN105913427A - Machine learning-based noise image saliency detecting method - Google Patents
- Publication number: CN105913427A (application CN201610222900.XA)
- Authority: CN (China)
- Prior art keywords: noise, image, amplitude, value, machine learning
- Prior art date: 2016-04-12
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0002—Inspection of images, e.g. flaw detection (G—Physics; G06—Computing, calculating or counting; G06T—Image data processing or generation, in general; G06T7/00—Image analysis)
- G06T5/70—Denoising; Smoothing (G06T5/00—Image enhancement or restoration)
- G06T2207/20028—Bilateral filtering (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20—Special algorithmic details; G06T2207/20024—Filtering details)
- G06T2207/20081—Training; Learning (G06T2207/20—Special algorithmic details)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a machine learning-based noise image saliency detection method comprising the following steps: 1, the noise image of each amplitude is denoised with several denoising parameters, and the optimal denoising parameter for each amplitude is obtained; 2, features are extracted from each noise image with a noise evaluation algorithm to obtain its noise value features, which form a noise value feature set; 3, the noise value feature set is used as the feature set of a machine learning algorithm, and a noise amplitude prediction model is obtained through the machine learning algorithm and five-fold cross-validation; 4, the corresponding noise images are predicted with the noise amplitude prediction model to obtain the predicted noise amplitude of each noise image; 5, each noise image is denoised using its predicted noise amplitude and the optimal denoising parameter corresponding to that amplitude, yielding a denoised image set; 6, the images in the denoised image set are processed with a saliency detection method to obtain the final saliency map. The method improves saliency detection performance on noise images.
Description
Technical Field
The invention relates to the technical field of image and video processing and computer vision, in particular to a noise image saliency detection method based on machine learning.
Background
Human senses mainly include vision, smell, taste, hearing and touch; humans rely on them to receive information from the outside world, and vision dominates among them. The human visual system can, within a short time, focus attention on the most important parts of an image, i.e. the parts of most interest to the human eye. With the advent of the multimedia age, the popularization of digital products and the spread of digitized images over networks, a large number of images are generated and transmitted every day. This vast amount of image data enriches life but also presents many challenges.
How to efficiently and accurately process these image resources is a critical issue. After discovering the selective attention mechanism of the human visual system, researchers proposed saliency detection methods that attempt to have computers mimic the human visual system. Saliency detection has been applied to image compression and encoding, image retrieval, image segmentation, target recognition, and content-aware image scaling, among others. For example, in image compression and encoding, salient regions are detected first and more detail is retained within them, so that the image is compressed while its more important details are preserved.
Visual saliency detection has been studied extensively; however, most saliency detection models are proposed for undistorted images and are evaluated on collections of undistorted images. Only a few papers have examined the effect of distorted images on saliency detection. Zhang et al. found that noise, blurring, and compression change low-level image features and proposed a bottom-up saliency detection model based on them. Zhang et al. also found that image quality distortion changes the saliency map and that these changes are linked to subjective image quality assessment. Gide and Karam evaluated five saliency detection models on an eye-movement dataset for image quality evaluation, covering blur, noise, and JPEG compression distortion. Mittal et al. extracted low-level features such as image brightness and contrast and predicted salient regions of JPEG-distorted images with a machine learning framework based on these features. Kim and Milanfar proposed a saliency detection model for noisy images based on a non-parametric regression framework.
In real life, most images are distorted, for example by peripheral devices such as the camera sensor and image processor, by jitter caused by shaking of the capture device, and by image compression. To make saliency detection more applicable to noise images, the invention provides a machine learning-based noise image saliency detection method.
Disclosure of Invention
The invention aims to provide a machine learning-based noise image saliency detection method that can improve the detection performance of saliency detection on noise images.
In order to achieve the purpose, the technical scheme of the invention is as follows: a noise image saliency detection method based on machine learning comprises the following steps:
step S1: denoising the noise image of each amplitude by adopting a plurality of denoising parameters respectively to obtain an optimal denoising parameter corresponding to each amplitude;
step S2: performing feature extraction on each noise image by using a noise evaluation algorithm to obtain the noise value feature of each noise image so as to form a noise value feature set P;
Step S3: set the noise value characteristicPAs a feature set of a machine learning algorithm, obtaining a noise amplitude prediction model of a noise image through the machine learning algorithm and a five-equal-division cross validation method;
step S4: predicting the corresponding noise image by adopting a noise amplitude prediction model to obtain a predicted noise amplitude value of each noise image;
step S5: carrying out denoising processing by using the predicted noise amplitude value of each noise image and the optimal denoising parameter corresponding to that amplitude, to obtain a denoised image set;
step S6: detecting the images in the denoised image set by using a saliency detection method to obtain the final saliency map.
Further, in the step S1, performing denoising processing on the noise image of each amplitude by using a plurality of denoising parameters, to obtain an optimal denoising parameter corresponding to each amplitude, specifically including the following steps:
step S11: denoising the noise image of each amplitude by using n Gaussian low-pass filtering denoising parameters, to obtain, for each amplitude, a denoised image set S covering the n denoising parameters;
step S12: calculating a saliency map of the denoised image set S by using a saliency detection method VA to obtain a saliency map set T of the denoised image;
step S13: evaluating the saliency map set T of the denoised images by using the evaluation index PR-AUC, and, for each amplitude, taking the denoising parameter that yields the highest average PR-AUC as the optimal denoising parameter of that amplitude.
Further, in step S2, a noise evaluation algorithm is used to perform feature extraction on each noise image to obtain its noise value feature, so as to form a noise value feature set P; this specifically comprises the following steps:
step S21: carrying out graying processing on the noise image to obtain a grayscale image I;
Step S22: processing the grayscale image I by bilateral filtering to obtain a bilateral filtering result image;
Step S23: calculating the difference between the grayscale image I and the bilateral filtering result image to obtain a difference image D;
Step S24: applying Canny edge detection to the grayscale image I to obtain an edge image E, and dilating the edge regions of E with a dilation operator to obtain a dilated edge image;
Step S25: calculating the noise magnitude evaluation value image M from the difference image D and the dilated edge image, where D_v denotes the value of pixel v in the difference image D, t denotes a pixel point, and N denotes the set of pixel points t whose value in the dilated edge image is 0 (the non-edge pixels);
step S26: uniformly dividing the noise magnitude evaluation value image M into 3 × 3 grid regions, and calculating a noise magnitude evaluation value for the whole image and for each grid region, where M_r denotes the corresponding region, r = 1, 2, …, 10 indexes the whole image and the 9 grid regions respectively, and M_(r,v) denotes the value of pixel v in region r; this yields the noise value feature set P = {P_1, P_2, …, P_10}.
Further, in the step S3, the noise value feature set P is used as the feature set of a machine learning algorithm, and a noise amplitude prediction model of the noise image is obtained through the machine learning algorithm and five-fold cross-validation, specifically comprising the following steps:
step S31: sorting the feature values P_1, P_2, …, P_10 of the noise value feature set P from small to large, using them as the feature set F of the machine learning algorithm, and randomly dividing the feature set F into five parts: F1, F2, F3, F4 and F5;
step S32: f2, F3, F4 and F5 are used as training data sets for machine learning, image distortion amplitudes corresponding to the training data sets in an image quality assessment database are used as training labels for machine learning, and a noise amplitude prediction model M1 is obtained through learning;
step S33: step S32 is repeated to obtain noise amplitude prediction models M2 when F1, F3, F4, and F5 are training data sets, noise amplitude prediction models M3 when F1, F2, F4, and F5 are training data sets, noise amplitude prediction models M4 when F1, F2, F3, and F5 are training data sets, and noise amplitude prediction models M5 when F1, F2, F3, and F4 are training data sets.
Further, in step S4, predicting the corresponding noise image by using a noise amplitude prediction model to obtain a predicted noise amplitude value of each noise image, specifically including the following steps:
step S41: predicting the image set corresponding to the feature set F1 by using a noise amplitude prediction model M1 to obtain a noise amplitude value prediction set V1;
step S42: repeating the method of the step S41, and predicting the image sets corresponding to the feature sets F2, F3, F4, and F5 by using noise amplitude prediction models M2, M3, M4, and M5, respectively, to obtain noise amplitude value prediction sets V2, V3, V4, and V5;
step S43: integrating the noise amplitude value prediction sets into V = {V1, V2, V3, V4, V5} to obtain the noise amplitude value prediction set V of the complete image set.
Further, in the step S5, performing denoising processing by using the predicted noise amplitude value of each noise image and the optimal denoising parameter corresponding to that amplitude, to obtain a denoised image set, specifically including the following steps:
step S51: for each noise image, finding a noise amplitude value corresponding to the noise image from a noise amplitude value prediction set V;
step S52: according to the noise amplitude value, applying Gaussian low-pass filtering to the noise image with the corresponding optimal denoising parameter to obtain the denoised image set FI.
Compared with the prior art, the invention has the following beneficial effects: the noise amplitude of a noise image is predicted by machine learning, denoising is then performed with the optimal denoising parameter suited to that amplitude, and finally the saliency map of the denoised image is computed with a saliency detection method, which improves the detection performance on noise images.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention.
Fig. 2 shows example pictures for step S2 of an embodiment of the present invention (for better display, the pixel values of (d), (g), and (h) in Fig. 2 are mapped to [0, 1]).
Fig. 3 is a flow chart of an overall method implementation of an embodiment of the present invention.
Fig. 4 is an original noise image and an exemplary picture of the final effect through steps S5 and S6 in an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the embodiments.
The invention provides a noise image saliency detection method based on machine learning, which comprises the following steps as shown in figures 1 and 3:
step S1: and (3) denoising the noise image of each amplitude by adopting various denoising parameters to obtain the corresponding optimal denoising parameter of each amplitude. In this embodiment, step S1 specifically includes the following steps:
step S11: denoising the noise image of each amplitude with 9 Gaussian low-pass filtering denoising parameter combinations (template sizes {3 × 3, 5 × 5, 7 × 7} and standard deviations {0.5, 0.7, 0.9}) to obtain, for each amplitude, a denoised image set S covering the 9 denoising parameters;
step S12: calculating the saliency maps of the denoised image set S by using the saliency detection method VA (a Markov chain based saliency detection model) to obtain a saliency map set T of the denoised images;
step S13: evaluating the saliency map set T of the denoised images by using the evaluation index PR-AUC (the area under the precision-recall curve), and, for each amplitude, taking the denoising parameter that yields the highest average PR-AUC as the optimal denoising parameter of that amplitude.
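A minimal Python sketch of this parameter search (steps S11-S13) is given below. It assumes OpenCV, NumPy and scikit-learn, represents the VA saliency model by a placeholder callable va_saliency, approximates PR-AUC with scikit-learn's average precision, and takes the ground-truth fixation maps as binary arrays; it illustrates the idea rather than reproducing the patent's reference implementation.

```python
# Illustrative sketch of steps S11-S13; everything beyond the 9 listed
# template-size / standard-deviation pairs is an assumption.
import itertools

import cv2
import numpy as np
from sklearn.metrics import average_precision_score

KERNELS = [3, 5, 7]           # template sizes 3x3, 5x5, 7x7
SIGMAS = [0.5, 0.7, 0.9]      # Gaussian standard deviations
PARAMS = list(itertools.product(KERNELS, SIGMAS))   # the 9 denoising parameter combinations


def pr_auc(saliency_map, ground_truth):
    """Approximate the area under the precision-recall curve by average precision."""
    return average_precision_score(ground_truth.ravel() > 0, saliency_map.ravel())


def best_param_for_amplitude(noisy_images, ground_truths, va_saliency):
    """Steps S11-S13 for one noise amplitude: return the (kernel, sigma) pair
    whose denoised saliency maps reach the highest mean PR-AUC."""
    scores = {}
    for k, s in PARAMS:
        denoised = [cv2.GaussianBlur(img, (k, k), s) for img in noisy_images]   # set S
        sal_maps = [va_saliency(img) for img in denoised]                        # set T
        scores[(k, s)] = np.mean([pr_auc(m, g) for m, g in zip(sal_maps, ground_truths)])
    return max(scores, key=scores.get)
```

The (kernel, sigma) pair returned for each amplitude is what step S5 later looks up.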
Step S2: performing feature extraction on each noise image by using a noise evaluation algorithm to obtain the noise value feature of each noise image so as to form a noise value feature set P. In this embodiment, as shown in Fig. 2, step S2 specifically includes the following steps:
step S21: carrying out graying processing on the noise image to obtain a grayscale image I (see Fig. 2(b));
step S22: processing the grayscale image I by bilateral filtering to obtain a bilateral filtering result image (see Fig. 2(c));
step S23: calculating the difference between the grayscale image I and the bilateral filtering result image to obtain a difference image D (see Fig. 2(d));
step S24: applying Canny edge detection to the grayscale image I to obtain an edge image E (see Fig. 2(e)), and dilating the edge regions of E with a dilation operator to obtain a dilated edge image (see Fig. 2(f));
step S25: calculating the noise magnitude evaluation value image M from the difference image D and the dilated edge image, where D_v denotes the value of pixel v in the difference image D, t denotes a pixel point, and N denotes the set of pixel points t whose value in the dilated edge image is 0 (the non-edge pixels);
step S26: uniformly dividing the noise magnitude evaluation value image M (see Fig. 2(g)) into 3 × 3 grid regions (see Fig. 2(h)), and calculating a noise magnitude evaluation value for the whole image and for each grid region, where M_r denotes the corresponding region, r = 1, 2, …, 10 indexes the whole image and the 9 grid regions respectively, and M_(r,v) denotes the value of pixel v in region r; this yields the noise value feature set P = {P_1, P_2, …, P_10}.
Step S3: using the noise value feature set P as the feature set of a machine learning algorithm, and obtaining a noise amplitude prediction model of the noise image through the machine learning algorithm and five-fold cross-validation. In this embodiment, step S3 specifically includes the following steps:
step S31: sorting the feature values P_1, P_2, …, P_10 of the noise value feature set P from small to large, using them as the feature set F of the machine learning algorithm, and randomly dividing the feature set F into five parts: F1, F2, F3, F4 and F5;
step S32: F2, F3, F4 and F5 are used as training data sets of machine learning, the image distortion amplitudes of the corresponding images in the image quality assessment database TID2013 are used as training labels of the machine learning, and a noise amplitude prediction model M1 is obtained through learning;
step S33: step S32 is repeated to obtain noise amplitude prediction models M2 when F1, F3, F4, and F5 are training data sets, noise amplitude prediction models M3 when F1, F2, F4, and F5 are training data sets, noise amplitude prediction models M4 when F1, F2, F3, and F5 are training data sets, and noise amplitude prediction models M5 when F1, F2, F3, and F4 are training data sets.
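A sketch of this five-fold model construction (steps S31-S33), assuming scikit-learn, is shown below. The patent does not name the learning algorithm, so support vector regression stands in as an example, and the TID2013 distortion amplitudes are passed in as the label vector.

```python
# Illustrative sketch of steps S31-S33; SVR is an assumed choice of learner.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR


def train_fold_models(features, amplitudes, seed=0):
    """features: (n_images, 10) noise value features; amplitudes: distortion levels (labels)."""
    X = np.sort(np.asarray(features), axis=1)      # S31: arrange P1..P10 from small to large
    y = np.asarray(amplitudes)
    folds = list(KFold(n_splits=5, shuffle=True, random_state=seed).split(X))   # F1..F5
    # S32/S33: model Mi is trained on the four folds that exclude fold Fi.
    models = [SVR().fit(X[train_idx], y[train_idx]) for train_idx, _ in folds]
    return models, folds, X
```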
Step S4: and predicting the corresponding noise image by adopting a noise amplitude prediction model to obtain a predicted noise amplitude value of each noise image. In this embodiment, step S4 specifically includes the following steps:
step S41: predicting the image set corresponding to the feature set F1 by using a noise amplitude prediction model M1 to obtain a noise amplitude value prediction set V1;
step S42: repeating the method of the step S41, and predicting the image sets corresponding to the feature sets F2, F3, F4, and F5 by using noise amplitude prediction models M2, M3, M4, and M5, respectively, to obtain noise amplitude value prediction sets V2, V3, V4, and V5;
step S43: integrating the noise amplitude value prediction sets into V = {V1, V2, V3, V4, V5} to obtain the noise amplitude value prediction set V of the complete image set.
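Continuing the hypothetical names from the step S3 sketch, steps S41-S43 then amount to letting each model predict its own held-out fold and merging the per-fold predictions:

```python
# Illustrative sketch of steps S41-S43.
import numpy as np


def predict_amplitudes(models, folds, X):
    """M1 predicts the images of F1, M2 those of F2, and so on (steps S41/S42);
    the per-fold predictions are merged into one set V (step S43)."""
    predicted = np.empty(len(X))
    for model, (_, test_idx) in zip(models, folds):
        predicted[test_idx] = model.predict(X[test_idx])
    return predicted
```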
Step S5: denoising each noise image by using its predicted noise amplitude value and the optimal denoising parameter corresponding to that amplitude to obtain a denoised image set. In this embodiment, as shown in Fig. 4, step S5 specifically includes the following steps:
step S51: for each noise image, finding a noise amplitude value corresponding to the noise image from a noise amplitude value prediction set V;
step S52: according to the noise amplitude value, applying Gaussian low-pass filtering to the noise image with the corresponding optimal denoising parameter to obtain the denoised image set FI.
Step S6: detecting the images in the denoised image set FI by using the saliency detection method VA to obtain the final saliency map.
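Steps S5 and S6 can be sketched as follows, again with OpenCV; optimal_params (the amplitude-to-parameter table from step S1), va_saliency (the VA model) and nearest_level (which rounds a predicted amplitude to one of the known amplitude levels) are hypothetical placeholders.

```python
# Illustrative sketch of steps S51, S52 and S6.
import cv2


def denoise_and_detect(noisy_images, predicted_amplitudes, optimal_params,
                       va_saliency, nearest_level):
    saliency_maps = []
    for img, amp in zip(noisy_images, predicted_amplitudes):
        k, s = optimal_params[nearest_level(amp)]     # S51: optimal (kernel, sigma) for this amplitude
        denoised = cv2.GaussianBlur(img, (k, k), s)   # S52: Gaussian low-pass filtering -> set FI
        saliency_maps.append(va_saliency(denoised))   # S6: saliency detection on the denoised image
    return saliency_maps
```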
The machine learning-based noise image saliency detection method provided by the invention considers the influence of noise images on saliency detection, exploits the correlation between the noise magnitude evaluation value features and the noise amplitudes in the image quality evaluation database TID2013 to build a machine learning noise prediction model, denoises the image with the denoising method and parameters set for its amplitude, and finally computes the saliency map of the denoised image with the saliency detection method VA. The method can effectively improve the detection performance of saliency detection on noise images and can be applied to image and video processing, computer vision, and related fields.
The above are preferred embodiments of the present invention; any changes made according to the technical scheme of the present invention that produce functional effects not exceeding the scope of the technical scheme belong to the protection scope of the present invention.
Claims (6)
1. A noise image saliency detection method based on machine learning is characterized by comprising the following steps:
step S1: denoising the noise image of each amplitude by adopting a plurality of denoising parameters respectively to obtain an optimal denoising parameter corresponding to each amplitude;
step S2: performing feature extraction on each noise image by using a noise evaluation algorithm to obtain the noise value feature of each noise image so as to form a noise value feature set P;
step S3: taking the noise value feature set P as the feature set of a machine learning algorithm, and obtaining a noise amplitude prediction model of the noise image through the machine learning algorithm and five-fold cross-validation;
step S4: predicting the corresponding noise image by adopting a noise amplitude prediction model to obtain a predicted noise amplitude value of each noise image;
step S5: carrying out denoising processing by using the predicted noise amplitude value of each noise image and the optimal denoising parameter corresponding to that amplitude, to obtain a denoised image set;
step S6: detecting the images in the denoised image set by using a saliency detection method to obtain the final saliency map.
2. The noise image saliency detection method based on machine learning according to claim 1, characterized in that: in the step S1, the denoising processing is performed on the noise image of each amplitude by using a plurality of denoising parameters, so as to obtain an optimal denoising parameter corresponding to each amplitude, which specifically includes the following steps:
step S11: denoising the noise image of each amplitude by using n Gaussian low-pass filtering denoising parameters, to obtain, for each amplitude, a denoised image set S covering the n denoising parameters;
step S12: calculating a saliency map of the denoised image set S by using a saliency detection method VA to obtain a saliency map set T of the denoised image;
step S13: evaluating the saliency map set T of the denoised images by using the evaluation index PR-AUC, and, for each amplitude, taking the denoising parameter that yields the highest average PR-AUC as the optimal denoising parameter of that amplitude.
3. The noise image saliency detection method based on machine learning according to claim 1, characterized in that: in step S2, a noise evaluation algorithm is used to perform feature extraction on each noise image to obtain its noise value feature, so as to form a noise value feature set P; this specifically comprises the following steps:
Step S21: carrying out graying processing on the noise image to obtain a grayscale image I;
Step S22: processing the grayscale image I by bilateral filtering to obtain a bilateral filtering result image;
Step S23: calculating the difference between the grayscale image I and the bilateral filtering result image to obtain a difference image D;
Step S24: applying Canny edge detection to the grayscale image I to obtain an edge image E, and dilating the edge regions of E with a dilation operator to obtain a dilated edge image;
Step S25: calculating the noise magnitude evaluation value image M from the difference image D and the dilated edge image, where D_v denotes the value of pixel v in the difference image D, t denotes a pixel point, and N denotes the set of pixel points t whose value in the dilated edge image is 0 (the non-edge pixels);
step S26: uniformly dividing the noise magnitude evaluation value image M into 3 × 3 grid regions, and calculating a noise magnitude evaluation value for the whole image and for each grid region, where M_r denotes the corresponding region, r = 1, 2, …, 10 indexes the whole image and the 9 grid regions respectively, and M_(r,v) denotes the value of pixel v in region r; this yields the noise value feature set P = {P_1, P_2, …, P_10}.
4. The noise image saliency detection method based on machine learning according to claim 3, characterized in that: in step S3, the noise value feature set P is used as the feature set of a machine learning algorithm, and a noise amplitude prediction model of the noise image is obtained through the machine learning algorithm and five-fold cross-validation, specifically comprising the following steps:
step S31: sorting the feature values P_1, P_2, …, P_10 of the noise value feature set P from small to large, using them as the feature set F of the machine learning algorithm, and randomly dividing the feature set F into five parts: F1, F2, F3, F4 and F5;
step S32: f2, F3, F4 and F5 are used as training data sets for machine learning, image distortion amplitudes corresponding to the training data sets in an image quality assessment database are used as training labels for machine learning, and a noise amplitude prediction model M1 is obtained through learning;
step S33: step S32 is repeated to obtain noise amplitude prediction models M2 when F1, F3, F4, and F5 are training data sets, noise amplitude prediction models M3 when F1, F2, F4, and F5 are training data sets, noise amplitude prediction models M4 when F1, F2, F3, and F5 are training data sets, and noise amplitude prediction models M5 when F1, F2, F3, and F4 are training data sets.
5. The noise image saliency detection method based on machine learning of claim 4 is characterized by: in step S4, predicting the corresponding noise image by using a noise amplitude prediction model to obtain a predicted noise amplitude value of each noise image, which specifically includes the following steps:
step S41: predicting the image set corresponding to the feature set F1 by using a noise amplitude prediction model M1 to obtain a noise amplitude value prediction set V1;
step S42: repeating the method of the step S41, and predicting the image sets corresponding to the feature sets F2, F3, F4, and F5 by using noise amplitude prediction models M2, M3, M4, and M5, respectively, to obtain noise amplitude value prediction sets V2, V3, V4, and V5;
step S43: integrating the noise amplitude value prediction sets into V = {V1, V2, V3, V4, V5} to obtain the noise amplitude value prediction set V of the complete image set.
6. The method as claimed in claim 5, wherein in step S5, denoising is performed by using the predicted noise amplitude value of each noise image and the optimal denoising parameter corresponding to the amplitude value, so as to obtain a set of denoised images, specifically comprising the following steps:
step S51: for each noise image, finding a noise amplitude value corresponding to the noise image from a noise amplitude value prediction set V;
step S52: according to the noise amplitude value, applying Gaussian low-pass filtering to the noise image with the corresponding optimal denoising parameter to obtain the denoised image set FI.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201610222900.XA (CN105913427B) | 2016-04-12 | 2016-04-12 | Machine learning-based noise image saliency detecting method
Publications (2)
Publication Number | Publication Date
---|---
CN105913427A | 2016-08-31
CN105913427B | 2017-05-10
Family
ID=56744939
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201610222900.XA (CN105913427B, Expired - Fee Related) | Machine learning-based noise image saliency detecting method | 2016-04-12 | 2016-04-12
Country Status (1)
Country | Link
---|---
CN | CN105913427B (en)
- 2016-04-12: CN application CN201610222900.XA (patent CN105913427B), status: not active (Expired - Fee Related)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140193066A1 (en) * | 2012-12-10 | 2014-07-10 | Brain Corporation | Contrast enhancement spiking neuron network sensory processing apparatus and methods |
CN105376506A (en) * | 2014-08-27 | 2016-03-02 | 江南大学 | Design of image pattern noise relevance predictor |
CN104463870A (en) * | 2014-12-05 | 2015-03-25 | 中国科学院大学 | Image salient region detection method |
CN104616259A (en) * | 2015-02-04 | 2015-05-13 | 西安理工大学 | Non-local mean image de-noising method with noise intensity self-adaptation function |
Non-Patent Citations (2)
Title |
---|
Zhang Qiaorong et al.: "Image salient region detection using multi-scale frequency-domain analysis", Journal of Harbin Engineering University *
Qian Xiaoliang et al.: "Visual saliency detection: an information-theoretic algorithm fusing long-term and short-term features", Journal of Electronics & Information Technology *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111788476A (en) * | 2018-02-26 | 2020-10-16 | 株式会社高迎科技 | Component mounting state inspection method, printed circuit board inspection apparatus, and computer-readable recording medium |
CN111788476B (en) * | 2018-02-26 | 2023-07-07 | 株式会社高迎科技 | Method for inspecting component mounting state, printed circuit board inspection device, and computer-readable recording medium |
CN108416756A (en) * | 2018-03-26 | 2018-08-17 | 福州大学 | A kind of region perceptual image denoising method based on machine learning |
CN108416756B (en) * | 2018-03-26 | 2021-11-02 | 福州大学 | Regional perception image denoising method based on machine learning |
CN111798658A (en) * | 2019-11-08 | 2020-10-20 | 方勤 | Traffic lane passing efficiency detection platform |
Also Published As
Publication number | Publication date |
---|---|
CN105913427B (en) | 2017-05-10 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | C06 | Publication |
 | PB01 | Publication |
 | C10 | Entry into substantive examination |
 | SE01 | Entry into force of request for substantive examination |
 | GR01 | Patent grant |
 | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170510