CN112949669A - Method for estimating Gaussian low-pass filtering parameters in digital image - Google Patents

Method for estimating Gaussian low-pass filtering parameters in digital image Download PDF

Info

Publication number
CN112949669A
CN112949669A (application CN201911256752.3A)
Authority
CN
China
Prior art keywords
gaussian low
convolutional neural
neural network
parameters
estimating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911256752.3A
Other languages
Chinese (zh)
Inventor
丁峰 (Ding Feng)
杨建权 (Yang Jianquan)
常杰 (Chang Jie)
朱国普 (Zhu Guopu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201911256752.3A priority Critical patent/CN112949669A/en
Publication of CN112949669A publication Critical patent/CN112949669A/en
Pending legal-status Critical Current

Classifications

    • G06F18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/214: Pattern recognition; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/044: Neural networks; architecture; recurrent networks, e.g. Hopfield networks
    • G06N3/045: Neural networks; architecture; combinations of networks


Abstract

The invention belongs to the technical field of image processing and multimedia information security, and relates to a method for estimating Gaussian low-pass filter parameters in a digital image, comprising the following steps: 1) performing grayscale conversion on the image information; 2) performing Gaussian low-pass filtering to obtain a training set; 3) building a convolutional neural network; 4) optimizing the network's high-level parameters; 5) training the convolutional neural network; 6) classifying based on softmax; 7) estimating the Gaussian low-pass filter parameters. The invention innovatively uses a convolutional neural network to solve the parameter estimation problem; unlike traditional parameter estimation, which requires highly specialized expertise to design a model, a convolutional neural network can achieve the goal simply by collecting data for training.

Description

Method for estimating Gaussian low-pass filtering parameters in digital image
Technical Field
The invention belongs to the technical field of image processing and multimedia information security, and relates to a method for estimating Gaussian low-pass filter parameters in a digital image.
Background
Digital images are widely used as a carrier for the transmission and diffusion of important visual information. Digital images bring convenience to people's lives, but if the information in an image is tampered with by a malicious party, the tampered image can pose a great threat to information security. Therefore, digital forensics, as a means of protecting the authenticity and integrity of images, has attracted widespread attention from researchers. With the development of technology in recent years, ordinary people without professional training have also gained the ability to tamper with images, and the traces of tampering have become increasingly difficult to perceive. Consequently, a large number of digital forensic algorithms are designed each year to cope with this severe information security situation.
To date, a wide variety of image editing operations with different effects has emerged. Therefore, in conventional digital forensics, corresponding detection algorithms must be designed for different image editing operations. In recent years, however, with the rapid development of deep learning, almost all image editing operations can be easily detected by deep learning. Against this background, merely detecting an image editing operation can no longer meet current forensic requirements. It is therefore desirable to extract more information from an image in order to further understand its editing and processing history.
Gaussian low-pass filtering is widely used in digital image processing as one of the most common means of image editing. Its largest application scenario is denoising: almost all images undergo denoising after capture to improve image quality. In this case, the strength of the Gaussian low-pass filter is usually set low so as to denoise without visibly blurring the image. It can also be used to increase image smoothness; in face processing in particular, the facial skin is often Gaussian low-pass filtered to remove wrinkles, blemishes, and the like, which can significantly improve the aesthetic effect. In that case, the Gaussian low-pass filter is set to a high strength to achieve the desired beautifying effect. In digital forensics there are a number of algorithms that detect Gaussian filtering; the traces left in an image by Gaussian low-pass filtering can now be detected essentially perfectly, but little work has focused on the parameters used in the filtering. If the core parameters of the Gaussian filter could be estimated, this would, as an auxiliary means, help to establish the complete editing history of an image and to further analyze the intent behind the editing operations.
Disclosure of Invention
In order to solve the problems in the background art, the invention provides a method for estimating Gaussian low-pass filter parameters in a digital image, which achieves parameter estimation by using a convolutional neural network to classify images filtered under different parameters.
The technical scheme for solving the problem is as follows: a method of estimating Gaussian low-pass filter parameters in a digital image, characterized in that it comprises the following steps:
1) performing grayscale conversion on the image information;
2) performing Gaussian low-pass filtering to obtain a training set;
3) building a convolutional neural network;
4) optimizing the network's high-level parameters;
5) training the convolutional neural network;
6) classifying based on softmax;
7) estimating the Gaussian low-pass filter parameters.
Preferably, step 1) performs grayscale conversion on the image information, specifically:
if Gaussian parameter estimation needs to be performed on a color image, the color image is first converted into a grayscale image; if the input is already a grayscale image, the next operation is performed directly without grayscale conversion.
Preferably, step 2) performs Gaussian low-pass filtering to obtain a training set, specifically:
the image information processed in step 1) is processed by Gaussian low-pass filters with different parameters and labeled for use in subsequent network training. The Gaussian window sizes chosen in the invention are 3 and 5, and the chosen standard deviations are [0.5, 1, 1.5, 2, 3, 5].
Preferably, step 3) builds the convolutional neural network, specifically:
in this step, a suitable convolutional neural network structure needs to be designed. A conventional convolutional neural network is typically composed of several visual modules, each comprising a convolutional layer, an activation layer, a pooling layer, and other functional layers. It is generally believed that the deeper the network and the more visual modules it contains, the stronger its learning capacity. In practice, however, stronger learning capacity is not always better: different problems call for networks of different depths and capacities. If an overly complex network is applied to a simple problem, it learns irrelevant features and patterns during self-learning, complicating the problem, causing overfitting, and reducing the network's classification performance.
Based on the inventors' design experience, network structures comprising four, five, six, and seven visual modules were tried first; after comprehensively comparing the performance and computational efficiency of networks at these depths, a structure based on six visual modules was finally chosen.
Preferably, step 4) optimizes the network's high-level parameters, specifically:
a convolutional neural network contains a large number of adjustable parameters. Beyond basic settings such as the learning rate and momentum, many higher-level parameters can be adjusted, such as the pooling mode, activation functions, convolution method, window size, and stride. Appropriate parameters can improve the network's discrimination performance, so accurate Gaussian parameter estimation requires optimizing these higher-level parameters. The two most representative, the pooling mode and the activation function, are chosen for illustration.
Pooling is an important down-sampling method for reducing feature dimensionality in convolutional neural networks. Because too many features burden the network's computation and may lead to negative effects such as overfitting and vanishing gradients, a pooling layer is typically added at the exit of each visual module to reduce the size of the extracted feature map. Common pooling is based on a 2 x 2 window, and the most representative modes are average pooling and maximum pooling. Repeated tests showed that maximum pooling is more suitable for the Gaussian parameter estimation problem, so all pooling in the network of the invention is unified as maximum pooling.
Activation functions are important components of neural networks. A conventional classifier is a linear classifier, which is limited in multi-class settings, so a classification mechanism with nonlinear characteristics must be introduced to complete multi-label classification. In a convolutional neural network, the activation function plays this role, and choosing appropriate activation functions allows the invention to better discriminate between images filtered with different parameters. Through experimentation, the inventors found that the most suitable strategy is to use the TanH function in the first and second visual modules and the ReLU function in all others.
In the invention, the inventors applied similar strategies to iteratively combine and optimize the various high-level parameters so as to obtain the best classification performance.
Preferably, step 5) trains the convolutional neural network, specifically:
the training set from step 2) is input into the convolutional neural network and training begins, continuing until the network's loss function converges. During this process, the training data propagate forward, and features are extracted as the data pass through each visual module for use in classification; meanwhile, owing to backpropagation, the gradients of the classification result with respect to the extracted features are propagated back to the shallower visual modules, which thereby learn to extract features more effectively from the feedback. This self-learning of features is the crucial difference between convolutional neural networks and other traditional machine learning methods. Through such training, the network's capability steadily improves and its classification ability is continuously strengthened.
Preferably, step 6) performs classification based on softmax, specifically:
after the convolutional neural network has extracted features and fully connected them, classification based on the extracted features is required. The invention selects a softmax layer to complete this operation; softmax is also the most common classification layer in convolutional neural networks.
Preferably, step 7) estimates the Gaussian low-pass filter parameters, specifically:
the trained convolutional neural network model attains strong classification capability, and the goal of estimating Gaussian low-pass filter parameters is achieved by classifying images filtered with different parameters. Any low-pass filtered image is input into the trained model, which classifies it and assigns a label; the Gaussian parameters can then be estimated from the assigned label.
The invention has the following advantages:
1. the method innovatively uses a convolutional neural network to solve the parameter estimation problem; unlike traditional parameter estimation, which requires highly specialized expertise to design a model, a convolutional neural network achieves the goal simply by collecting data for training;
2. unlike general classification problems based on image content, classifying images with identical content but different Gaussian parameters is very challenging and has high practical value and significance in digital forensics;
3. the method recasts the parameter estimation problem as a classification problem, and this processing flow for estimating Gaussian parameters is instructive and can be applied to similar parameter estimation forensics work in the future;
4. a traditional parameter estimation model can estimate only a single parameter of a single operation, whereas the method can estimate the window size and standard deviation of the Gaussian low-pass filter simultaneously, which has greater practical significance.
Drawings
Fig. 1 is a flow chart of a method of estimating gaussian low-pass filter parameters in a digital image.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings of the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.
The invention provides a novel method for estimating Gaussian low-pass filter parameters. Its basic principle is to estimate the parameters by using a convolutional neural network to classify images filtered under different parameters. The convolutional neural network learns autonomously and, given a suitable training data set, can accurately classify images filtered with various parameters. The trained network can then be used directly to blindly detect any input Gaussian-filtered image.
The general framework of the present invention is shown in fig. 1, and is described in detail below with reference to a specific flow chart.
1) Necessary gradation conversion
Because excess image information increases processing difficulty, the method mainly performs Gaussian parameter estimation on single-channel images, i.e., grayscale images. In practice, if Gaussian parameter estimation needs to be performed on a color image, the color image is first converted into a grayscale image; if the input is already a grayscale image, the next operation is performed directly without grayscale conversion.
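As an illustration of this step, a minimal grayscale conversion might look like the following sketch. The ITU-R BT.601 luma weights are an assumption for illustration; the patent does not specify which conversion formula is used.

```python
import numpy as np

def to_grayscale(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to a single-channel grayscale image.

    Uses the common ITU-R BT.601 luma weights (an assumption; the patent
    does not name a formula). A single-channel input passes through unchanged.
    """
    if image.ndim == 2:                      # already grayscale
        return image
    weights = np.array([0.299, 0.587, 0.114])
    return image[..., :3] @ weights

# A pure-red pixel maps to 0.299 of full intensity.
gray = to_grayscale(np.array([[[255, 0, 0]]], dtype=float))
```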
2) Gaussian low-pass filtering process
The main function of this module is to pre-process the original images to prepare a suitable training data set. In reality it is difficult to collect Gaussian-filtered images with known Gaussian parameters, so this module must be introduced to control the accuracy of the training set. It is well known that the performance of a convolutional neural network is influenced not only by the network structure and parameters but also, crucially, by the training data set: classification performance can only be maximized through training on a proper data set. Therefore, in this module the original images are processed by Gaussian low-pass filters with different parameters and labeled for use in subsequent network training. The Gaussian window sizes chosen in the invention are 3 and 5, and the chosen standard deviations are [0.5, 1, 1.5, 2, 3, 5].
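The training-set construction described here can be sketched as follows. This is an illustrative NumPy implementation, not the patent's code; the direct convolution, edge padding, and the enumeration of the 12 (window, sigma) classes are assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(window: int, sigma: float) -> np.ndarray:
    """Normalized 2-D Gaussian kernel of the given window size (3 or 5)."""
    half = window // 2
    ax = np.arange(-half, half + 1, dtype=float)
    g1 = np.exp(-ax**2 / (2.0 * sigma**2))
    kernel = np.outer(g1, g1)
    return kernel / kernel.sum()

def gaussian_filter(image: np.ndarray, window: int, sigma: float) -> np.ndarray:
    """Direct 2-D convolution with edge padding (illustrative, not optimized)."""
    k = gaussian_kernel(window, sigma)
    half = window // 2
    padded = np.pad(image, half, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(window):
        for dx in range(window):
            out += k[dy, dx] * padded[dy:dy + image.shape[0],
                                      dx:dx + image.shape[1]]
    return out

# Enumerate the 12 (window, sigma) combinations used in the patent.
WINDOWS = [3, 5]
SIGMAS = [0.5, 1, 1.5, 2, 3, 5]
LABELS = [(w, s) for w in WINDOWS for s in SIGMAS]
```

Each filtered image is then stored together with the index of its (window, sigma) pair as the class label for network training.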
3) Building suitable convolution neural network
In this step, a suitable convolutional neural network structure needs to be designed. A conventional convolutional neural network is typically composed of several visual modules, each comprising a convolutional layer, an activation layer, a pooling layer, and other functional layers. It is generally believed that the deeper the network and the more visual modules it contains, the stronger its learning capacity. In practice, however, stronger learning capacity is not always better: different problems call for networks of different depths and capacities. If an overly complex network is applied to a simple problem, it learns irrelevant features and patterns during self-learning, complicating the problem, causing overfitting, and reducing the network's classification performance.
Based on the inventors' design experience, network structures comprising four, five, six, and seven visual modules were tried first; after comprehensively comparing the performance and computational efficiency of networks at these depths, a structure based on six visual modules was finally chosen.
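As a rough intuition for the choice of depth: if each visual module ends in a 2 x 2 pooling layer, the feature map halves at every module, so depth is bounded by the input resolution. The 256 x 256 input size below is a hypothetical value for illustration; the patent does not state the network's input dimensions.

```python
def feature_map_sizes(input_size: int, num_modules: int) -> list:
    """Spatial size of the feature map after each visual module,
    assuming each module ends in a 2 x 2 pooling layer that halves
    the resolution and that convolutions preserve size."""
    sizes = []
    size = input_size
    for _ in range(num_modules):
        size //= 2
        sizes.append(size)
    return sizes

# With a hypothetical 256 x 256 input, six modules leave a 4 x 4 map.
print(feature_map_sizes(256, 6))  # [128, 64, 32, 16, 8, 4]
```

A seventh module would shrink the map to 2 x 2, leaving little spatial information, which is one plausible reason deeper variants gave no benefit.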
4) Optimizing network high-level parameters
A convolutional neural network contains a large number of adjustable parameters. Beyond basic settings such as the learning rate and momentum, many higher-level parameters can be adjusted, such as the pooling mode, activation functions, convolution method, window size, and stride. Appropriate parameters can improve the network's discrimination performance, so accurate Gaussian parameter estimation requires optimizing these higher-level parameters. The two most representative, the pooling mode and the activation function, are chosen for illustration.
Pooling is an important down-sampling method for reducing feature dimensionality in convolutional neural networks. Because too many features burden the network's computation and may lead to negative effects such as overfitting and vanishing gradients, a pooling layer is typically added at the exit of each visual module to reduce the size of the extracted feature map. Common pooling is based on a 2 x 2 window, and the most representative modes are average pooling and maximum pooling. Repeated tests showed that maximum pooling is more suitable for the Gaussian parameter estimation problem, so all pooling in the network of the invention is unified as maximum pooling.
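The 2 x 2 maximum pooling adopted here can be sketched in a few lines of NumPy (an illustration, not the patent's implementation; the feature-map values are hypothetical):

```python
import numpy as np

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    """2 x 2 max pooling with stride 2 on an H x W feature map
    (H and W assumed even): keep the maximum of each 2 x 2 block."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [0, 0, 1, 0],
                 [0, 9, 0, 1]], dtype=float)
pooled = max_pool_2x2(fmap)   # [[4, 8], [9, 1]]
```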
Activation functions are important components of neural networks. A conventional classifier is a linear classifier, which is limited in multi-class settings, so a classification mechanism with nonlinear characteristics must be introduced to complete multi-label classification. In a convolutional neural network, the activation function plays this role, and choosing appropriate activation functions allows the invention to better discriminate between images filtered with different parameters. Through experimentation, the inventors found that the most suitable strategy is to use the TanH function in the first and second visual modules and the ReLU function in all others.
In the invention, the inventors applied similar strategies to iteratively combine and optimize the various high-level parameters so as to obtain the best classification performance. The final network structure is shown in Fig. 1.
5) Training convolutional neural networks
This step requires the training set generated by the second module and the convolutional neural network structure designed by the third and fourth modules. The training set is input into the network and training begins, continuing until the network's loss function converges. During this process, the training data propagate forward, and features are extracted as the data pass through each visual module for use in classification; meanwhile, owing to backpropagation, the gradients of the classification result with respect to the extracted features are propagated back to the shallower visual modules, which thereby learn to extract features more effectively from the feedback. This self-learning of features is the crucial difference between convolutional neural networks and other traditional machine learning methods. Through such training, the network's capability steadily improves and its classification ability is continuously strengthened.
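The forward-pass/backpropagation interplay described above can be illustrated with a toy stand-in for the full network: a single linear layer trained with softmax cross-entropy on 12 classes. All sizes, data, and the learning rate below are hypothetical; the real network's layers are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy stand-in for the network: one linear layer, 12 output classes.
num_features, num_classes, lr = 16, 12, 0.1
W = rng.normal(scale=0.01, size=(num_features, num_classes))

x = rng.normal(size=(4, num_features))   # mini-batch of 4 feature vectors
y = np.array([0, 3, 7, 11])              # their (window, sigma) class labels

# Forward pass: class probabilities and cross-entropy loss.
p = softmax(x @ W)
loss_before = -np.log(p[np.arange(4), y]).mean()

# Backward pass: gradient of the loss w.r.t. W, then one descent step.
grad_logits = p.copy()
grad_logits[np.arange(4), y] -= 1.0
W -= lr * (x.T @ grad_logits) / 4

loss_after = -np.log(softmax(x @ W)[np.arange(4), y]).mean()
```

One such step lowers the loss; repeating it over the labeled training set is what drives the loss toward convergence.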
6) Softmax-based classification
After the neural network has extracted features and fully connected them, classification based on the extracted features is required. The invention selects a softmax layer to complete this operation; softmax is also the most common classification layer in convolutional neural networks.
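A softmax layer maps the network's final scores for the 12 parameter classes to a probability distribution, and the predicted class is the most probable one. A minimal sketch (the logit values are hypothetical):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Map a vector of class scores to a probability distribution."""
    z = logits - logits.max()        # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for the 12 (window, sigma) classes.
logits = np.array([0.1, 0.2, 2.5, 0.0, -1.0, 0.3,
                   0.1, 0.0, 0.2, -0.5, 0.4, 0.1])
probs = softmax(logits)
predicted_class = int(np.argmax(probs))   # index of the largest logit
```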
7) Estimating Gaussian low-pass filter parameters
The trained convolutional neural network model attains strong classification capability, and the goal of estimating Gaussian low-pass filter parameters is achieved by classifying images filtered with different parameters. Any low-pass filtered image is input into the trained model, which classifies it and assigns a label; the Gaussian parameters can then be estimated from the assigned label.
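The final mapping from a predicted class label back to Gaussian parameters can be a simple lookup table. The window-major enumeration order below is an assumption for illustration; the patent does not specify how the 12 classes are ordered.

```python
# Same parameter grid as the training-set construction step.
WINDOWS = [3, 5]
SIGMAS = [0.5, 1, 1.5, 2, 3, 5]
LABELS = [(w, s) for w in WINDOWS for s in SIGMAS]   # 12 classes

def estimate_parameters(predicted_class: int):
    """Map the network's predicted class index to
    (window size, standard deviation)."""
    return LABELS[predicted_class]
```

For example, under this assumed ordering class 0 corresponds to window 3 with standard deviation 0.5, and class 11 to window 5 with standard deviation 5.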
The parameter estimation performance of the invention is demonstrated through the following experiment.
In the simulation experiment, 10000 images were used as the database, all taken from the well-known digital forensics database BOSS. Of the 10000 images, 8000 were Gaussian low-pass filtered and used as the training data set, and the remaining 2000 were filtered and used as the validation data set. The parameters set in the experiment comprised the window size and the standard deviation: the window size is 3 or 5, and the standard deviation is chosen from 0.5, 1, 1.5, 2, 3, 5. To meet the requirements of parameter estimation, each window size is combined with each standard deviation in the simulation, finally yielding 12 classes of low-pass filtered images processed with different Gaussian parameters.
In the experiment, the proposed convolutional neural network is first trained on the training set until a high training accuracy is reached, and the accuracy of actual parameter estimation is then verified on the validation set. The network is first used for simple classification to verify its ability to distinguish images filtered with different parameters at a fixed window size; in this step the original (unfiltered) images are also introduced to verify the network's ability to detect Gaussian low-pass filtering. The simulation results are as follows.
Table 1 shows the discrimination accuracy between images filtered with different standard deviations at window size 3. For example, the entry 99.33% at row 3, column 2 means that when distinguishing filtering with standard deviation 1 from filtering with standard deviation 0.5, the accuracy reaches 99.33%. Since the entry at row i, column j equals the entry at row j, column i, the redundant entries are marked with x; the same convention is used below.
Table 1. Discrimination accuracy for filtered images with different standard deviations, window size 3
Filter standard deviation | 0.5     | 1       | 1.5     | 2       | 3       | Original image
0.5                       | x       | x       | x       | x       | x       | 95.21%
1                         | 99.33%  | x       | x       | x       | x       | 99.64%
1.5                       | 99.57%  | 99.80%  | x       | x       | x       | 99.85%
2                         | 99.82%  | 99.50%  | 98.70%  | x       | x       | 99.73%
3                         | 99.75%  | 99.32%  | 98.15%  | 97.33%  | x       | 99.55%
Table 2. Discrimination accuracy for filtered images with different standard deviations, window size 5
(Table 2 appears only as an image in the original patent document; its values are not available as text.)
Whether the network can distinguish different window sizes at a fixed standard deviation was also evaluated by a similar process; the simulation results are shown in Table 3.
Table 3. Accuracy in distinguishing Gaussian window sizes 3 and 5 at a fixed filter standard deviation
Filter standard deviation σ | 1       | 1.5     | 2.0     | 3.0
Discrimination accuracy     | 91.39%  | 98.20%  | 99.00%  | 99.25%
The three tables show that the proposed convolutional neural network can reliably distinguish images processed by Gaussian filtering with different parameters, and performs remarkably well at detecting Gaussian filtering traces.
In the final simulation, all the images are mixed together so that the network performs the full joint classification. This step is the core of the invention: by distinguishing different windows and different standard deviations simultaneously, the network becomes a qualified forensic parameter estimator. Through training and validation, the final classification accuracy of the neural network in this step reaches 96.95%, demonstrating high parameter estimation performance.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent structures or equivalent flow transformations made by using the contents of the specification and the drawings, or applied directly or indirectly to other related systems, are included in the scope of the present invention.

Claims (8)

1. A method of estimating Gaussian low-pass filter parameters in a digital image, comprising the steps of:
1) performing grayscale conversion on the image information;
2) performing Gaussian low-pass filtering to obtain a training set;
3) building a convolutional neural network;
4) optimizing the network's high-level parameters;
5) training the convolutional neural network;
6) classifying based on softmax;
7) estimating the Gaussian low-pass filter parameters.
2. The method for estimating Gaussian low-pass filter parameters in a digital image according to claim 1, wherein
step 1) performs grayscale conversion on the image information, specifically:
if Gaussian parameter estimation needs to be performed on a color image, the color image is converted into a grayscale image; if the input is already a grayscale image, the next operation is performed directly without grayscale conversion.
3. The method for estimating Gaussian low-pass filter parameters in a digital image according to claim 2, wherein
step 2) performs Gaussian low-pass filtering to obtain a training set, specifically:
the image information processed in step 1) is processed by Gaussian low-pass filters with different parameters and labeled for use as the training set.
4. The method for estimating Gaussian low-pass filter parameters in a digital image according to claim 3, wherein
step 3) builds a convolutional neural network comprising six visual modules, each visual module comprising a convolutional layer, an activation layer, and a pooling layer.
5. The method for estimating Gaussian low-pass filter parameters in a digital image according to claim 4, wherein
step 4) optimizes the network's high-level parameters, specifically:
the high-level parameters comprise the pooling mode and the activation functions;
wherein the pooling mode is maximum pooling;
and the activation function is the TanH function in the first and second visual modules and the ReLU function in all other visual modules.
6. The method for estimating Gaussian low-pass filter parameters in a digital image according to claim 5, wherein
step 5), training the convolutional neural network, specifically comprises:
inputting the training set of step 2) into the convolutional neural network and training the network until its loss function converges.
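Training until the loss converges can be illustrated with a toy stand-in for the network (a NumPy sketch of gradient descent on the cross-entropy loss; the single linear layer, learning rate, and synthetic data are assumptions, since the actual model is the six-module CNN of step 3)):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.random((64, 8))                    # stand-in for extracted features
y = rng.integers(0, 4, size=64)            # labels = indices of the sigmas
W = np.zeros((8, 4))
losses = []
for _ in range(200):
    p = softmax(X @ W)
    losses.append(-np.log(p[np.arange(64), y] + 1e-12).mean())
    grad = X.T @ (p - np.eye(4)[y]) / 64   # gradient of the cross-entropy loss
    W -= 0.3 * grad                        # plain gradient-descent update
```

The recorded losses decrease monotonically toward a plateau, which is the convergence criterion the claim refers to.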
7. The method for estimating Gaussian low-pass filter parameters in a digital image according to claim 6, wherein
step 6), classification based on softmax, specifically comprises:
after the convolutional neural network extracts the features and fully connects them, a softmax layer is used as the classification layer.
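The softmax classification layer of step 6) can be sketched as follows (a NumPy sketch; the feature vector and fully-connected weights are illustrative placeholders):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # subtract max for numerical stability
    return e / e.sum()

# Features from the last visual module are flattened and fully connected to
# one score per candidate parameter class; softmax turns the scores into a
# probability distribution over those classes.
features = np.array([0.2, -1.0, 0.5, 0.1])
W = np.array([[ 0.3, -0.2,  0.1],
              [ 0.0,  0.4, -0.1],
              [ 0.2,  0.1,  0.5],
              [-0.3,  0.2,  0.0]])
probs = softmax(features @ W)
predicted_class = int(np.argmax(probs))
```

The class with the highest probability is taken as the network's decision, which the final step converts back into a filter parameter.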
8. The method for estimating Gaussian low-pass filter parameters in a digital image according to claim 7, wherein
step 7), estimating the Gaussian low-pass filter parameters, specifically comprises:
inputting any low-pass-filtered image into the trained model for classification and labeling; the Gaussian low-pass filter parameter is then estimated from the resulting label.
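The final estimation step then amounts to mapping the predicted class label back to a filter parameter (the candidate sigma values below are illustrative; the real mapping is fixed when the training set of step 2) is labeled):

```python
# One label per Gaussian parameter used to build the training set
# (illustrative values; the actual candidates come from the training labels).
SIGMA_BY_LABEL = {0: 0.5, 1: 1.0, 2: 1.5, 3: 2.0}

def estimate_sigma(predicted_label):
    """Translate the classifier's output label into the estimated sigma."""
    return SIGMA_BY_LABEL[predicted_label]

estimated = estimate_sigma(2)   # label produced by the trained classifier
```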
CN201911256752.3A 2019-12-10 2019-12-10 Method for estimating Gaussian low-pass filtering parameters in digital image Pending CN112949669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911256752.3A CN112949669A (en) 2019-12-10 2019-12-10 Method for estimating Gaussian low-pass filtering parameters in digital image


Publications (1)

Publication Number Publication Date
CN112949669A 2021-06-11

Family

ID=76225402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911256752.3A Pending CN112949669A (en) 2019-12-10 2019-12-10 Method for estimating Gaussian low-pass filtering parameters in digital image

Country Status (1)

Country Link
CN (1) CN112949669A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104715257A (en) * 2013-12-11 2015-06-17 中国科学院深圳先进技术研究院 Image median filtering detection method and device
US20160321523A1 (en) * 2015-04-30 2016-11-03 The Regents Of The University Of California Using machine learning to filter monte carlo noise from images
WO2018045602A1 (en) * 2016-09-07 2018-03-15 华中科技大学 Blur kernel size estimation method and system based on deep learning
US20180114109A1 (en) * 2016-10-20 2018-04-26 Nokia Technologies Oy Deep convolutional neural networks with squashed filters
CN109284530A (en) * 2018-08-02 2019-01-29 西北工业大学 Space non-cooperative target appearance rail integration method for parameter estimation based on deep learning
CN110472545A (en) * 2019-08-06 2019-11-19 中北大学 The classification method of the power components image of taking photo by plane of knowledge based transfer learning


Non-Patent Citations (1)

Title
FENG DING, ET AL.: "Real-time estimation for the parameters of Gaussian filtering via deep learning", JOURNAL OF REAL-TIME IMAGE PROCESSING, vol. 17, pages 17-27 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination