CN105825511B - A kind of picture background clarity detection method based on deep learning - Google Patents
- Publication number
- CN105825511B CN105825511B CN201610155947.9A CN201610155947A CN105825511B CN 105825511 B CN105825511 B CN 105825511B CN 201610155947 A CN201610155947 A CN 201610155947A CN 105825511 B CN105825511 B CN 105825511B
- Authority
- CN
- China
- Prior art keywords
- picture
- layer
- pixel
- background
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a picture background clarity detection method based on deep learning. The method uses a convolutional neural network (CNN) to extract features and, from the extracted CNN features, effectively classifies pictures according to their background clarity. At the same time, using the method of transfer learning, the network is pre-trained on the ImageNet picture set, which possesses a large number of known labels; this overcomes the defect that the sample picture set contains few pictures with known background clarity values and yields good initial CNN parameters. A small number of sample pictures with known background clarity values are then used to further adjust the parameters so that the CNN parameters adapt to the picture set to be detected. The adjusted CNN parameters can then be used to detect the background clarity of the pictures to be detected. The detection method of the invention enables background clarity detection to reach high accuracy.
Description
Technical field
The present invention relates to a picture background clarity detection method based on deep learning, mainly concerning the application of deep learning within machine learning, and belongs to the technical field of artificial-intelligence picture recognition.
Background technology
The concept of deep learning originates from research on artificial neural networks; a multilayer perceptron containing several hidden layers is a typical deep learning structure. Deep learning combines low-level features to form more abstract high-level representations (attribute categories or features), thereby discovering distributed feature representations of the data. The BP algorithm, the classic algorithm for training multilayer networks, performs very poorly in practice for networks containing more than a few layers. The local minima that are ubiquitous in the non-convex objective function of a deep structure (one involving multiple layers of nonlinear processing units) are a main source of training difficulty.
Most current learning methods for classification, regression and the like are shallow-structure algorithms. Their limitation is a restricted ability to represent complex functions given finite samples and computing resources, so their generalization ability is inevitably constrained for complex classification problems. Deep learning, by learning a deep nonlinear network structure, can approximate complex functions and characterize the distribution of the input data, and it exhibits a powerful ability to learn the essential features of a data set from a small number of samples. Deep learning is a feature-learning method: it transforms raw data, through a series of simple but nonlinear models, into higher-level, more abstract representations. With enough such transformations, extremely complex functions can be learned. At its core, deep learning does not design the features of each layer by manual engineering; instead, they are acquired from data by a general-purpose learning procedure.
The convolutional neural networks (CNNs) proposed by LeCun et al. were the first truly multilayer-structure learning algorithms, and CNNs are the core technology used in the present invention: a CNN exploits spatial correlation to reduce the number of parameters and thereby improve the performance of BP training. Convolutional neural networks have achieved good results in the field of image recognition, for example in recognizing handwritten characters. However, the network structure has a large influence on recognition effect and efficiency. To improve recognition performance, the present invention designs and implements a new convolutional neural network structure by reusing small convolution kernels, which effectively reduces the number of training parameters while improving recognition accuracy. Comparison experiments against algorithms that achieved good results in the world-class ILSVRC image-recognition challenge verify the effectiveness of this structure.
The training process of a convolutional neural network requires a large number of labeled samples; if the number of labeled samples is insufficient, the system easily overfits. Jeff Donahue et al. constructed the DeCaf framework, whose idea is to first pre-train on a picture set containing a large number of labeled samples to adjust the parameters of the convolutional neural network system, and then use transfer learning to migrate the parameters of the whole system to the picture set to be trained. In this way only a small number of labeled samples are needed to classify accurately.
There are currently many kinds of deep-learning picture recognition, such as handwritten characters and license-plate numbers, but the uses of convolutional neural networks have not yet been fully developed, and at present there is no good method for artificial-intelligence recognition of environmental visibility in pictures. The visibility of a picture background, i.e. the degree of blur of the things in the background, is ignored by most current picture-recognition processes, which recognize the objects in the picture while overlooking useful information in the background environment. The present invention mainly solves this problem. Recognizing the visibility of picture backgrounds has great practical value: for example, this patent can be applied in practice to recognize haze levels from pictures, so its application prospects are very wide.
Invention content
The technical problem to be solved by the present invention is to provide a picture background clarity detection method based on deep learning that detects the background clarity of a picture, i.e. the degree of blur of the things in the background, extracts useful information from the background environment, and provides a reference for picture recognition.
The present invention uses following technical scheme to solve above-mentioned technical problem:
The present invention adopts the following technical scheme to solve the above technical problem. A picture background clarity detection method based on deep learning includes the following steps:
Step 1: Convert the labeled pictures in the ImageNet picture library, together with the sample pictures that are not in ImageNet but have known background clarity values, into grayscale pictures of 256*256 pixels.
Step 2: Pre-train on the converted grayscale pictures of ImageNet: extract the features of all grayscale pictures with the convolutional neural network and classify them, compute the loss function, and adjust the convolution parameters with stochastic gradient descent until the loss falls within a predetermined range, obtaining the initially adjusted convolution parameters.
Step 3: For the converted grayscale sample pictures that are not in ImageNet but have known background clarity values, extract features with the convolutional neural network based on the initially adjusted convolution parameters of Step 2 and classify them to obtain their clarity values. Compare these with the actual clarity values, compute the loss function, and continue to adjust the convolution parameters with stochastic gradient descent until the loss falls within the predetermined range, obtaining the finally adjusted convolution parameters.
Step 4: Convert the picture whose clarity is to be detected into a 256*256-pixel grayscale picture, extract features with the convolutional neural network based on the finally adjusted convolution parameters of Step 3, and classify to obtain the clarity value of the picture to be detected.
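The loss-computation-and-adjustment loop of Steps 2 and 3 can be sketched as repeated stochastic-gradient-descent updates of a softmax classifier over the five clarity grades. This is an illustrative NumPy sketch, not the patented system: the 50-dimensional feature vector, the learning rate, and the random data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.01, (5, 50))   # 50 CNN features -> 5 clarity grades (assumed shapes)
b = np.zeros(5)

def softmax(z):
    z = z - z.max()                # stabilise against overflow
    e = np.exp(z)
    return e / e.sum()

def sgd_step(x, y, W, b, lr=0.1):
    """One SGD update of the cross-entropy loss for a single sample (in place)."""
    p = softmax(W @ x + b)
    loss = -np.log(p[y])
    grad = p.copy()
    grad[y] -= 1.0                 # d(loss)/d(logits) for softmax + cross-entropy
    W -= lr * np.outer(grad, x)
    b -= lr * grad
    return loss

x = rng.normal(size=50)            # stand-in for CNN-extracted features
y = 2                              # true clarity grade index
losses = [sgd_step(x, y, W, b) for _ in range(200)]
```

Repeating the update drives the loss down, mirroring "adjust the parameters until the loss falls within a predetermined range".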
As a preferred solution of the present invention, the convolutional neural network comprises, in order from input to output: an input layer, a first convolutional layer, a first downsampling layer, a second convolutional layer, a second downsampling layer, a fully connected layer, and an output layer. Excluding the input and output layers, the first convolutional layer, the first downsampling layer, the second convolutional layer, the second downsampling layer, and the fully connected layer occupy layers 1, 2, 3, 4, and 5 of the convolutional neural network, respectively.
As a preferred solution of the present invention, the convolution process formula of the first convolutional layer is x^l = Σ_{i=1}^{9} Σ_{j=1}^{9} w_{ij} · x_{ij}^{l-1} + b, where l = 1, x^l denotes the value of a pixel output after convolution by the first convolutional layer, x_{ij}^{l-1} denotes the value of the pixel in row i, column j of the corresponding window of the input layer, w is the convolution parameter, and b is the offset.
As a preferred solution of the present invention, the downsampling process formula of the first downsampling layer is x^l = β · Σ_{i=1}^{4} Σ_{j=1}^{4} x_{ij}^{l-1} + b, where l = 2, x^l denotes the value of a pixel output after sampling by the first downsampling layer, x_{ij}^{l-1} denotes the value of the pixel in row i, column j of the corresponding window of the first convolutional layer, β is the downsampling parameter, and b is the offset.
As a preferred solution of the present invention, the fully connected layer includes two full-connection processes, whose formula is x^l = Σ_k w_k^l · x_k^{l-1}, where l = 5 for the first full connection and l = 6 for the second, x^l denotes the value of a unit output after full connection, k denotes the unit index (k = 1, …, 576 for the first full connection and k = 1, …, 50 for the second), and w_k^l is the weight value.
Compared with the prior art, adopting the above technical scheme gives the present invention the following technical effects:
1. When sample pictures are insufficient, the picture background clarity detection method based on deep learning of the present invention uses the idea of transfer learning: it first pre-trains on the ImageNet picture set, which contains a large number of known labels, to obtain CNN parameters, and then further adjusts the CNN parameters so that they adapt to the picture set to be detected, making detection of the picture set to be detected more accurate.
2. The picture background clarity detection method based on deep learning of the present invention solves the problem of detecting background clarity in pictures, which is very useful in practical applications such as identifying haze levels and air quality.
Description of the drawings
Fig. 1 is the overall architecture diagram of the picture background clarity detection method based on deep learning of the present invention.
Fig. 2 is the flow chart of the picture background clarity detection method based on deep learning of the present invention.
Fig. 3 is the internal structure diagram of the convolutional neural network of the present invention.
Specific implementation mode
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings. The embodiments described with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting the claims.
Because a given picture may have any pixel size, while the input picture pixel size required by the convolutional neural network is fixed, pictures must first be preprocessed and converted to the same pixel size. Since the ImageNet pictures are all converted to 256*256-pixel pictures for processing, the input pixel size during training is likewise fixed at 256*256 pixels; if the pixel size of a picture to be processed is not 256*256, pixel conversion is performed first to convert it into a 256*256-pixel picture. Because the clarity of a picture has no strong correlation with its color, all pictures are first converted to grayscale. The present invention divides the clarity of picture backgrounds into five grades according to the clarity value, namely excellent, good, medium, poor, and very poor, for better analysis and testing.
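A minimal sketch of this preprocessing in NumPy, under two assumptions the patent does not specify: ITU-R BT.601 luma weights for the RGB-to-grayscale conversion, and nearest-neighbour index selection for the resize.

```python
import numpy as np

def to_gray_256(img):
    """Convert an HxWx3 RGB picture into a 256x256 grayscale picture."""
    # BT.601 luma weights (an assumed choice; any grayscale conversion would do here)
    gray = img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114
    h, w = gray.shape
    # Nearest-neighbour resampling to the fixed 256x256 input size
    rows = np.arange(256) * h // 256
    cols = np.arange(256) * w // 256
    return gray[np.ix_(rows, cols)]

img = np.random.default_rng(1).integers(0, 256, (480, 640, 3)).astype(float)
out = to_gray_256(img)
```

Every picture, whatever its original size, ends up as one fixed-size grayscale array ready for the network input layer.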
The invention mainly comprises three processes: a pre-training process, an adjustment process, and an actual detection process. The pre-training process trains on the labeled ImageNet picture set; its purpose is to obtain initial CNN parameters. The adjustment process uses a small number of sample pictures with known background clarity values to adjust the CNN parameters so that they adapt to the picture set to be detected. The adjusted parameters can then be used to detect the pictures to be detected.
The convolutional neural network (CNN) is the core technology of the invention. A CNN is a feedforward neural network whose artificial neurons respond to surrounding units within a limited coverage area; it performs outstandingly on large-scale image processing. It comprises convolutional layers (alternating convolutional layers) and downsampling (pooling) layers. The initial stages consist of convolutional and downsampling layers. The units of a convolutional layer are organized into feature maps; within a feature map, each unit is connected, through a set of weights called a filter, to a local patch of the feature maps of the previous layer, and this local weighted sum is then passed through a nonlinear function such as a ReLU. All units in one feature map share the same filter, while feature maps in different layers use different filters. This structure is used for two reasons. First, in array data such as images, the values near a given value are often highly correlated and can form distinctive local features that are easy to detect. Second, the local statistics of different locations are largely location-independent; that is, a feature that appears in one place may equally appear elsewhere, so units at different locations can share weights and detect the same pattern. Mathematically, the filtering operation performed by a feature map is a discrete convolution, from which convolutional neural networks take their name.
Convolutional neural networks have a good feature-extraction effect, and the features they extract can classify target objects very well.
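The weight-sharing described above can be made concrete with a tiny valid (no-padding) filtering operation in which every output unit reuses the same filter. The 5*5 input and 3*3 averaging filter here are illustrative sizes, not the 9*9 filters of the embodiment.

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid filtering: one shared filter k slides over x one pixel at a time."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3)) / 9.0          # the same 9 weights are shared by all output units
y = conv2d_valid(x, k)
```

Because the shared filter here is a local average and the input is a linear ramp, each output equals the centre pixel of its 3*3 patch, e.g. y[0, 0] = x[1, 1] = 6.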
Under the framework of traditional machine learning, the task of learning is to learn a classification model on the basis of sufficient training data, and then use the learned model to classify and predict test documents. However, a key problem faced by machine-learning algorithms in current web-mining research is that in some emerging fields large amounts of training data are very hard to obtain.
Traditional machine learning requires a large amount of labeled training data for each field, which consumes a great deal of manpower and material resources; without a large amount of labeled data, much related research and many applications cannot proceed. Furthermore, traditional machine learning assumes that the training data and the test data obey the same distribution. In many cases, however, this same-distribution assumption does not hold; it often happens, for example, that the training data become outdated. This usually requires us to re-label a large amount of training data to meet the needs of training, but labeling new data is very expensive and requires a great deal of manpower and material resources. Seen from another angle, if we have a large amount of training data under a different distribution, discarding these data entirely is very wasteful. How to use these data reasonably is the main problem that transfer learning solves: transfer learning is a new class of machine-learning methods that uses existing knowledge to solve problems in fields that are different but related.
Current work on transfer learning can be divided into three parts: instance-based transfer learning in a homogeneous space, feature-based transfer learning in a homogeneous space, and transfer learning in heterogeneous spaces. The transfer learning used in the present invention belongs to the second part, feature-based transfer learning in a homogeneous space. Since ImageNet and the target domain have shared parameters, only the parameters in the CNN system need to be migrated.
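The parameter migration can be sketched schematically, with the CNN parameters held in a plain dictionary. The layer names, shapes, and the 1000-class source output are illustrative assumptions; the point is that the shared feature-extraction parameters are copied while only the output layer is re-initialised for the five clarity grades.

```python
import numpy as np

rng = np.random.default_rng(2)

# Parameters as they might stand after pre-training on the source domain (ImageNet).
pretrained = {
    "conv1": rng.normal(size=(4, 9, 9)),    # first convolutional layer: 4 filters
    "conv3": rng.normal(size=(16, 9, 9)),   # second convolutional layer: 16 filters
    "fc5":   rng.normal(size=(50, 576)),    # first full connection: 576 -> 50
    "out":   rng.normal(size=(1000, 50)),   # source-domain output layer
}

def migrate(params, n_target_classes):
    """Copy the shared feature-extraction parameters; re-initialise only the output."""
    target = {k: v.copy() for k, v in params.items() if k != "out"}
    target["out"] = rng.normal(0, 0.01, (n_target_classes, 50))
    return target

target = migrate(pretrained, 5)    # 5 background-clarity grades
```

Fine-tuning then adjusts all of `target` on the small labeled sample set, as in the adjustment stage above.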
As shown in Fig. 1 and Fig. 2, the specific operation process of the invention is as follows:
1. Picture preprocessing: preprocess the ImageNet pictures and the labeled sample pictures, converting them all into 256*256-pixel grayscale pictures.
2. Pre-training stage: pre-train with the ImageNet picture library. Input the 256*256-pixel grayscale pictures, extract features with the CNN and classify, compute the loss function, and adjust the parameters in the CNN with stochastic gradient descent.
3. Adjustment stage: use the pictures with known background clarity values as input pictures, extract features with the CNN and classify to obtain their clarity values, compare these with the clarity values of the actual pictures, compute the loss function, and adjust the system parameters with stochastic gradient descent.
4. Actual detection stage: a picture whose clarity is unknown is likewise converted to 256*256 pixels and used as input; features are extracted with the CNN and classified, finally yielding its clarity label.
The detailed process of the convolutional neural network is as follows (the numbers of parameters can be adjusted according to actual conditions). The CNN in this algorithm has five layers in total, not counting the input and output layers, and every layer contains trainable parameters (connection weights). The input picture is 256*256 pixels in size.
1. Input layer to the Convolution 1 layer is a convolution process. Each of 4 filters of size 9*9 is multiplied element-wise with a 9*9 patch of pixels of the input picture and summed; that is, each 9*9 patch of the input picture is weighted and summed, and an offset is then added. The patches overlap during convolution: the filter is translated by one pixel after each computation. The convolution process formula is as follows:
x^l = Σ_{i=1}^{9} Σ_{j=1}^{9} w_{ij} · x_{ij}^{l-1} + b
wherein l denotes the layer number (l = 1 for this layer), x denotes the value of a pixel, i and j denote the row and column of the pixel within the patch (both ranging from 1 to 9 in this layer), w is the convolution parameter, and b is the offset.
For details see the second block diagram in Fig. 3, in which each square is a pixel. It can be seen that each 9*9 patch of pixels of the input layer is converted by the convolution process into one pixel of the Convolution 1 layer, and each displacement of the filter is one pixel. The size of the input layer is 256*256 pixels with 1 feature map; the size of the Convolution 1 layer is 248*248 with 4 feature maps.
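This first convolution can be sketched in NumPy, confirming the 256 → 248 output size (256 − 9 + 1 = 248) and the 4 feature maps. The filter values are random placeholders, not trained parameters.

```python
import numpy as np

def conv_layer(x, filters, b):
    """Stride-1 valid convolution: each 9x9 input patch -> one output pixel per filter."""
    n, kh, kw = filters.shape
    # Gather every kh x kw patch of x, then weight-and-sum each patch with each filter.
    patches = np.lib.stride_tricks.sliding_window_view(x, (kh, kw))  # (248, 248, 9, 9)
    out = np.tensordot(patches, filters, axes=([2, 3], [1, 2]))      # (248, 248, 4)
    return out.transpose(2, 0, 1) + b[:, None, None]                 # (4, 248, 248)

x = np.random.default_rng(3).normal(size=(256, 256))   # one 256x256 grayscale input map
filters = np.random.default_rng(4).normal(size=(4, 9, 9))
fmap = conv_layer(x, filters, np.zeros(4))
```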
2. The Convolution 1 layer to the Subsampling 2 layer is a downsampling process. The pixels of each 4*4 patch in the Convolution 1 layer are summed once and weighted, and an offset is added; the patches of the downsampling process do not overlap. The downsampling formula is as follows:
x^l = β · Σ_{i=1}^{4} Σ_{j=1}^{4} x_{ij}^{l-1} + b
wherein l denotes the layer number (l = 2 for this layer), x denotes the value of a pixel, i and j denote the row and column of the pixel within the patch (both ranging from 1 to 4 in this layer), β is the downsampling parameter, and b is the offset.
For details see the third block diagram in Fig. 3, in which each square is a pixel. It can be seen that each 4*4 patch of pixels of the Convolution 1 layer is converted by downsampling into one pixel of the Subsampling 2 layer. The size of the Convolution 1 layer is 248*248 with 4 feature maps; the size of the Subsampling 2 layer is 62*62 with 4 feature maps.
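The non-overlapping 4*4 downsampling can be sketched by reshaping a 248*248 feature map into 62*62 blocks of 4*4 pixels and applying the sum-weight-offset of the formula. The values of β and b are placeholders.

```python
import numpy as np

def downsample(x, beta, b, s=4):
    """Non-overlapping sxs downsampling: x^l = beta * sum(patch) + b."""
    h, w = x.shape
    blocks = x.reshape(h // s, s, w // s, s)   # split into (h/s, w/s) patches of sxs
    return beta * blocks.sum(axis=(1, 3)) + b

fmap = np.random.default_rng(5).normal(size=(248, 248))   # one Convolution 1 feature map
pooled = downsample(fmap, beta=0.1, b=0.5)
```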
3. The process from the Subsampling 2 layer to the Convolution 3 layer is the same as the first convolution; the size of the convolution filter is also 9*9, and convolution is carried out with 16 filters simultaneously. The size of the Subsampling 2 layer is 62*62 with 4 feature maps; the size of the Convolution 3 layer is 54*54 with 16 feature maps.
4. The process from the Convolution 3 layer to the Subsampling 4 layer is similar to the first downsampling; the difference is that each 9*9 patch of pixels of the Convolution 3 layer is summed and weighted, with an offset added, and the patches of the downsampling process do not overlap. The values of i and j both range from 1 to 9. The size of the Subsampling 4 layer is 6*6 with 16 feature maps.
5. The 16 feature maps of size 6*6 contain 16*6*6 = 576 feature points in total and are transformed by two full connections. A full connection means that each output unit is obtained by a weighted sum over all input units. The 576 units of the Subsampling 4 layer are converted by the first full connection into the 50 units of layer 5, and the 50 units of layer 5 are converted by the second full connection into the final 5 grades. The full-connection formula is as follows:
x^l = Σ_k w_k^l · x_k^{l-1}
wherein l denotes the layer number (l = 5 for the first full connection, l = 6 for the second), x denotes the value of a unit, k denotes the unit index (k = 1, …, 576 for the first full connection and k = 1, …, 50 for the second), and w_k^l is the weight value.
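The two full connections, together with the complete shape chain of the five layers described above, can be checked in a few lines. The weight values are random placeholders; the layer sizes follow the description.

```python
import numpy as np

rng = np.random.default_rng(6)

def fully_connect(x, W):
    """Each output unit is a weighted sum of all input units: out_j = sum_k W[j, k] * x_k."""
    return W @ x

features = rng.normal(size=16 * 6 * 6)   # Subsampling 4: 16 maps of 6*6 -> 576 units
W5 = rng.normal(size=(50, 576))          # first full connection: 576 -> 50
W6 = rng.normal(size=(5, 50))            # second full connection: 50 -> 5 grades
grades = fully_connect(fully_connect(features, W5), W6)

# Shape chain of the whole network: 256 -> 248 -> 62 -> 54 -> 6 (per feature map)
c1 = 256 - 9 + 1        # 9x9 convolution, stride 1
s2 = c1 // 4            # non-overlapping 4x4 downsampling
c3 = s2 - 9 + 1         # 9x9 convolution, stride 1
s4 = c3 // 9            # non-overlapping 9x9 downsampling
sizes = [256, c1, s2, c3, s4]
```

The chain confirms the sizes stated in the description: 248, 62, 54, and 6, with 16 * 6 * 6 = 576 units entering the full connections.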
The above embodiments merely illustrate the technical idea of the invention and cannot limit its scope of protection; any change made on the basis of the technical solution in accordance with the technical idea proposed by the present invention falls within the scope of protection of the present invention.
Claims (5)
1. A picture background clarity detection method based on deep learning, characterized in that it comprises the following steps:
Step 1: converting the labeled pictures in the ImageNet picture library and the sample pictures that are not in ImageNet but have known background clarity values into grayscale pictures of 256*256 pixels;
Step 2: pre-training on the converted grayscale pictures of ImageNet: extracting the features of all grayscale pictures with a convolutional neural network and classifying them, computing the loss function, and adjusting the convolution parameters with stochastic gradient descent until the loss falls within a predetermined range, to obtain initially adjusted convolution parameters;
Step 3: for the converted grayscale sample pictures that are not in ImageNet but have known background clarity values, extracting features with the convolutional neural network based on the initially adjusted convolution parameters of Step 2 and classifying them to obtain their clarity values, comparing these with the actual clarity values, computing the loss function, and continuing to adjust the convolution parameters with stochastic gradient descent until the loss falls within the predetermined range, to obtain finally adjusted convolution parameters;
Step 4: converting the picture whose clarity is to be detected into a 256*256-pixel grayscale picture, extracting features with the convolutional neural network based on the finally adjusted convolution parameters of Step 3, and classifying to obtain the clarity value of the picture to be detected.
2. The picture background clarity detection method based on deep learning according to claim 1, characterized in that the convolutional neural network comprises, in order from input to output, an input layer, a first convolutional layer, a first downsampling layer, a second convolutional layer, a second downsampling layer, a fully connected layer, and an output layer, and, excluding the input and output layers, the first convolutional layer, the first downsampling layer, the second convolutional layer, the second downsampling layer, and the fully connected layer occupy layers 1, 2, 3, 4, and 5 of the convolutional neural network, respectively.
3. The picture background clarity detection method based on deep learning according to claim 2, characterized in that the convolution process formula of the first convolutional layer is:
x^l = Σ_{i=1}^{9} Σ_{j=1}^{9} w_{ij} · x_{ij}^{l-1} + b
wherein l = 1, x^l denotes the value of a pixel output after convolution by the first convolutional layer, x_{ij}^{l-1} denotes the value of the pixel in row i, column j of the corresponding window of the input layer, w is the convolution parameter, and b is the offset.
4. The picture background clarity detection method based on deep learning according to claim 2, characterized in that the downsampling process formula of the first downsampling layer is:
x^l = β · Σ_{i=1}^{4} Σ_{j=1}^{4} x_{ij}^{l-1} + b
wherein l = 2, x^l denotes the value of a pixel output after sampling by the first downsampling layer, x_{ij}^{l-1} denotes the value of the pixel in row i, column j of the corresponding window of the first convolutional layer, β is the downsampling parameter, and b is the offset.
5. The picture background clarity detection method based on deep learning according to claim 2, characterized in that the fully connected layer includes two full-connection processes, whose formula is:
x^l = Σ_k w_k^l · x_k^{l-1}
wherein l = 5 for the first full connection and l = 6 for the second, x^l denotes the value of a unit output after full connection, k denotes the unit index (k = 1, …, 576 for the first full connection and k = 1, …, 50 for the second), and w_k^l is the weight value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610155947.9A CN105825511B (en) | 2016-03-18 | 2016-03-18 | A kind of picture background clarity detection method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610155947.9A CN105825511B (en) | 2016-03-18 | 2016-03-18 | A kind of picture background clarity detection method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105825511A CN105825511A (en) | 2016-08-03 |
CN105825511B true CN105825511B (en) | 2018-11-02 |
Family
ID=56523997
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610155947.9A Active CN105825511B (en) | 2016-03-18 | 2016-03-18 | A kind of picture background clarity detection method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105825511B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106372402B (en) * | 2016-08-30 | 2019-04-30 | 中国石油大学(华东) | The parallel method of fuzzy region convolutional neural networks under a kind of big data environment |
CN106372656B (en) * | 2016-08-30 | 2019-05-10 | 同观科技(深圳)有限公司 | Obtain method, image-recognizing method and the device of the disposable learning model of depth |
CN106504233B (en) * | 2016-10-18 | 2019-04-09 | 国网山东省电力公司电力科学研究院 | Unmanned plane inspection image electric power widget recognition methods and system based on Faster R-CNN |
CN106780448B (en) * | 2016-12-05 | 2018-07-17 | 清华大学 | A kind of pernicious categorizing system of ultrasonic Benign Thyroid Nodules based on transfer learning and Fusion Features |
CN106777986B (en) * | 2016-12-19 | 2019-05-21 | 南京邮电大学 | Based on the ligand molecular fingerprint generation method of depth Hash in drug screening |
CN108510071B (en) * | 2017-05-10 | 2020-01-10 | 腾讯科技(深圳)有限公司 | Data feature extraction method and device and computer readable storage medium |
WO2018236446A2 (en) * | 2017-05-24 | 2018-12-27 | Hrl Laboratories, Llc | Transfer learning of convolutional neural networks from visible color (rbg)to infrared (ir) domain |
CN107463937A (en) * | 2017-06-20 | 2017-12-12 | 大连交通大学 | A kind of tomato pest and disease damage automatic testing method based on transfer learning |
CN107239803A (en) * | 2017-07-21 | 2017-10-10 | 国家海洋局第海洋研究所 | Utilize the sediment automatic classification method of deep learning neutral net |
CN107506740B (en) * | 2017-09-04 | 2020-03-17 | 北京航空航天大学 | Human body behavior identification method based on three-dimensional convolutional neural network and transfer learning model |
CN108021936A (en) * | 2017-11-28 | 2018-05-11 | 天津大学 | A kind of tumor of breast sorting algorithm based on convolutional neural networks VGG16 |
CN108363961A (en) * | 2018-01-24 | 2018-08-03 | 东南大学 | Bridge pad disease recognition method based on transfer learning between convolutional neural networks |
CN108647588A (en) * | 2018-04-24 | 2018-10-12 | 广州绿怡信息科技有限公司 | Goods categories recognition methods, device, computer equipment and storage medium |
CN108875794B (en) * | 2018-05-25 | 2020-12-04 | 中国人民解放军国防科技大学 | Image visibility detection method based on transfer learning |
CN109003601A (en) * | 2018-08-31 | 2018-12-14 | 北京工商大学 | A kind of across language end-to-end speech recognition methods for low-resource Tujia language |
CN109460699B (en) * | 2018-09-03 | 2020-09-25 | 厦门瑞为信息技术有限公司 | Driver safety belt wearing identification method based on deep learning |
CN109410169B (en) * | 2018-09-11 | 2020-06-05 | 广东智媒云图科技股份有限公司 | Image background interference degree identification method and device |
CN109472284A (en) * | 2018-09-18 | 2019-03-15 | 浙江大学 | A kind of battery core defect classification method based on zero sample learning of unbiased insertion |
CN109740495A (en) * | 2018-12-28 | 2019-05-10 | 成都思晗科技股份有限公司 | Outdoor weather image classification method based on transfer learning technology |
CN111191054B (en) * | 2019-12-18 | 2024-02-13 | 腾讯科技(深圳)有限公司 | Media data recommendation method and device |
CN111259957A (en) * | 2020-01-15 | 2020-06-09 | 上海眼控科技股份有限公司 | Visibility monitoring and model training method, device, terminal and medium based on deep learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103544705A (en) * | 2013-10-25 | 2014-01-29 | 华南理工大学 | Image quality testing method based on deep convolutional neural network |
CN105160678A (en) * | 2015-09-02 | 2015-12-16 | 山东大学 | Convolutional-neural-network-based reference-free three-dimensional image quality evaluation method |
CN105205504A (en) * | 2015-10-04 | 2015-12-30 | 北京航空航天大学 | Image interest region quality evaluation index learning method based on data driving |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9953425B2 (en) * | 2014-07-30 | 2018-04-24 | Adobe Systems Incorporated | Learning image categorization using related attributes |
2016
- 2016-03-18 Application CN201610155947.9A filed in China (CN); granted as CN105825511B, status Active
Non-Patent Citations (2)
Title |
---|
Convolutional Neural Networks for No-Reference Image Quality Assessment; Le Kang et al.; CVPR 2014; 2014-06-28; pp. 1-8 *
A camera coverage quality evaluation algorithm based on deep convolutional neural networks; Zhu Tao et al.; Journal of Jiangxi Normal University (Natural Science Edition); 2015-05-15; Vol. 39, No. 3, pp. 309-314 *
Also Published As
Publication number | Publication date |
---|---|
CN105825511A (en) | 2016-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105825511B (en) | Picture background clarity detection method based on deep learning | |
CN106295601B (en) | Improved seat belt detection method | |
CN108416307A (en) | Aerial image pavement crack detection method, device and equipment | |
CN108830188A (en) | Vehicle detection method based on deep learning | |
WO2019031503A1 (en) | Tire image recognition method and tire image recognition device | |
CN109584251A (en) | Tongue image segmentation method based on single-target region segmentation | |
CN106529499A (en) | Gait identification method based on fused Fourier descriptor and gait energy image features | |
CN108269250A (en) | Method and apparatus for assessing face image quality based on convolutional neural networks | |
CN112036447B (en) | Zero-shot object detection system and learnable-semantics and fixed-semantics fusion method | |
CN110717553A (en) | Traffic contraband identification method based on self-attenuating weights and multiple local constraints | |
CN106156765A (en) | Safety detection method based on computer vision | |
CN104700078B (en) | Robot scene recognition method based on scale-invariant feature extreme learning machine | |
CN109684922A (en) | Multi-model recognition method for finished dishes based on convolutional neural networks | |
CN109034184A (en) | Grading ring detection and recognition method based on deep learning | |
CN110060273A (en) | Remote sensing image landslide mapping method based on deep neural network | |
CN110929746A (en) | Electronic file title positioning, extraction and classification method based on deep neural network | |
CN106408009B (en) | Neighborhood-weighted-average hyperspectral image classification method based on deep belief network | |
CN108416270A (en) | Traffic sign recognition method based on multi-attribute joint features | |
CN115457006B (en) | Unmanned aerial vehicle inspection defect classification method and device based on similarity-consistency self-distillation | |
CN106874929A (en) | Pearl classification method based on deep learning | |
CN112258490A (en) | Intelligent damage detection method for low-emissivity coatings based on optical and infrared image fusion | |
CN110569780A (en) | High-precision face recognition method based on deep transfer learning | |
CN109598681A (en) | No-reference quality evaluation method for restored symmetrical Thangka images | |
CN114492634B (en) | Fine-grained equipment picture classification and identification method and system | |
CN112257741A (en) | Method for detecting generative adversarial fake pictures based on complex-valued neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||