CN105825511A - Image background definition detection method based on deep learning - Google Patents

Image background definition detection method based on deep learning

Info

Publication number
CN105825511A
CN105825511A (application number CN201610155947.9A)
Authority
CN
China
Prior art keywords
picture
pixel
layer
detection method
down sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610155947.9A
Other languages
Chinese (zh)
Other versions
CN105825511B (en)
Inventor
胡海峰
韩硕
吴建盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201610155947.9A
Publication of CN105825511A
Application granted
Publication of CN105825511B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image background clarity detection method based on deep learning. A convolutional neural network (CNN) is used for feature extraction, and the features it extracts can effectively classify images by background clarity. A transfer-learning approach is adopted: the CNN is pre-trained on the ImageNet image set, which contains a large number of labeled images, compensating for the scarcity of sample images with known background clarity values and yielding better initial CNN parameters. A small number of sample images with known background clarity values are then used to fine-tune these parameters so that the CNN adapts to the image set to be detected. The fine-tuned CNN parameters are finally used to perform background clarity detection on the images to be detected. The method achieves high detection accuracy.

Description

An image background clarity detection method based on deep learning
Technical field
The present invention relates to an image background clarity detection method based on deep learning. It mainly concerns the application of deep learning within machine learning, and belongs to the technical field of artificial-intelligence image recognition.
Background technology
The concept of deep learning originates from research on artificial neural networks; a multilayer perceptron with many hidden layers is one kind of deep learning structure. Deep learning combines low-level features to form more abstract high-level representations (attribute categories or features) in order to discover distributed feature representations of data. Back-propagation (BP) is the typical algorithm for training multilayer networks, but in practice it works well only for networks with a few layers. The local minima that commonly exist in the non-convex objective function of a deep structure (one involving multiple layers of nonlinear processing units) are the main source of training difficulty.
Most current classification and regression methods are shallow-structure algorithms. With limited samples and computing units, their ability to represent complex functions is limited, and their generalization ability on complex classification problems is consequently restricted. Deep learning, by learning a deep nonlinear network structure, can approximate complex functions, characterize distributed representations of the input data, and demonstrates a powerful ability to learn the essential features of a dataset from a small number of samples. Deep learning is a feature-learning method: it transforms raw data, through a stack of simple nonlinear models, into higher-level and more abstract representations, and with enough such transformations even very complex functions can be learned. A core aspect of deep learning is that the features of each layer are not designed by hand but are learned from data through a general-purpose learning procedure.
The convolutional neural network (CNN) proposed by LeCun et al. was the first truly multilayer structure learning algorithm, and it is the core technique used in the present invention. It exploits spatial correlation to reduce the number of parameters and thereby improve BP training performance. CNNs have achieved good results in image recognition, for example on handwritten-character recognition. However, the network structure strongly affects both the accuracy and the efficiency of recognition. To improve recognition performance, the invention designs and implements a new CNN structure that reuses smaller convolution kernels, effectively reducing the number of training parameters while improving recognition accuracy. Comparative experiments against algorithms that have obtained strong results in the world-class ILSVRC challenge verify the effectiveness of this structure.
Training a CNN requires a large number of labeled samples; if too few labeled samples are available, the system tends to overfit. Jeff Donahue et al. constructed the DeCAF framework, whose idea is to first pre-train on an image set containing a large number of labeled samples to adjust the parameters of the CNN system, and then use transfer learning to migrate those parameters to the image set to be trained, so that only a small number of labeled samples are needed for accurate classification.
Many kinds of image recognition currently use deep learning, for example handwritten characters and license-plate numbers, but CNN-based applications are far from fully developed. In particular, artificial intelligence still lacks a good method for recognizing the visibility of the environment in an image, i.e. the clarity of the image background, or equivalently the blur level of objects in the background. Most current image-recognition processes identify the objects in an image and ignore useful information in the background environment. The present invention addresses exactly this problem. Detecting the clarity of an image background has great practical value: for example, it can be used to recognize haze levels from images, so its application prospects are broad.
Summary of the invention
The technical problem to be solved by the present invention is to provide an image background clarity detection method based on deep learning that detects the clarity of the background in an image, i.e. the blur level of objects in the background, and extracts useful information from the background environment as a reference for image recognition.
The present invention adopts the following technical solution to solve the above technical problem:
An image background clarity detection method based on deep learning, comprising the following steps:
Step 1: convert both the labeled images in the ImageNet library and the sample images that are not in ImageNet but have known background clarity values into 256*256-pixel grayscale images;
Step 2: pre-train on the converted grayscale images from ImageNet: use the convolutional neural network to extract features from all grayscale images and classify them, compute the loss function, and adjust the convolution parameters by stochastic gradient descent until the loss falls within a preset range, obtaining initially adjusted convolution parameters;
Step 3: for the converted grayscale images of the sample images that are not in ImageNet but have known background clarity values, start from the initially adjusted convolution parameters of step 2, use the convolutional neural network to extract features and classify, obtain predicted clarity values, compare them with the actual clarity values, compute the loss function, and continue adjusting the convolution parameters by stochastic gradient descent until the loss falls within a preset range, obtaining finally adjusted convolution parameters;
Step 4: convert the image whose clarity is to be detected into a 256*256-pixel grayscale image and, using the finally adjusted convolution parameters from step 3, use the convolutional neural network to extract features and classify, obtaining the clarity value of the image to be detected.
In a preferred version of the present invention, the convolutional neural network comprises, from input to output, an input layer, a first convolutional layer, a first downsampling layer, a second convolutional layer, a second downsampling layer, a fully connected layer, and an output layer. Excluding the input and output layers, the first convolutional layer, first downsampling layer, second convolutional layer, second downsampling layer, and fully connected layer are layers 1, 2, 3, 4, and 5 of the network, respectively.
In a preferred version of the present invention, the convolution formula of the first convolutional layer is x^l = f( Σ_{i=1}^{9} Σ_{j=1}^{9} ( x_{ij}^{l-1} × w_{ij}^{l} ) + b ), where l = 1, x^l is the value of an output pixel after the first convolution, x_{ij}^{l-1} is the value of the pixel in row i, column j of the input layer, w is the convolution parameter, and b is the bias.
In a preferred version of the present invention, the downsampling formula of the first downsampling layer is x^l = f( β^l Σ_{i=1}^{4} Σ_{j=1}^{4} x_{ij}^{l-1} + b^l ), where l = 2, x^l is the value of an output pixel after the first downsampling, x_{ij}^{l-1} is the value of the pixel in row i, column j of the first convolutional layer, β is the downsampling parameter, and b is the bias.
In a preferred version of the present invention, the fully connected layer comprises two full-connection operations, with the formula x^l = f( Σ_k w_k^l x_k^{l-1} ), where l = 5 for the first full connection and l = 6 for the second; x^l is the output value after the full connection; k is the unit index (k = 1, …, 576 for the first full connection and k = 1, …, 50 for the second); and w_k^l is the weight.
Compared with the prior art, the above technical solution of the present invention has the following technical effects:
1. In the case of insufficient sample images, the image background clarity detection method of the present invention uses the idea of transfer learning: it first pre-trains on the ImageNet image set, which contains a large number of labeled images, to obtain CNN parameters, and then further fine-tunes those parameters to adapt them to the image set to be detected, so that detection accuracy on that set is higher.
2. The image background clarity detection method of the present invention solves the problem of detecting background clarity in images, which is of great practical value in applications such as identifying haze levels and air quality.
Brief description of the drawings
Fig. 1 is the overall architecture diagram of the image background clarity detection method of the present invention.
Fig. 2 is the flow chart of the image background clarity detection method of the present invention.
Fig. 3 is the internal structure diagram of the convolutional neural network of the present invention.
Detailed description of the invention
Embodiments of the present invention are described in detail below, with examples shown in the drawings. The embodiments described with reference to the drawings are exemplary; they serve only to explain the present invention and are not to be construed as limiting the claims.
The pixel dimensions of a given image are not fixed, whereas the input to a convolutional neural network must have a fixed size, so images must first be preprocessed into a common size. Because the images in the ImageNet training set are all converted to 256*256 pixels, the input size here is likewise fixed at 256*256 pixels; any image of a different size is first rescaled to 256*256 pixels. Since the clarity of an image has little correlation with its color, all images are first converted to grayscale. For convenient analysis and testing, the present invention divides background clarity into five grades according to the clarity value: excellent, good, medium, poor, and very poor.
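As a concrete illustration, the preprocessing step might be sketched as follows. The patent only states that images are converted to 256*256 grayscale; the BT.601 luma weights and nearest-neighbour resampling used here are assumptions, not part of the patent.

```python
import numpy as np

def preprocess(rgb, size=256):
    """Convert an H*W*3 RGB array to a size*size grayscale image.

    Nearest-neighbour resampling is an assumed choice; the patent
    does not specify a resampling method.
    """
    # ITU-R BT.601 luma weights for the grayscale conversion
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    h, w = gray.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return gray[rows][:, cols]

img = np.random.rand(300, 400, 3)        # an image of arbitrary size
print(preprocess(img).shape)             # (256, 256)
```

The same mapping works for both downscaling and upscaling, so images smaller than 256*256 are handled too.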
The invention mainly comprises three stages: pre-training, fine-tuning, and actual detection. The pre-training stage trains on the labeled ImageNet image set; its purpose is to obtain initial CNN parameters. The fine-tuning stage uses a small number of sample images with known background clarity values to adjust the CNN parameters so that they adapt to the image set to be detected. The adjusted parameters can then be used for detection on images to be detected.
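The three stages can be sketched with a toy model. This is a hypothetical stand-in: a plain softmax classifier trained by stochastic gradient descent replaces the CNN, and all data here are random placeholders, but the pretrain / fine-tune / detect flow mirrors the one described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_train(W, X, y, epochs=50, lr=0.1):
    """Minimal softmax-regression training loop (stand-in for the CNN)."""
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)           # softmax probabilities
        onehot = np.eye(W.shape[1])[y]
        W -= lr * X.T @ (p - onehot) / len(X)       # cross-entropy gradient step
    return W

d, classes = 8, 5                                    # 5 clarity grades
# Stage 1: pre-training on a large labeled set (ImageNet in the patent)
X_pre, y_pre = rng.normal(size=(200, d)), rng.integers(0, classes, 200)
W = sgd_train(np.zeros((d, classes)), X_pre, y_pre)
# Stage 2: fine-tuning on a few samples with known clarity grades,
# starting from the pre-trained parameters rather than from scratch
X_fine, y_fine = rng.normal(size=(20, d)), rng.integers(0, classes, 20)
W = sgd_train(W, X_fine, y_fine, epochs=20, lr=0.05)
# Stage 3: detection with the fine-tuned parameters
pred = int(np.argmax(rng.normal(size=(1, d)) @ W))
print(0 <= pred < classes)  # True
```

The key point of the transfer step is only that stage 2 starts from the parameters produced by stage 1 instead of a fresh initialization.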
The convolutional neural network (CNN) is the core technique of the present invention. A CNN is a feedforward neural network whose artificial neurons respond to surrounding units within a partial coverage area; it performs outstandingly on large-scale image processing. It comprises alternating convolutional layers and downsampling (pooling) layers. The first few stages are composed of convolutional and downsampling layers. The units of a convolutional layer are organized into feature maps; each unit in a feature map is connected to a local patch of the previous layer's feature maps through a set of weights called a filter, and this locally weighted sum is then passed through a nonlinear function such as a ReLU. All units in a feature map share the same filter, while feature maps in different layers use different filters. This structure is used for two reasons. First, in array data such as images, nearby values are often highly correlated and form distinctive local features that are easy to detect. Second, the local statistics of an image are largely invariant to location: a feature that appears in one part of an image can appear elsewhere, so units at different locations can share weights and detect the same pattern. Mathematically, the filtering operation performed by a feature map is a discrete convolution, from which the convolutional neural network takes its name.
A convolutional neural network has good feature-extraction capability, and the features it extracts can classify target objects well.
Under the traditional machine-learning framework, the learning task is to learn a classification model on the basis of sufficient training data and then use that model to classify and predict on test documents. However, machine-learning algorithms in current web mining research face a key problem: labeled training data in some emerging fields are very scarce.
Traditional machine learning requires a large amount of labeled training data for each domain, which consumes considerable manpower and material resources; without large amounts of labeled data, much learning-related research and application cannot proceed. Furthermore, traditional machine learning assumes that training and test data obey the same distribution, but in many cases this assumption does not hold; training data may, for example, become outdated. This would require re-labeling large amounts of training data to meet the needs of training, and labeling new data is expensive in manpower and material resources. From another angle, completely discarding large amounts of existing training data under a different distribution is very wasteful. How to use such data reasonably is exactly the problem that transfer learning solves. Transfer learning is a new machine-learning method that uses existing knowledge to solve problems in different but related domains.
Current work on transfer learning can be divided into three parts: instance-based transfer learning in a homogeneous space, feature-based transfer learning in a homogeneous space, and transfer learning in heterogeneous spaces. The transfer learning used in the present invention belongs to the second category, feature-based transfer learning in a homogeneous space. Since ImageNet and the target domain share parameters, it suffices to migrate the parameters of the CNN system.
As shown in Fig. 1 and Fig. 2, the specific operation process of the present invention is as follows:
1. Image preprocessing: convert the images in ImageNet and the labeled sample images into 256*256-pixel grayscale images.
2. Pre-training stage: pre-train on the ImageNet library. Input the 256*256-pixel grayscale images, extract features with the CNN and classify, compute the loss function, and adjust the CNN parameters by stochastic gradient descent.
3. Fine-tuning stage: use the images with known background clarity values as input, extract features with the CNN and classify to obtain predicted clarity values, compare them with the actual clarity values, compute the loss function, and adjust the system parameters by stochastic gradient descent.
4. Actual detection stage: convert an image of unknown clarity into a 256*256-pixel image, feed it as input, extract features with the CNN and classify, and finally obtain its clarity label.
Detailed process of the convolutional neural network (the parameter values can be adjusted according to the practical situation):
The CNN in this algorithm has five layers in total, not counting the input and output layers, and every layer contains trainable parameters (connection weights). The input image is 256*256 pixels.
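The layer dimensions stated in steps 1 to 5 below follow from two rules: a valid 9*9 convolution shrinks each side by 8, and non-overlapping k*k pooling divides each side by k. A short computation confirms the sizes:

```python
# Shape walk-through of the five-layer network
# (valid convolution: out = in - k + 1; non-overlapping pooling: out = in // k)
size, maps = 256, 1
size, maps = size - 9 + 1, 4    # Convolution1: 9*9 filters -> 248*248, 4 maps
size //= 4                      # Subsampling2: 4*4 pooling  -> 62*62,  4 maps
size, maps = size - 9 + 1, 16   # Convolution3: 9*9 filters -> 54*54, 16 maps
size //= 9                      # Subsampling4: 9*9 pooling  -> 6*6,  16 maps
flat = maps * size * size       # 16*6*6 = 576 units into the full connections
print(size, maps, flat)         # 6 16 576
```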
1. From the input layer to the Convolution1 layer is a convolution. Each 9*9 patch of pixels in the input image is multiplied element-wise with one of 4 filters of size 9*9 and summed; that is, a weighted sum of each 9*9 patch is taken and a bias is added. Adjacent patches overlap: after each computation the filter is shifted by one pixel. The convolution formula is:
x^l = f( Σ_{i=1}^{9} Σ_{j=1}^{9} ( x_{ij}^{l-1} × w_{ij}^{l} ) + b ),
where l is the layer number (l = 1 for this layer), x is the value of a pixel, i and j are the row and column indices within the patch (both ranging from 1 to 9 in this layer), w is the convolution parameter, and b is the bias.
The details are shown in the second block diagram of Fig. 3, in which each square is a pixel: every 9*9 patch of the input layer is converted by the convolution into one pixel of the Convolution1 layer, and each shift of the filter is one pixel. The input layer is 256*256 pixels with 1 feature map; the Convolution1 layer is 248*248 with 4 feature maps.
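The convolution step above can be sketched in plain numpy. The nonlinearity f is not named in the patent, so tanh here is an assumption, as are the random filter values; a small input is used to keep the loops fast.

```python
import numpy as np

def conv_layer(img, filters, bias, f=np.tanh):
    """Valid 2-D convolution with stride 1, one feature map per filter.

    Mirrors the formula x^l = f(sum_ij x_ij * w_ij + b); the activation
    f and filter initialization are assumptions.
    """
    k = filters.shape[1]                       # filter size (9 in the patent)
    h, w = img.shape[0] - k + 1, img.shape[1] - k + 1
    out = np.empty((len(filters), h, w))
    for m, filt in enumerate(filters):         # one output map per filter
        for r in range(h):
            for c in range(w):
                out[m, r, c] = np.sum(img[r:r+k, c:c+k] * filt) + bias
    return f(out)

img = np.random.rand(32, 32)                   # small stand-in for the 256*256 input
maps = conv_layer(img, np.random.rand(4, 9, 9) * 0.01, 0.0)
print(maps.shape)  # (4, 24, 24)
```

With a 256*256 input the same code yields 4 maps of 248*248, matching the text.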
2. From the Convolution1 layer to the Subsampling2 layer is a downsampling. The pixels of each 4*4 patch in the layer are summed and weighted, and a bias is added; the downsampling patches do not overlap. The downsampling formula is:
x^l = f( β^l Σ_{i=1}^{4} Σ_{j=1}^{4} x_{ij}^{l-1} + b^l ),
where l is the layer number (l = 2 for this layer), x is the value of a pixel, i and j are the row and column indices within the patch (both ranging from 1 to 4 in this layer), β is the downsampling parameter, and b is the bias.
The details are shown in the third block diagram of Fig. 3, in which each square is a pixel: every 4*4 patch in the Convolution1 layer is converted by the downsampling into 1 pixel of the Subsampling2 layer. The Convolution1 layer is 248*248 with 4 feature maps; the Subsampling2 layer is 62*62 with 4 feature maps.
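The downsampling step can likewise be sketched in numpy; the tanh activation and the β and b values are assumptions standing in for trained parameters.

```python
import numpy as np

def downsample_layer(maps, beta, bias, k=4, f=np.tanh):
    """Non-overlapping k*k sum pooling with scale beta and bias,
    mirroring x^l = f(beta * sum_ij x_ij + b)."""
    n, h, w = maps.shape
    h2, w2 = h // k, w // k
    # view each map as (h2, k, w2, k) blocks and sum within each block
    blocks = maps[:, :h2 * k, :w2 * k].reshape(n, h2, k, w2, k)
    return f(beta * blocks.sum(axis=(2, 4)) + bias)

maps = np.random.rand(4, 248, 248)             # the 4 Convolution1 feature maps
pooled = downsample_layer(maps, beta=0.1, bias=0.0)
print(pooled.shape)  # (4, 62, 62)
```

Because the patches do not overlap, the 248*248 maps shrink by exactly a factor of 4 per side.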
3. The process from the Subsampling2 layer to the Convolution3 layer is the same as the first convolution, with filters again of size 9*9, except that 16 filters are convolved simultaneously. The Subsampling2 layer is 62*62 with 4 feature maps; the Convolution3 layer is 54*54 with 16 feature maps.
4. The process from the Convolution3 layer to the Subsampling4 layer is similar to the first downsampling; the difference is that each 9*9 patch in the Convolution3 layer is summed and weighted, with a bias added, and the downsampling patches do not overlap. The values of i and j both range from 1 to 9. The Subsampling4 layer is 6*6 with 16 feature maps.
5. The 16 feature maps of size 6*6 contain 16*6*6 = 576 feature points in total, which are transformed through two full connections. A full connection means that each output unit is obtained as a weighted sum of all input units. The 16*6*6 units of the Subsampling4 layer are converted by the first full connection into the 50 units of layer 5, and the 50 units of layer 5 are converted by the second full connection into the final 5 grades. The full-connection formula is:
x^l = f( Σ_k w_k^l x_k^{l-1} ),
where l is the layer number (l = 5 for the first full connection, l = 6 for the second), x is the value of a unit, k is the unit index (k = 1, …, 576 for the first full connection and k = 1, …, 50 for the second), and w_k^l is the weight.
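The two full connections above can be sketched as two matrix-vector products. The random weights and the identity activation on the final grade scores are assumptions; only the 576 -> 50 -> 5 shapes come from the patent.

```python
import numpy as np

def full_connection(x, W, f=np.tanh):
    """One full connection: each output unit is a weighted sum of all
    input units, x^l = f(sum_k w_k * x_k)."""
    return f(W @ x)

rng = np.random.default_rng(1)
features = rng.normal(size=576)            # flattened 16*6*6 Subsampling4 output
hidden = full_connection(features, rng.normal(size=(50, 576)) * 0.05)
scores = full_connection(hidden, rng.normal(size=(5, 50)) * 0.05, f=lambda z: z)
grade = int(np.argmax(scores))             # index of the predicted clarity grade
print(len(hidden), len(scores))  # 50 5
```

The argmax over the 5 outputs selects one of the five clarity grades (excellent, good, medium, poor, very poor).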
The above embodiments only illustrate the technical idea of the present invention and cannot be used to limit its scope of protection. Any change made on the basis of the technical solution, in accordance with the technical idea proposed by the present invention, falls within the scope of protection of the present invention.

Claims (5)

1. An image background clarity detection method based on deep learning, characterized by comprising the following steps:
Step 1: convert both the labeled images in the ImageNet library and the sample images that are not in ImageNet but have known background clarity values into 256*256-pixel grayscale images;
Step 2: pre-train on the converted grayscale images from ImageNet: use the convolutional neural network to extract features from all grayscale images and classify them, compute the loss function, and adjust the convolution parameters by stochastic gradient descent until the loss falls within a preset range, obtaining initially adjusted convolution parameters;
Step 3: for the converted grayscale images of the sample images that are not in ImageNet but have known background clarity values, start from the initially adjusted convolution parameters of step 2, use the convolutional neural network to extract features and classify, obtain predicted clarity values, compare them with the actual clarity values, compute the loss function, and continue adjusting the convolution parameters by stochastic gradient descent until the loss falls within a preset range, obtaining finally adjusted convolution parameters;
Step 4: convert the image whose clarity is to be detected into a 256*256-pixel grayscale image and, using the finally adjusted convolution parameters from step 3, use the convolutional neural network to extract features and classify, obtaining the clarity value of the image to be detected.
2. The image background clarity detection method based on deep learning according to claim 1, characterized in that the convolutional neural network comprises, from input to output, an input layer, a first convolutional layer, a first downsampling layer, a second convolutional layer, a second downsampling layer, a fully connected layer, and an output layer; excluding the input and output layers, the first convolutional layer, first downsampling layer, second convolutional layer, second downsampling layer, and fully connected layer are layers 1, 2, 3, 4, and 5 of the network, respectively.
3. The image background clarity detection method based on deep learning according to claim 2, characterized in that the convolution formula of the first convolutional layer is:
x^l = f( Σ_{i=1}^{9} Σ_{j=1}^{9} ( x_{ij}^{l-1} × w_{ij}^{l} ) + b ),
where l = 1, x^l is the value of an output pixel after the first convolution, x_{ij}^{l-1} is the value of the pixel in row i, column j of the input layer, w is the convolution parameter, and b is the bias.
4. The image background clarity detection method based on deep learning according to claim 2, characterized in that the downsampling formula of the first downsampling layer is:
x^l = f( β^l Σ_{i=1}^{4} Σ_{j=1}^{4} x_{ij}^{l-1} + b^l ),
where l = 2, x^l is the value of an output pixel after the first downsampling, x_{ij}^{l-1} is the value of the pixel in row i, column j of the first convolutional layer, β is the downsampling parameter, and b is the bias.
5. The image background clarity detection method based on deep learning according to claim 2, characterized in that the fully connected layer comprises two full-connection operations, with the formula:
x^l = f( Σ_k w_k^l x_k^{l-1} ),
where l = 5 for the first full connection and l = 6 for the second; x^l is the output value after the full connection; k is the unit index (k = 1, …, 576 for the first full connection and k = 1, …, 50 for the second); and w_k^l is the weight.
CN201610155947.9A 2016-03-18 2016-03-18 Image background clarity detection method based on deep learning Active CN105825511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610155947.9A CN105825511B (en) 2016-03-18 2016-03-18 Image background clarity detection method based on deep learning


Publications (2)

Publication Number Publication Date
CN105825511A true CN105825511A (en) 2016-08-03
CN105825511B CN105825511B (en) 2018-11-02

Family

ID=56523997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610155947.9A Active CN105825511B (en) 2016-03-18 2016-03-18 Image background clarity detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN105825511B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544705A (en) * 2013-10-25 2014-01-29 华南理工大学 Image quality testing method based on deep convolutional neural network
CN105160678A (en) * 2015-09-02 2015-12-16 山东大学 Convolutional-neural-network-based no-reference stereoscopic image quality evaluation method
CN105205504A (en) * 2015-10-04 2015-12-30 北京航空航天大学 Data-driven learning method for image region-of-interest quality evaluation metrics
US20160034788A1 (en) * 2014-07-30 2016-02-04 Adobe Systems Incorporated Learning image categorization using related attributes

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544705A (en) * 2013-10-25 2014-01-29 华南理工大学 Image quality testing method based on deep convolutional neural network
US20160034788A1 (en) * 2014-07-30 2016-02-04 Adobe Systems Incorporated Learning image categorization using related attributes
CN105160678A (en) * 2015-09-02 2015-12-16 山东大学 Convolutional-neural-network-based no-reference stereoscopic image quality evaluation method
CN105205504A (en) * 2015-10-04 2015-12-30 北京航空航天大学 Data-driven learning method for image region-of-interest quality evaluation metrics

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LE KANG et al.: "Convolutional Neural Networks for No-Reference Image Quality Assessment", CVPR 2014 *
ZHU Tao et al.: "A camera coverage quality evaluation algorithm based on deep convolutional neural networks", Journal of Jiangxi Normal University (Natural Science Edition) *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800863A (en) * 2016-08-30 2019-05-24 中国石油大学(华东) Well-log facies recognition method based on fuzzy theory and neural networks
CN109800863B (en) * 2016-08-30 2023-05-23 中国石油大学(华东) Well-log facies recognition method based on fuzzy theory and neural networks
CN106372656B (en) * 2016-08-30 2019-05-10 同观科技(深圳)有限公司 Method and device for obtaining a deep one-shot learning model, and image recognition method and device
CN106372656A (en) * 2016-08-30 2017-02-01 同观科技(深圳)有限公司 Method and device for obtaining a deep one-shot learning model, and image recognition method and device
CN106504233A (en) * 2016-10-18 2017-03-15 国网山东省电力公司电力科学研究院 Faster R-CNN-based power component recognition method and system for UAV inspection images
CN106780448A (en) * 2016-12-05 2017-05-31 清华大学 Benign/malignant classification method for ultrasound thyroid nodules based on transfer learning and feature fusion
CN106780448B (en) * 2016-12-05 2018-07-17 清华大学 Benign/malignant classification system for ultrasound thyroid nodules based on transfer learning and feature fusion
CN106777986A (en) * 2016-12-19 2017-05-31 南京邮电大学 Deep-hashing-based ligand molecular fingerprint generation method for drug screening
CN106777986B (en) * 2016-12-19 2019-05-21 南京邮电大学 Deep-hashing-based ligand molecular fingerprint generation method for drug screening
CN108510071B (en) * 2017-05-10 2020-01-10 腾讯科技(深圳)有限公司 Data feature extraction method, device, and computer-readable storage medium
CN108510071A (en) * 2017-05-10 2018-09-07 腾讯科技(深圳)有限公司 Data feature extraction method, device, and computer-readable storage medium
CN110494890B (en) * 2017-05-24 2023-03-10 赫尔实验室有限公司 System, computer-implemented method, and medium for transfer learning of convolutional neural networks
CN110494890A (en) * 2017-05-24 2019-11-22 赫尔实验室有限公司 Transfer learning of convolutional neural networks from the visible color (RGB) domain to the infrared (IR) domain
CN107463937A (en) * 2017-06-20 2017-12-12 大连交通大学 Automatic detection method for tomato pests and diseases based on transfer learning
CN107239803A (en) * 2017-07-21 2017-10-10 国家海洋局第海洋研究所 Automatic seabed sediment classification method using a deep learning neural network
CN107506740A (en) * 2017-09-04 2017-12-22 北京航空航天大学 Human action recognition method based on a three-dimensional convolutional neural network and a transfer learning model
CN107506740B (en) * 2017-09-04 2020-03-17 北京航空航天大学 Human action recognition method based on a three-dimensional convolutional neural network and a transfer learning model
CN108021936A (en) * 2017-11-28 2018-05-11 天津大学 Breast tumor classification algorithm based on the VGG16 convolutional neural network
CN108363961A (en) * 2018-01-24 2018-08-03 东南大学 Bridge bearing defect recognition method based on transfer learning between convolutional neural networks
CN108647588A (en) * 2018-04-24 2018-10-12 广州绿怡信息科技有限公司 Goods category recognition method, device, computer equipment, and storage medium
CN108875794A (en) * 2018-05-25 2018-11-23 中国人民解放军国防科技大学 Image visibility detection method based on transfer learning
CN108875794B (en) * 2018-05-25 2020-12-04 中国人民解放军国防科技大学 Image visibility detection method based on transfer learning
CN109003601A (en) * 2018-08-31 2018-12-14 北京工商大学 Cross-lingual end-to-end speech recognition method for the low-resource Tujia language
CN109460699A (en) * 2018-09-03 2019-03-12 厦门瑞为信息技术有限公司 Driver safety belt wearing recognition method based on deep learning
CN109460699B (en) * 2018-09-03 2020-09-25 厦门瑞为信息技术有限公司 Driver safety belt wearing recognition method based on deep learning
CN109410169A (en) * 2018-09-11 2019-03-01 广东智媒云图科技股份有限公司 Image background interference degree recognition method and device
CN109410169B (en) * 2018-09-11 2020-06-05 广东智媒云图科技股份有限公司 Image background interference degree recognition method and device
CN109472284A (en) * 2018-09-18 2019-03-15 浙江大学 Battery cell defect classification method based on unbiased-embedding zero-shot learning
CN109740495A (en) * 2018-12-28 2019-05-10 成都思晗科技股份有限公司 Outdoor weather image classification method based on transfer learning technology
CN111191054A (en) * 2019-12-18 2020-05-22 腾讯科技(深圳)有限公司 Recommendation method and device for media data
CN111191054B (en) * 2019-12-18 2024-02-13 腾讯科技(深圳)有限公司 Media data recommendation method and device
CN111259957A (en) * 2020-01-15 2020-06-09 上海眼控科技股份有限公司 Visibility monitoring and model training method, device, terminal and medium based on deep learning

Also Published As

Publication number Publication date
CN105825511B (en) 2018-11-02

Similar Documents

Publication Publication Date Title
CN105825511A (en) Image background definition detection method based on deep learning
CN111368896B (en) Hyperspectral remote sensing image classification method based on dense residual three-dimensional convolutional neural network
CN108764063B (en) Remote sensing image time-sensitive target identification system and method based on characteristic pyramid
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
CN110210545B (en) Infrared remote sensing water body classifier construction method based on transfer learning
CN111275688A (en) Small target detection method based on context feature fusion screening of attention mechanism
Xu et al. High-resolution remote sensing image change detection combined with pixel-level and object-level
CN107392901A (en) A kind of method for transmission line part intelligence automatic identification
CN114092832B (en) High-resolution remote sensing image classification method based on parallel hybrid convolutional network
Li et al. An image-based hierarchical deep learning framework for coal and gangue detection
CN107016357A (en) A kind of video pedestrian detection method based on time-domain convolutional neural networks
CN110633708A (en) Deep network significance detection method based on global model and local optimization
CN104573669A (en) Image object detection method
CN111401426B (en) Small sample hyperspectral image classification method based on pseudo label learning
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN111444939A (en) Small-scale equipment component detection method based on weak supervision cooperative learning in open scene of power field
CN103268607B (en) A kind of common object detection method under weak supervision condition
CN103942749B (en) A kind of based on revising cluster hypothesis and the EO-1 hyperion terrain classification method of semi-supervised very fast learning machine
CN108830312B (en) Integrated learning method based on sample adaptive expansion
CN107967474A (en) A kind of sea-surface target conspicuousness detection method based on convolutional neural networks
CN112418351B (en) Zero sample learning image classification method based on global and local context sensing
CN114092697B (en) Building facade semantic segmentation method with attention fused with global and local depth features
Chen et al. Agricultural remote sensing image cultivated land extraction technology based on deep learning
CN110334584A (en) A kind of gesture identification method based on the full convolutional network in region
CN107545281B (en) Single harmful gas infrared image classification and identification method based on deep learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant