CN109978003A - Image classification method based on a densely connected residual network - Google Patents

Image classification method based on a densely connected residual network

Info

Publication number
CN109978003A
Authority
CN
China
Prior art keywords
residual network
dense connection
network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910135688.7A
Other languages
Chinese (zh)
Inventor
王永雄
严龙
宋天中
张震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201910135688.7A
Publication of CN109978003A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image classification method based on a densely connected residual network. The method first preprocesses the input images and augments the image data; the images are then fed into the densely connected residual network to extract image features; finally, the extracted features are passed to a Softmax classifier to obtain the classification result. The method overcomes the drawbacks of conventional approaches to image classification: traditional methods perform poorly, while common deep learning methods require too much computation and place excessive demands on hardware for training and inference. By combining the advantages of residual networks and densely connected networks, the method achieves a higher recognition rate while occupying fewer computing resources.

Description

Image classification method based on a densely connected residual network
Technical field
The present invention relates to the fields of machine learning and machine vision, and in particular to an image classification method based on a densely connected residual network.
Background art
Image classification is one of the most active research topics in computer vision. Recognizing objects quickly and accurately is of great importance for robot navigation, medical diagnosis, security, industrial inspection, and other applications. Extracting image features is the most important and also the most difficult part of object recognition, and the quality of these features largely determines recognition performance. Compared with traditional image feature extraction methods, deep neural networks can extract more expressive image features and are better suited to image classification, which is why most object recognition frameworks today are based on deep learning. However, most current deep learning methods for image classification are computationally expensive, take a long time to train, and place high demands on training hardware. The deep residual network and the densely connected network are two outstanding convolutional neural network models that have emerged in recent years. The deep residual network effectively avoids the vanishing/exploding gradient and model degradation problems of traditional convolutional neural network models. The densely connected network improves parameter utilization, so the network can reach a higher recognition rate with fewer parameters; however, the large number of dense connections in the network inevitably consumes a large amount of GPU memory, and computing a deep densely connected network places high demands on the hardware of the machine.
Summary of the invention
The technical problem to be solved by the invention is to provide an image classification method based on a densely connected residual network. The method overcomes the drawbacks of conventional deep learning methods for image classification: by combining the advantages of residual networks and densely connected networks, it reaches a higher recognition rate with fewer parameters and reduces the computing resources occupied, effectively improving image recognition.
To solve the above technical problem, the image classification method based on a densely connected residual network according to the present invention comprises the following steps:
Step 1: preprocess the input images and augment the image data;
Step 2: feed the images into the densely connected residual network to extract image features; the densely connected residual network consists of a miniature densely connected network and a residual structure, and within the densely connected residual network a concatenation operation joins the output feature maps of all convolutional layers before the result is fed into the residual function;
Step 3: feed the extracted image features into a Softmax classifier to obtain the classification result.
Further, in the data preprocessing the mean and variance of the entire dataset are subtracted from every sample, so that the model converges faster; the image data augmentation randomly flips and rotates the training images to increase the diversity of the data, so that the model learns more representative features.
Further, the densely connected residual network is expressed by the following formula:
y_l = y_{l-1} + F(H_d([y_0, ..., y_d]), [W_0, ..., W_d])
where y_{l-1} and y_l are respectively the input and output of the l-th densely connected residual structure of the network, [y_0, ..., y_d] denotes all the convolutional layers inside the densely connected residual network, and [W_0, ..., W_d] denotes the weight parameters of each convolutional layer inside the densely connected residual network.
Further, in the classification operation of the Softmax classifier, the output of the Softmax classifier is the probability that each sample belongs to each class, and the loss function is defined over these probabilities, where x_i denotes the input to the i-th neuron of the output layer and θ is the learning parameter.
Since the image classification method based on the densely connected residual network of the present invention adopts the above technical scheme, that is, the method first preprocesses the input images and augments the image data, then feeds the images into the densely connected residual network to extract image features, and finally passes the extracted features to a Softmax classifier to obtain the classification result, it overcomes the drawbacks that, in image classification tasks, traditional methods perform poorly while common deep learning methods require too much computation and place excessive demands on hardware for training and inference. By combining the advantages of residual networks and densely connected networks, the method achieves a higher recognition rate while occupying fewer computing resources.
Brief description of the drawings
The present invention will be further described in detail below with reference to the accompanying drawings and embodiments:
Fig. 1 is a schematic diagram of the residual network structure;
Fig. 2 is a schematic diagram of the densely connected network structure;
Fig. 3 is a schematic diagram of the densely connected residual network structure with two convolutional layers;
Fig. 4 is a schematic comparison of the memory usage of a common convolutional neural network and a densely connected network.
Detailed description of the embodiments
The image classification method based on the densely connected residual network according to the present invention comprises the following steps:
Step 1: preprocess the input images and augment the image data;
Step 2: feed the images into the densely connected residual network to extract image features; the densely connected residual network consists of a miniature densely connected network and a residual structure, and within the densely connected residual network a concatenation operation joins the output feature maps of all convolutional layers before the result is fed into the residual function;
A deep residual network is usually built by stacking a series of residual structures; a residual structure can be expressed by the following formula:
y_l = y_{l-1} + F(y_{l-1}, W_l)
where y_{l-1} denotes the input of the l-th residual structure, y_l denotes its output, W_l denotes the weight parameters of the l-th residual structure, and F denotes the residual function. During training, as long as the network makes F(·) = 0, it realizes an identity mapping y_l = y_{l-1}. Clearly, fitting the fixed function F(·) = 0 is much easier than having the network approximate some unknown mapping y_l = y_{l-1} directly. The residual network structure is shown in Fig. 1;
A densely connected network has very strong feature extraction ability and high parameter utilization. The core idea of the densely connected network is that each convolutional layer receives the outputs of all its preceding convolutional layers as input; the densely connected network structure is shown in Fig. 2. The input of the l-th layer in a densely connected network can be defined as:
y_l = H_l([y_0, y_1, ..., y_{l-1}])
where [y_0, y_1, ..., y_{l-1}] denotes the outputs of layers 0, 1, ..., l-1 and H denotes the concatenation operation, which joins [y_0, y_1, ..., y_{l-1}] in series to form the input of the l-th layer. Because every layer receives the feature maps of all preceding layers, information flows smoothly to the deep layers and parameter utilization is improved. However, a deeper densely connected network needs more memory, because the large number of dense connections in the network inevitably consumes a large amount of GPU memory;
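As a companion illustration, a minimal PyTorch sketch of a dense block in this spirit, where layer l receives the concatenation of all earlier outputs (the growth rate and the BN-ReLU-Conv layer form are assumptions for the example):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense connectivity: y_l = H_l([y_0, y_1, ..., y_{l-1}])."""

    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for l in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + l * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + l * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Concatenate every feature map produced so far and feed it to the next layer.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```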
Therefore, to address the excessive memory consumption of densely connected networks while fully combining the advantages of residual networks and densely connected networks, this method proposes a novel densely connected residual network structure. The densely connected residual structure consists of a small densely connected network and a residual structure; Fig. 3 shows the densely connected residual network structure with two convolutional layers;
Compared with the original residual structure, the input of the summation layer (the residual addition) in the densely connected residual network structure is no longer just the output of the last convolutional layer but the concatenated outputs of all convolutional layers in the residual structure, so every convolutional layer in the residual structure is fully utilized. The densely connected residual network (DRN) is composed of a series of such densely connected residual structures, as sketched below;
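A minimal PyTorch sketch of one densely connected residual structure along these lines, realizing y_l = y_{l-1} + F(H_d([y_0, ..., y_d]), [W_0, ..., W_d]); the 1x1 projection that maps the concatenated features back to the input width is an assumption added so the residual addition is dimensionally valid:

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Densely connected residual structure: the residual branch acts on the
    concatenation of the block input and all internal convolution outputs."""

    def __init__(self, channels: int, num_layers: int = 2):
        super().__init__()
        self.convs = nn.ModuleList()
        for l in range(num_layers):
            self.convs.append(nn.Sequential(
                nn.BatchNorm2d(channels * (l + 1)),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels * (l + 1), channels,
                          kernel_size=3, padding=1, bias=False),
            ))
        # Assumed 1x1 projection so that the concatenated features match the
        # input width before the residual addition.
        self.project = nn.Conv2d(channels * (num_layers + 1), channels,
                                 kernel_size=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            # Each internal layer sees the concatenation of everything before it.
            features.append(conv(torch.cat(features, dim=1)))
        # H_d([y_0, ..., y_d]) -> residual function -> add the block input back.
        return self.relu(x + self.project(torch.cat(features, dim=1)))
```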
Step 3: feed the extracted image features into the Softmax classifier to obtain the classification result.
Preferably, in the data preprocessing the mean and variance of the entire dataset are subtracted from every sample, so that the model converges faster; the image data augmentation randomly flips and rotates the training images to increase the diversity of the data, so that the model learns more representative features.
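As a concrete illustration, a minimal torchvision preprocessing and augmentation pipeline along these lines (the rotation range and the normalization statistics shown are commonly used CIFAR-10 values chosen for the example, not values given in the patent):

```python
import torchvision.transforms as T

# Illustrative normalization statistics (commonly used CIFAR-10 values).
mean = (0.4914, 0.4822, 0.4465)
std = (0.2470, 0.2435, 0.2616)

train_transform = T.Compose([
    T.RandomHorizontalFlip(),      # random flipping for augmentation
    T.RandomRotation(degrees=15),  # random rotation; the range is an assumption
    T.ToTensor(),
    T.Normalize(mean, std),        # normalize with the dataset mean/std
])

test_transform = T.Compose([
    T.ToTensor(),
    T.Normalize(mean, std),
])
```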
Preferably, the densely connected residual network is expressed by the following formula:
y_l = y_{l-1} + F(H_d([y_0, ..., y_d]), [W_0, ..., W_d])
where y_{l-1} and y_l are respectively the input and output of the l-th densely connected residual structure of the network, [y_0, ..., y_d] denotes all the convolutional layers inside the densely connected residual network, and [W_0, ..., W_d] denotes the weight parameters of each convolutional layer inside the densely connected residual network.
Preferably, in the classification operation of the Softmax classifier, the output of the Softmax classifier is the probability that each sample belongs to each class, and the loss function is defined over these probabilities, where x_i denotes the input to the i-th neuron of the output layer and θ is the learning parameter.
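For illustration, a minimal PyTorch sketch of the classification head, assuming the standard softmax cross-entropy formulation (the feature width and class count are placeholders):

```python
import torch
import torch.nn as nn

# Assumed sizes: 64-dimensional pooled features, 10 classes (e.g. CIFAR-10).
classifier = nn.Linear(64, 10)
criterion = nn.CrossEntropyLoss()  # log-softmax combined with negative log-likelihood

features = torch.randn(8, 64)             # a batch of extracted image features
labels = torch.randint(0, 10, (8,))

logits = classifier(features)             # x_i: inputs to the output-layer neurons
probabilities = torch.softmax(logits, 1)  # probability of each class for each sample
loss = criterion(logits, labels)          # cross-entropy over the Softmax output
loss.backward()
```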
The method is described in detail below with reference to the drawings. First, to verify the performance of the proposed densely connected residual network, image classification experiments are carried out on the two benchmark datasets CIFAR-10 and CIFAR-100; then, with comparable numbers of network parameters, the original residual network is compared with densely connected residual networks built from different densely connected residual structures in terms of accuracy and training speed on the CIFAR-10 dataset; finally, the proposed densely connected residual network and the densely connected network are compared in terms of performance and memory usage.
First, to verify the performance of the densely connected residual network proposed by this method, image classification experiments are carried out on the CIFAR-10 and CIFAR-100 benchmark datasets. For these experiments a specific 2D densely connected residual network is designed: it starts with an initial convolutional layer with 3*3 kernels and 32 kernels, followed by three kinds of densely connected residual structures whose kernel sizes are all 3*3 and whose kernel numbers are 16, 32 and 64 respectively; finally a global average pooling layer reduces the feature dimension, and the result is fed into a fully connected layer for classification. The network structure used for the CIFAR datasets is shown in Table 1, where N denotes the number of residual structures and K denotes the number of convolutional layers inside a densely connected residual structure; the depth of the network is controlled by N and K, for example when K is 2 and N is 8 the densely connected residual network is 50 layers deep. This method uses three different densely connected residual networks with K equal to 2, 3 and 4, named DRN-A, DRN-B and DRN-C respectively; an assembly sketch follows after Table 1.
Table 1
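Based on the architecture described above, and reusing the DenseResidualBlock class sketched earlier, a hedged sketch of how such a network might be assembled; the stride-2 transition convolutions between stages and the exact stage composition are assumptions rather than details taken from Table 1:

```python
import torch
import torch.nn as nn

def make_drn(num_blocks: int = 8, layers_per_block: int = 2,
             num_classes: int = 10) -> nn.Sequential:
    """Assemble a densely connected residual network (DRN) for CIFAR-sized inputs.
    Uses the DenseResidualBlock class sketched earlier; stage widths 16/32/64
    after a 32-channel stem follow the description, the transitions are assumed."""
    def stage(in_ch: int, out_ch: int, stride: int):
        layers = [nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
                  nn.BatchNorm2d(out_ch),
                  nn.ReLU(inplace=True)]
        layers += [DenseResidualBlock(out_ch, layers_per_block) for _ in range(num_blocks)]
        return layers

    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1, bias=False),  # initial 3x3 conv, 32 kernels
        nn.BatchNorm2d(32),
        nn.ReLU(inplace=True),
        *stage(32, 16, stride=1),                    # first dense-residual stage
        *stage(16, 32, stride=2),                    # second stage, downsampled
        *stage(32, 64, stride=2),                    # third stage, downsampled
        nn.AdaptiveAvgPool2d(1),                     # global average pooling
        nn.Flatten(),
        nn.Linear(64, num_classes),                  # fully connected classifier
    )

model = make_drn()                         # K = 2, N = 8 in the patent's notation
logits = model(torch.randn(2, 3, 32, 32))  # e.g. a small CIFAR-10 batch
```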
Then, with comparable numbers of network parameters, the original residual network is compared with densely connected residual networks built from different densely connected residual structures in terms of accuracy and training speed on the CIFAR-10 dataset. The experimental results are shown in Table 2, where densely connected residual networks A, B and C use densely connected residual structures with 2, 3 and 4 convolutional layers respectively. The results in Table 2 show that, with comparable numbers of parameters, the densely connected residual networks are clearly better than the original residual network in both accuracy and running speed, indicating that the densely connected residual network uses its parameters more efficiently.
Table 2
Finally, the proposed densely connected residual network and the densely connected network are compared in terms of performance and memory usage. The densely connected network has excellent feature extraction ability, but because of the particular nature of the dense connection structure, the large number of concatenation operations makes its memory usage increase significantly compared with a common deep network, as shown in Fig. 4; because of this drawback, a densely connected network requires a better GPU to run. This embodiment compares densely connected residual networks of similar depth and network width with densely connected networks; Table 3 gives the performance comparison between the densely connected residual network and the densely connected network, where DenseNet-40-48 denotes a 40-layer densely connected network with a growth rate of 48, and DRN-38-C-2 denotes a 38-layer densely connected residual network built from densely connected residual structure C in which the number of kernels of every convolutional layer is doubled, to keep a capacity similar to that of the densely connected network. The results in Table 3 show that the densely connected residual network reaches an accuracy similar to that of the densely connected network while significantly reducing memory usage.
Table 3
The method overcomes the drawbacks that, in image classification tasks, traditional methods perform poorly while common deep learning methods require too much computation and place excessive demands on hardware for training and inference. By combining the advantages of residual networks and densely connected networks, it occupies fewer computing resources while reaching a higher recognition rate, effectively reduces hardware cost, and broadens the application scenarios of image classification.

Claims (4)

1. An image classification method based on a densely connected residual network, characterized in that the method comprises the following steps:
Step 1: preprocessing the input images and augmenting the image data;
Step 2: feeding the images into the densely connected residual network to extract image features, wherein the densely connected residual network consists of a small densely connected network and a residual structure, and within the densely connected residual network a concatenation operation joins the output feature maps of all convolutional layers in the network before the result is fed into the residual function;
Step 3: feeding the extracted image features into a Softmax classifier to obtain the image classification result.
2. The image classification method based on a densely connected residual network according to claim 1, characterized in that: in the data preprocessing, the mean and variance of the entire dataset are subtracted from every sample so that the model converges faster; the image data augmentation randomly flips and rotates the training images to increase the diversity of the data so that the model learns more representative features.
3. The image classification method based on a densely connected residual network according to claim 1, characterized in that the densely connected residual network is expressed by the following formula:
y_l = y_{l-1} + F(H_d([y_0, ..., y_d]), [W_0, ..., W_d])
where y_{l-1} and y_l are respectively the input and output of the l-th densely connected residual structure of the network, [y_0, ..., y_d] denotes all the convolutional layers inside the densely connected residual network, and [W_0, ..., W_d] denotes the weight parameters of each convolutional layer inside the densely connected residual network.
4. The image classification method based on a densely connected residual network according to claim 1, characterized in that: in the classification operation of the Softmax classifier, the output of the Softmax classifier is the probability that each sample belongs to each class, and the loss function is defined over these probabilities, where x_i denotes the input to the i-th neuron of the output layer and θ is the learning parameter.
CN201910135688.7A 2019-02-21 2019-02-21 Image classification method based on intensive connection residual error network Pending CN109978003A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910135688.7A CN109978003A (en) 2019-02-21 2019-02-21 Image classification method based on intensive connection residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910135688.7A CN109978003A (en) 2019-02-21 2019-02-21 Image classification method based on intensive connection residual error network

Publications (1)

Publication Number Publication Date
CN109978003A true CN109978003A (en) 2019-07-05

Family

ID=67077229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910135688.7A Pending CN109978003A (en) 2019-02-21 2019-02-21 Image classification method based on intensive connection residual error network

Country Status (1)

Country Link
CN (1) CN109978003A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991646A (en) * 2017-03-28 2017-07-28 福建帝视信息科技有限公司 A kind of image super-resolution method based on intensive connection network
CN107437096A (en) * 2017-07-28 2017-12-05 北京大学 Image classification method based on the efficient depth residual error network model of parameter
CN108596329A (en) * 2018-05-11 2018-09-28 北方民族大学 Threedimensional model sorting technique based on end-to-end Deep integrating learning network
CN108830242A (en) * 2018-06-22 2018-11-16 北京航空航天大学 SAR image targets in ocean classification and Detection method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TIANZHONG SONG et al.: "Residual network with dense block", Journal of Electronic Imaging *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796716A (en) * 2019-10-21 2020-02-14 东华理工大学 Image coloring method based on multiple residual error networks and regularized transfer learning
CN110796716B (en) * 2019-10-21 2023-04-28 东华理工大学 Image coloring method based on multiple residual error network and regularized transfer learning
CN111652054A (en) * 2020-04-21 2020-09-11 北京迈格威科技有限公司 Joint point detection method, posture recognition method and device
CN111652054B (en) * 2020-04-21 2023-11-03 北京迈格威科技有限公司 Joint point detection method, gesture recognition method and device
CN111861923A (en) * 2020-07-21 2020-10-30 济南大学 Target identification method and system based on lightweight residual error network image defogging
CN112183718A (en) * 2020-08-31 2021-01-05 华为技术有限公司 Deep learning training method and device for computing equipment
WO2022042713A1 (en) * 2020-08-31 2022-03-03 华为技术有限公司 Deep learning training method and apparatus for use in computing device
CN112183718B (en) * 2020-08-31 2023-10-10 华为技术有限公司 Deep learning training method and device for computing equipment

Similar Documents

Publication Publication Date Title
CN111325155B (en) Video motion recognition method based on residual difference type 3D CNN and multi-mode feature fusion strategy
CN109978003A (en) Image classification method based on intensive connection residual error network
CN111369563B (en) Semantic segmentation method based on pyramid void convolutional network
CN110082821B (en) Label-frame-free microseism signal detection method and device
CN112784929B (en) Small sample image classification method and device based on double-element group expansion
CN110781928B (en) Image similarity learning method for extracting multi-resolution features of image
CN111046900A (en) Semi-supervised generation confrontation network image classification method based on local manifold regularization
CN107644221A (en) Convolutional neural networks traffic sign recognition method based on compression of parameters
CN107292267A (en) Photo fraud convolutional neural networks training method and human face in-vivo detection method
CN113762138B (en) Identification method, device, computer equipment and storage medium for fake face pictures
CN109359527B (en) Hair region extraction method and system based on neural network
CN107301396A (en) Video fraud convolutional neural networks training method and human face in-vivo detection method
CN111612799A (en) Face data pair-oriented incomplete reticulate pattern face repairing method and system and storage medium
CN110826462A (en) Human body behavior identification method of non-local double-current convolutional neural network model
CN111242181B (en) RGB-D saliency object detector based on image semantics and detail
CN111178312B (en) Face expression recognition method based on multi-task feature learning network
CN105844291A (en) Characteristic fusion method based on kernel typical correlation analysis
CN112990316A (en) Hyperspectral remote sensing image classification method and system based on multi-saliency feature fusion
CN112884758A (en) Defective insulator sample generation method and system based on style migration method
CN112560828A (en) Lightweight mask face recognition method, system, storage medium and equipment
CN113724354A (en) Reference image color style-based gray level image coloring method
CN114330516A (en) Small sample logo image classification based on multi-graph guided neural network model
Chen et al. Dlfmnet: End-to-end detection and localization of face manipulation using multi-domain features
CN113688715A (en) Facial expression recognition method and system
CN110414586A (en) Antifalsification label based on deep learning tests fake method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190705)