CN109858486A - Deep learning-based data center cloud target identification method - Google Patents
- Publication number: CN109858486A
- Authority: CN (China)
- Prior art keywords: module, network, neural network, training, section
- Legal status: Granted
Abstract
The invention discloses a deep-learning-based data center cloud target identification method that aims to improve recognition accuracy for small targets and detection quality for large-format images. The technical scheme constructs a deep-learning-based data center cloud target recognition system comprising a dimension clustering module and a function module: a focal loss function is defined in the output layer of the constructed convolutional neural network, and the function module includes a sliced detection function. The characteristics of the training-set pictures are fully exploited. The dimension clustering module extracts prior information from the training-set label file, which improves target localization accuracy; the focal loss function makes network training focus on small targets, which improves their recognition accuracy; and the sliced detection function detects large-format images block by block, which further improves both the detection speed and the detection accuracy for such images.
Description
Technical field
The present invention relates to the field of target recognition, and more particularly to a deep-learning-based data center cloud target identification method.
Background technique
With the recent development of deep learning technology, end-to-end learning processes that take raw data as input have become feasible. Convolutional neural networks (Convolutional Neural Networks, CNN) have strong feature extraction and learning capabilities. Existing deep-learning-based object detection methods are therefore mostly built on convolutional neural networks: by training on and learning from large-scale data they obtain highly effective deep features, and by building relatively complex network structures they fully mine the associations within the data, thereby achieving end-to-end target recognition.
As shown in Figure 1, the existing deep-learning-based object detection system consists of an input/output module, a function module, a parsing module, a network layer module, a neural network construction module, a cfg configuration file and a network weight file.
The cfg configuration file is connected with the parsing module. It records the network parameters for building the convolutional neural network, which the parsing module reads; these parameters are divided into network layer parameters and network structure parameters. The network layer parameters include, for each network layer (a convolutional neural network comprises an input layer, convolutional layers, pooling layers, fully connected layers and an output layer), the number of neurons in each feature map (i.e., the size of the layer) and the number of feature maps (i.e., the dimension of the layer). The output layer parameters of the convolutional neural network record the sizes of the prior boxes, which are set according to the size and aspect ratio of ordinary images (generally equal to those of ordinary images, scaled up or down) and are fixed. The network structure parameters include the type, number and combination order of the network layers that make up the neural network.
The network weight file is connected with the neural network construction module and the function module. It stores the network weight parameters received from the function module, which the neural network construction module reads. The network weight parameters are the coefficients of the expressions that relate the inputs and outputs of the neurons across connected layers of the neural network.
The parsing module is connected with the cfg configuration file, the network layer module and the neural network construction module. It reads the network parameters for building the neural network from the cfg configuration file, parses them into network layer parameters and network structure parameters, sends the network layer parameters to the network layer module, and sends the network structure parameters to the neural network construction module.
The network layer module is connected with the parsing module and the neural network construction module. It receives the network layer parameters from the parsing module, instantiates each network layer with them, and sends the instantiated layers to the neural network construction module. A loss function is defined in the output layer of the convolutional neural network; the loss function measures the gap between the predicted values output by the network and the true values. The smaller the value of the loss function, the better the model fit. Common loss functions include the 0-1 loss, the absolute error loss, the quadratic loss and the cross-entropy loss. The loss functions used by existing convolutional neural networks for target recognition are based on the quadratic loss; they are mainly suited to ordinary image recognition (in ordinary images the targets occupy a relatively high proportion of the full image, are few in number and are sparsely distributed) and do not distinguish between large and small targets.
The CNN is trained with a training function, which divides training into two stages: forward propagation and backpropagation. In the forward propagation stage, the Forward function of each layer is called in turn to obtain layer-by-layer outputs; the last layer (the output layer) outputs the predicted values, and the loss function compares the predictions with the sample ground truth to obtain the loss value. The Backward function of the output layer then computes the weight parameter updates, and the Backward function of each layer is called in turn until backpropagation reaches the first layer; the network weight parameters are updated together when backpropagation ends. As training proceeds, the network weight parameters are continually updated and the loss value keeps decreasing, i.e., the error between the network's predictions and the ground truth shrinks. When the value of the loss function no longer decreases, training is complete and the network weight parameters are obtained. Choosing a different loss function changes what the training emphasizes; the per-layer weight updates during training differ, and so does the detection performance of the final neural network model.
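The forward/backward cycle described above can be sketched with a one-layer network and a quadratic loss. This is a minimal illustration of the training loop, not the patent's implementation; the toy data and the name train_step are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))              # toy inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                            # toy ground truth

w = np.zeros(4)                           # network weight parameters
lr = 0.05

def train_step(w):
    pred = X @ w                          # forward propagation: layer output
    loss = np.mean((pred - y) ** 2)       # loss: gap between prediction and truth
    grad = 2 * X.T @ (pred - y) / len(y)  # backward: direction of weight update
    return w - lr * grad, loss            # weights updated when backprop ends

losses = []
for _ in range(200):
    w, loss = train_step(w)
    losses.append(loss)

# As training proceeds the loss keeps decreasing
assert losses[-1] < losses[0]
```

Swapping the quadratic loss for another loss changes only the `loss`/`grad` lines, which is exactly why a different loss function changes what the training emphasizes.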
The neural network construction module is connected with the parsing module, the network layer module, the network weight file and the function module. It receives the network structure parameters from the parsing module and the instantiated network layers from the network layer module, combines the layers according to the structure parameters, and builds the basic framework of the neural network. It also obtains the network weight parameters from the network weight file and assigns them to the basic framework of the neural network, completing the construction of the neural network, which it then sends to the function module.
The input/output module is connected with the function module. It reads the images to be tested from the test set provided by the user, converts them into structures that the function module can recognize and process (such as image, data and box structures), and sends these structures to the function module; it also receives the recognition results from the function module and outputs them to the user.
The function module is connected with the input/output module, the neural network construction module and the network weight file. It calls the training function (see Jake Bouvrie, Notes on Convolutional Neural Networks [J], 2006.11) to train the neural network and stores the network weight parameters into the network weight file; it calls the detection function to carry out target recognition with the neural network, obtains the network's recognition results for the images, and sends the results to the input/output module.
The existing deep-learning-based target identification system performs target identification as follows:
1) The parsing module reads the network parameters from the cfg configuration file, sends the network layer parameters to the network layer module, and sends the network structure parameters to the neural network construction module;
2) The network layer module receives the network layer parameters from the parsing module, defines and implements each network layer, and outputs the network layers to the neural network construction module;
3) The neural network construction module receives the network structure parameters from the parsing module and the network layers from the network layer module, combines the layers in order according to the structure parameters, and builds the basic framework of the neural network. It then obtains the network weight parameters from the network weight file, assigns them to the basic framework, completes the construction of the neural network, and sends the neural network to the function module;
4) The input/output module receives the images to be tested from the user, scales each image to a fixed size M*M (generally 416*416), converts it into structures that the function module can recognize and process, such as image, data and box structures, and inputs these structures into the function module. The function module calls the training function to train the neural network, obtains the network weight parameters and stores them into the network weight file; it then calls the detection function to carry out target detection with the neural network. The prediction results for an image are computed by the neural network output layer; target positions are predicted by adding position offsets to the prior boxes (for the detection principle see Joseph Redmon, YOLO9000: Better, Faster, Stronger [C], CVPR 2017, pp. 7263-7271, page 3 line 1 to page 4 line 4). After obtaining the network's recognition results for the images, the function module passes the results to the input/output module, which outputs them to the user.
However, existing deep-learning-based target identification methods have the following technical problems:
1) Existing object detection methods predict target box positions as relative offsets from the prior boxes. The preset prior boxes are set according to the size and aspect ratio of ordinary images; their values are fixed and stored in the cfg file. In practice, however, the sizes and aspect ratios of many targets differ considerably, so the fixed preset prior box sizes should not be applied directly, and the accuracy of target recognition suffers;
2) The loss functions in existing object detection methods are designed for ordinary images, in which the targets occupy a relatively high proportion of the full image, are few in number and are sparsely distributed. In practice, however, in many aerial images (such as remote sensing images) the targets occupy a very small proportion of the full image and are rather densely distributed. This makes the training emphasis between targets and background unbalanced, leaves small targets insufficiently trained, and causes inaccurate small-target recognition;
3) The input/output module of existing recognition methods applies size-normalization preprocessing to all input images, scaling every image to M*M (generally several hundred by several hundred; 416*416 in YOLOv2). However, some images, such as remote sensing images, are often thousands by thousands or even tens of thousands by tens of thousands of pixels, while the targets in them are usually only tens of pixels across. If such an image is fed directly into the neural network for detection, size normalization loses much of the detail: most targets shrink to a single point, and detection quality on large-format images is greatly reduced.
In view of this, how to solve the low recognition accuracy for small targets and large-format images, and effectively improve small-target recognition accuracy and large-format image detection quality, has become an urgent problem for researchers in this field.
Summary of the invention
The technical problem to be solved by the present invention is to propose a deep-learning-based data center cloud target identification method that makes full use of the characteristics of the training-set pictures, refines prior information through a dimension clustering module, and detects large-format images block by block with a sliced detection function, thereby solving the low recognition accuracy for small targets and large-format images and effectively improving small-target recognition accuracy and large-format image detection quality.
The technical scheme of the invention is as follows:
The first step constructs data center's cloud target identification system based on deep learning.Data based on deep learning
Center cloud target identification system by cloud server, groups of clients at.Telnet, client are installed in client
In store the data set that required by task to be measured is wanted, data set includes test set picture to be measured (figure to be detected in test set
Piece), training set picture to be trained (being used to train the picture of neural network in training set), training set label file, training set mark
Sign the indicia framing information of target in file record training set picture to be trained, position coordinates, width, height and mesh including indicia framing
Target classification (such as aircraft, ship).Client logs in cloud server by telnet, and data set is uploaded to cloud clothes
Business device sends training instruction, detection instruction to cloud server before starting training and testing, long-range to cloud server progress
Training, detection;Cloud server carries out neural metwork training and target identification, dispatches cloud server according to the instruction of client
Computing resource and storage resource, and the training progress msg and recognition result of neural network are sent to client.
In addition to the input/output module, function module, parsing module, network layer module, neural network construction module, cfg configuration file and network weight file, the cloud server is also equipped with a dimension clustering module.
The dimension clustering module is connected with the client and the cfg configuration file. It receives the training-set label file from the client, refines and analyzes the mark-box information in the label file, computes the prior box sizes, and writes the prior box sizes into the cfg configuration file.
The cfg configuration file is connected with the dimension clustering module and the parsing module. Besides recording the network parameters for building the convolutional neural network, it also stores the prior box sizes received from the dimension clustering module as output layer parameters of the network (which belong to the network layer parameters).
The parsing module is connected with the cfg configuration file, the network layer module and the neural network construction module. It reads the network parameters for building the neural network from the cfg configuration file, parses them into network layer parameters and network structure parameters, sends the network layer parameters to the network layer module, and sends the network structure parameters to the neural network construction module.
The network layer module is connected with the parsing module and the neural network construction module. It receives the network layer parameters from the parsing module, instantiates each network layer with them, and sends the instantiated layers to the neural network construction module. Unlike the network layer module shown in Figure 1, the loss function defined in the output layer of the convolutional neural network is the focal loss function. The focal loss is an improvement on the cross-entropy loss: it distinguishes detection targets by their detection difficulty and increases the weight of hard-to-detect targets such as small targets in the loss, enhancing the detection quality for small targets (for the principle of the focal loss see Lin T Y, Goyal P, Girshick R, He K, Focal Loss for Dense Object Detection [C], ICCV 2017, arXiv preprint arXiv:1708.02002, 2018:1-10). The focal loss function is designed for the aerial images encountered in practice, in which the targets usually occupy a very small proportion of the full image and are rather densely distributed.
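A minimal sketch of the binary focal loss in the form given by the cited paper, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t); the default parameters (gamma = 2, alpha = 0.25) follow that paper, and the function name is invented for the example, not taken from the patent.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """p: predicted probability of the positive class; y: label in {0, 1}."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    # (1 - p_t)**gamma down-weights easy, well-classified examples, so
    # training focuses on hard examples such as small targets
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# An easy example (p close to its label) contributes far less loss
# than a hard one, which is the focusing effect described above.
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.30]), np.array([1]))[0]
assert easy < hard
```

With gamma = 0 the modulating factor vanishes and the expression reduces to an alpha-weighted cross-entropy, which is how the paper relates the two losses.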
The neural network construction module is connected with the parsing module, the network layer module, the network weight file and the function module. It receives the network structure parameters from the parsing module and the network layers from the network layer module, combines the layers in order according to the structure parameters, and builds the basic framework of the neural network. It also reads the network weight parameters from the network weight file, assigns them to the basic framework of the neural network, completes the construction of the neural network, and sends the neural network to the function module.
The network weight file is connected with the neural network construction module and the function module. It stores the network weight parameters received from the function module, which the neural network construction module reads.
The input/output module is connected with the function module. It receives the test-set images from the client, converts them into structures that the program can recognize and process (such as image, data and box structures), and sends these structures to the function module.
The function module is connected with the input/output module, the neural network construction module, the network weight file and the client. As in Figure 1, the function module contains a training function and a detection function; it calls the training function to train the neural network and sends the network weight parameters to the network weight file. Unlike Figure 1, the detection function is changed into a sliced detection function: the function module calls the sliced detection function to carry out target detection with the neural network, obtains the network's recognition results for the images, and sends the results to the input/output module.
In the second step, the dimension clustering module, parsing module, network layer module and neural network construction module of the cloud server cooperate to build the basic framework of the neural network, as follows:
2.1 The dimension clustering module receives the training-set label file from the client, reads the mark-box information from it, and finds the prior box sizes, as follows:
2.1.1 The dimension clustering module obtains from the training-set label file the mark-box information of the targets in the training-set pictures (this mark-box information has already been annotated by the user). The width and height of each mark box form a pair (w_i, h_i) (w denotes width, h denotes height, i denotes the mark-box index), and these pairs are the elements of a set S. The number of elements of S is N, which is the number of mark boxes in the training-set pictures, i ∈ [1, N];
2.1.2 The dimension clustering module sets the number of cluster centers to k (a positive integer) and defines the maximum number of iterations Num (generally an integer between 10 and 100). It initializes the first cluster center set C1 as the empty set; let N' be the current number of elements of C1, with initial value 0.
2.1.3 Initialize the k cluster centers, as follows:
2.1.3.1 The dimension clustering module randomly selects an element (w_l, h_l), l ∈ [1, N], from S, takes it as the first cluster center, adds it to the set C1, and lets the variable N' = 1;
2.1.3.2 Let the variables m = 1, n = 1;
2.1.3.3 The dimension clustering module computes the distance d((w_m, h_m), (w_n, h_n)) between the element (w_m, h_m) in S and the element (w_n, h_n) in C1:

d((w_m, h_m), (w_n, h_n)) = 1 - IOU((w_m, h_m), (w_n, h_n))

where, for any element (a, b) in S (a, b being the mark-box width w_m and height h_m) and any element (c, d) in C1 (c, d being the cluster-center width w_n and height h_n), the dimension clustering module computes the intersection-over-union IOU of the corner-aligned rectangular boxes as follows:
If a ≥ c and b ≥ d, then IOU = (c·d) / (a·b);
If a ≥ c and b ≤ d, then IOU = (c·b) / (a·b + c·d - c·b);
If a ≤ c and b ≥ d, then IOU = (a·d) / (a·b + c·d - a·d);
If a ≤ c and b ≤ d, then IOU = (a·b) / (c·d).
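Assuming the boxes are compared corner-aligned, as is standard in dimension clustering, the four cases above all reduce to intersection = min(a, c) * min(b, d). A small sketch makes this check concrete; the function names are invented for the example.

```python
def box_iou(a, b, c, d):
    """IOU of an (a x b) box and a (c x d) box aligned at a common corner."""
    inter = min(a, c) * min(b, d)      # shared corner: overlap is the smaller
    union = a * b + c * d - inter      # extent in each dimension
    return inter / union

def box_distance(a, b, c, d):
    """The clustering distance d = 1 - IOU used above."""
    return 1.0 - box_iou(a, b, c, d)

# A box compared with itself has IOU 1 and distance 0
assert box_iou(10, 20, 10, 20) == 1.0
# Case a >= c, b >= d reduces to (c*d)/(a*b)
assert box_iou(10, 20, 5, 10) == (5 * 10) / (10 * 20)
```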
2.1.3.4 If n < N', let n = n + 1 and go to 2.1.3.3; if n = N', go to 2.1.3.5;
2.1.3.5 If m < N, let m = m + 1 and n = 1 and go to 2.1.3.3; if m = N, go to 2.1.3.6;
2.1.3.6 Let m = 1, n = 1 and D(w_m, h_m) = 1; D(w_m, h_m) is the minimum of the distances between the element (w_m, h_m) in S and the elements (w_n, h_n) in C1;
2.1.3.7 If d((w_m, h_m), (w_n, h_n)) < D(w_m, h_m), let D(w_m, h_m) = d((w_m, h_m), (w_n, h_n)) and go to 2.1.3.8; otherwise go directly to 2.1.3.8;
2.1.3.8 If n < N', let n = n + 1 and go to 2.1.3.7; if n = N', go to 2.1.3.9;
2.1.3.9 If m < N, let m = m + 1, n = 1 and D(w_m, h_m) = 1, and go to 2.1.3.7; if m = N, go to 2.1.3.10;
2.1.3.10 The dimension clustering module computes the sum of the minimum distances, SUM = Σ_{m=1}^{N} D(w_m, h_m);
2.1.3.11 The dimension clustering module selects the (N'+1)-th cluster center with probability proportional to weight:
2.1.3.11.1 Multiply SUM by a random value random (random ∈ [0, 1]) to obtain the value r, initialize the running-sum variable cur = 0, and let m = 1;
2.1.3.11.2 The dimension clustering module computes cur = cur + D(w_m, h_m);
2.1.3.11.3 If cur ≤ r, let m = m + 1 and go to 2.1.3.11.2; if cur > r, add the element (w_m, h_m) of S to the set C1, let N' = N' + 1, and go to 2.1.3.12;
2.1.3.12 If N' < k, go to step 2.1.3.2; if N' = k, the first cluster center set C1 has been obtained; go to 2.1.4.
2.1.4 Let the iteration count t = 1. The dimension clustering module iteratively computes the (t+1)-th cluster center set, as follows:
2.1.4.1 According to the distances between each element of S and the k cluster centers in Ct, the dimension clustering module assigns each element of S to the cluster of its nearest cluster center, as follows: for each element of S there is one cluster center in Ct whose distance d to it is minimal. The elements closest to the first cluster center (w_1, h_1) are grouped into a set C1, the elements closest to the second cluster center (w_2, h_2) into a set C2, and so on, yielding k sets denoted C1, C2, ..., Cp, ..., Ck, p ∈ [1, k].
2.1.4.2 Compute the means (w'_1, h'_1), (w'_2, h'_2), ..., (w'_p, h'_p), ..., (w'_k, h'_k) of the elements of C1, C2, ..., Cp, ..., Ck respectively, where w'_p is the arithmetic mean of the widths of the elements of Cp and h'_p is the arithmetic mean of their heights. The k means form the (t+1)-th cluster center set C_{t+1}; let t = t + 1;
2.1.4.3 If t < Num, go to step 2.1.4.1; if t = Num, write the k elements of the current C_{t+1} into the cfg configuration file as the widths and heights of the prior boxes, and go to 2.2.
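Steps 2.1.2 through 2.1.4 amount to k-means over the (width, height) pairs with 1 - IOU as the distance, seeded by the weighted selection of 2.1.3. A compact sketch under that reading follows; all function names and the toy data are invented for the example.

```python
import random

def iou(box, center):
    (a, b), (c, d) = box, center
    inter = min(a, c) * min(b, d)          # corner-aligned overlap
    return inter / (a * b + c * d - inter)

def dist(box, center):
    return 1.0 - iou(box, center)          # the clustering distance d

def seed_centers(S, k, rng):
    centers = [rng.choice(S)]              # 2.1.3.1: random first center
    while len(centers) < k:
        D = [min(dist(s, c) for c in centers) for s in S]
        r = rng.random() * sum(D)          # 2.1.3.10-2.1.3.11: weighted pick
        cur = 0.0
        for s, d in zip(S, D):
            cur += d
            if cur > r:
                centers.append(s)
                break
    return centers

def dimension_cluster(S, k=5, num_iters=30, seed=0):
    rng = random.Random(seed)
    centers = seed_centers(S, k, rng)
    for _ in range(num_iters):             # 2.1.4: assignment + mean updates
        clusters = [[] for _ in range(k)]
        for s in S:
            p = min(range(k), key=lambda j: dist(s, centers[j]))
            clusters[p].append(s)
        centers = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
            if c else centers[j]           # keep a center with no members
            for j, c in enumerate(clusters)
        ]
    return centers                         # prior box widths and heights

# Toy label file: small and large boxes should yield two distinct priors
S = [(10, 12), (11, 10), (9, 11), (100, 120), (110, 100), (95, 115)]
priors = sorted(dimension_cluster(S, k=2), key=lambda c: c[0] * c[1])
assert priors[0][0] < 20 and priors[1][0] > 80
```

Using 1 - IOU rather than Euclidean distance is what makes the resulting priors scale-aware: a small box far from a large center in pixels but similar in shape still clusters by overlap.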
2.2 The parsing module receives the network parameters for building the neural network from the cfg configuration file, parses them into network layer parameters and network structure parameters, sends the network layer parameters to the network layer module, and sends the network structure parameters to the neural network construction module.
2.4 The network layer module receives the network layer parameters from the parsing module, instantiates each network layer with them, defines the focal loss function in the output layer, and sends the network layers to the neural network construction module.
2.5 The neural network construction module receives the network structure parameters from the parsing module and the network layers from the network layer module, combines the layers according to the structure parameters, and builds the basic framework of the neural network.
In the third step, the cloud server and the client cooperate to train the neural network and complete its construction, as follows:
3.1 The function module obtains the training instruction from the client;
3.2 The input/output module, the function module and the neural network construction module train the basic framework of the neural network, as follows:
3.2.1 The input/output module receives the training-set pictures from the client and converts them into structures that the program can recognize and process, such as image, data and box structures.
3.2.2 The input/output module sends the structures to the function module.
3.2.3 The neural network construction module initializes the network weight parameters of the neural network with random numbers, assigns these weight parameters to the basic framework of the neural network, and completes the construction of the initial neural network.
3.2.4 The neural network construction module sends the initial neural network to the function module;
3.2.5 The function module trains the neural network with the structures: taking the structures as input, it calls the focal loss function in the output layer of the initial neural network, guides the training of the neural network with the focal loss, and generates the trained network weight parameters (for the principle and method of updating the network weight parameters see Jake Bouvrie, Notes on Convolutional Neural Networks [J], 2006.11).
3.3 The function module stores the trained network weight parameters into the network weight file.
3.4 The neural network construction module reads the trained network weight parameters from the network weight file and assigns them to the basic framework of the neural network, completing the construction of the neural network.
In the fourth step, the cloud server and the client cooperate to perform target detection and recognition on the images to be tested, as follows:
4.1 The function module receives the detection instruction from the client;
4.2 The function module, the input/output module and the neural network construction module cooperate to perform target detection and recognition, as follows:
4.2.1 The input/output module receives a test-set picture P from the client and converts P into structures that the function module can recognize and process, such as image, data and box structures;
4.2.2 The function module receives the structures from the input/output module and receives the neural network from the neural network construction module;
4.2.3 The function module calls the sliced detection function and uses the structures to perform sliced detection on the test-set picture P, as follows:
4.2.3.1 Let the width and height of P be W and H, take the upper-left corner as the coordinate origin (0, 0), and let m = 0 and n = 0; M is the size of the neural network input layer, generally between 100 and 1000;
4.2.3.2 The sliced detection function uses the neural network to perform target detection on the slice whose width coordinates lie in the interval [m, m+M] and whose height coordinates lie in the interval [n, n+M]. The prediction results for the slice are computed by the neural network output layer; target positions are predicted by adding position offsets to the prior boxes. This yields the recognition results for the slices with width coordinates in [m, m+M] and height coordinates in [n, n+M], i.e., the position coordinates and category of each target;
4.2.3.3 If m < W-M, let m = m + M and go to 4.2.3.2; if W-M ≤ m ≤ W, go to 4.2.3.4;
4.2.3.4 The sliced detection function uses the neural network to perform target detection on the slice with width coordinates in [m, W] and height coordinates in [n, n+M], obtaining the recognition results for those slices; let m = 0;
4.2.3.5 If n < H-M, let n = n + M and go to 4.2.3.2; if H-M ≤ n ≤ H, go to 4.2.3.6;
4.2.3.6 The sliced detection function uses the neural network to perform target detection on the slice with width coordinates in [m, m+M] and height coordinates in [n, H], obtaining the recognition results for those slices;
4.2.3.7 If m < W-M, let m = m + M and go to 4.2.3.6; if W-M ≤ m ≤ W, go to 4.2.3.8;
4.2.3.8 The sliced detection function uses the neural network to perform target detection on the slice with width coordinates in [m, W] and height coordinates in [n, H], obtaining the recognition result for that slice;
4.2.3.9 The sliced detection function integrates the recognition results of all slices obtained in 4.2.3.2 (width coordinates in [m, m+M], height coordinates in [n, n+M]), 4.2.3.4 (width coordinates in [m, W], height coordinates in [n, n+M]), 4.2.3.6 (width coordinates in [m, m+M], height coordinates in [n, H]) and 4.2.3.8 (width coordinates in [m, W], height coordinates in [n, H]), obtaining the recognition result of the whole image P (of width W and height H).
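The slice traversal of 4.2.3.1 through 4.2.3.9 can be sketched as a window walk over the image, with a final partial column and row; detect() below is a stand-in for the neural network, and all names are invented for the example.

```python
def slice_rects(W, H, M):
    """Return the [x0, x1) x [y0, y1) slices covering a W x H image."""
    rects = []
    n = 0
    while n < H:
        m = 0
        while m < W:
            # full M x M slices, clipped to the image edges at the
            # last column/row (steps 4.2.3.4, 4.2.3.6, 4.2.3.8)
            rects.append((m, min(m + M, W), n, min(n + M, H)))
            m += M
        n += M
    return rects

def sliced_detection(image_w, image_h, M, detect):
    results = []
    for (x0, x1, y0, y1) in slice_rects(image_w, image_h, M):
        # detect returns targets in slice coordinates; shift them back
        # into whole-image coordinates before integrating (step 4.2.3.9)
        for (x, y, cls) in detect(x0, x1, y0, y1):
            results.append((x0 + x, y0 + y, cls))
    return results

# A 1000 x 900 image with M = 416 is covered by a 3 x 3 grid of slices
rects = slice_rects(1000, 900, 416)
assert len(rects) == 9
assert rects[-1] == (832, 1000, 832, 900)
```

Because each slice is already at the network's input size M, no down-scaling is applied, which is why small targets in large-format images keep their detail under this scheme.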
4.2.4 The function module passes the recognition result of P to the input/output module;
4.3 The input/output module outputs the recognition result of P to the client.
The present invention can achieve the following technical effects:
1. In the second step, by designing the dimension clustering module, the invention extracts the prior information of the training set from the training-set label file and computes the prior box sizes, improving the positional accuracy of targets;
2. The invention replaces the existing loss function with the focal loss function, making the training of the network focus on the small targets in the images and improving small-target recognition accuracy;
3. For the severe information loss in large-format image detection, the invention adopts the sliced detection function, improving both the detection speed and the detection accuracy for large-format images.
Detailed description of the invention
Fig. 1 is the architecture diagram of an existing deep-learning-based target identification method;
Fig. 2 is the overall flow chart of the present invention.
Fig. 3 is the architecture diagram of the deep-learning-based data center cloud target identification system designed by the present invention.
Specific embodiment
Fig. 2 is the overall flow chart of the present invention. As shown in Fig. 2, the present invention comprises the following steps:
First step: construct the deep-learning-based data center cloud target identification system. As shown in Fig. 3, the system consists of a cloud server and clients. Telnet software is installed on each client, and the client stores the data set required by the task to be measured; the data set includes test set pictures to be measured, training set pictures to be trained, and the training set label file. The client logs in to the cloud server via telnet, uploads the data set to the cloud server, sends training and detection instructions to the cloud server before training and testing begin, and performs remote training and detection on the cloud server. The cloud server carries out neural network training and target identification according to the instructions of the client, schedules the computing and storage resources of the cloud server, and sends the training progress information and recognition results of the neural network to the client.
In addition to being equipped with an input/output module, a functional module, a parsing module, a network layer module, a neural network building module, a cfg configuration file, and a network weight file, the cloud server is also equipped with a dimension clustering module.
The dimension clustering module is connected with the client and the cfg configuration file. It receives the training set label file from the client, analyzes the mark-frame information in the training set label file, calculates the prior-box sizes, and writes the prior-box sizes into the cfg configuration file.
The cfg configuration file is connected with the dimension clustering module and the parsing module. In addition to recording the network parameters for constructing the convolutional neural network, it also stores the prior-box sizes received from the dimension clustering module as output-layer parameters of the network.
The parsing module is connected with the cfg configuration file, the network layer module, and the neural network building module. It reads the network parameters for constructing the neural network from the cfg configuration file, parses them into network layer parameters and network structure parameters, sends the network layer parameters to the network layer module, and sends the network structure parameters to the neural network building module.
The network layer module is connected with the parsing module and the neural network building module. It receives the network layer parameters from the parsing module, instantiates each network layer using those parameters, and sends the instantiated network layers to the neural network building module. Unlike the network layer module shown in Fig. 1, the loss function defined in the output layer of the convolutional neural network is the focal loss function. The focal loss function is an improvement on the cross-entropy loss function: it distinguishes detection targets by their detection difficulty and increases the weight of hard targets, such as small targets, in the loss function, enhancing the detection effect for small targets.
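To make the weighting concrete, the following is a minimal sketch of a focal loss in plain Python; the exponent gamma and balance factor alpha are illustrative defaults (the patent does not give the exact formula):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for a single binary prediction.
    p: predicted probability of the positive class, in (0, 1);
    y: ground-truth label (1 or 0). gamma down-weights easy,
    well-classified examples; alpha balances the two classes."""
    if y == 1:
        return -alpha * (1.0 - p) ** gamma * math.log(p)
    return -(1.0 - alpha) * p ** gamma * math.log(1.0 - p)

# An easy, confidently detected target contributes almost nothing,
# so the gradient is dominated by hard (e.g. small) targets.
easy = focal_loss(0.9, 1)   # well-classified positive
hard = focal_loss(0.1, 1)   # badly missed positive
```

Compared with plain cross-entropy, the factor (1-p)^gamma shrinks the loss of confident detections, which is what lets training concentrate on small, hard targets.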
The neural network building module is connected with the parsing module, the network layer module, the network weight file, and the functional module. It receives the network structure parameters from the parsing module and the network layers from the network layer module, combines the network layers in order according to the network structure parameters, and constructs the basic framework of the neural network. The neural network building module also reads the network weight parameters from the network weight file and assigns them to the basic framework, completing the construction of the neural network, which it then sends to the functional module.
The network weight file is connected with the neural network building module and the functional module. It stores the network weight parameters received from the functional module, to be read by the neural network building module.
The input/output module is connected with the functional module. It receives the test set images to be measured from the client, converts them into structures that the program can identify and process (such as image, data, and box structures), and sends these structures to the functional module.
The functional module is connected with the input/output module, the neural network building module, the network weight file, and the client. As in Fig. 1, the functional module contains a training function and a detection function; it calls the training function to train the neural network and sends the network weight parameters to the network weight file. Unlike Fig. 1, the detection function is changed into a segmentation detection function: the functional module calls the segmentation detection function to perform target detection using the neural network, obtains the neural network's recognition result for the image, and sends the recognition result to the input/output module.
Second step: the dimension clustering module, parsing module, network layer module, and neural network building module of the cloud server cooperate to construct the basic framework of the neural network. The method is as follows:
2.1 The dimension clustering module receives the training set label file from the client, reads the mark-frame information from it, and finds the prior-box sizes. The method is as follows:
2.1.1 The dimension clustering module obtains the mark-frame information of the targets in the training set pictures to be trained from the training set label file. With the two-tuple (wi, hi) of the width and height of each mark frame as an element (w denotes width, h denotes height, i denotes the mark-frame serial number), it constitutes the set S. The number of elements in S is N, where N is the number of mark frames in the pictures to be trained, and i ∈ [1, N];
2.1.2 The dimension clustering module sets the number of cluster centres to k, a positive integer, and defines the maximum number of iterations as Num, generally an integer between 10 and 100. It initializes the first cluster centre set C1 as the empty set; let N' be the current number of elements in C1, with initial value 0.
2.1.3 Initialize the k cluster centres. The method is:
2.1.3.1 The dimension clustering module randomly selects an element (wl, hl) from S, l ∈ [1, N], takes it as the first cluster centre and adds it to set C1; let the variable N'=1;
2.1.3.2 Let the variables m=1, n=1;
2.1.3.3 The dimension clustering module calculates the distance d((wm,hm),(wn,hn)) between element (wm,hm) in S and element (wn,hn) in C1:
d((wm,hm),(wn,hn)) = 1 - IOU((wm,hm),(wn,hn))
where, for any element (a, b) in S (a and b being the width wm and height hm of a mark frame) and any element (c, d) in C1 (c and d being the width wn and height hn of a cluster centre), the dimension clustering module calculates the rectangle-frame intersection over union IOU as follows (the two frames are compared as if anchored at a common point, so the intersection area is min(a,c)·min(b,d)):
If a ≥ c and b ≥ d, then IOU = (c·d)/(a·b);
If a ≥ c and b ≤ d, then IOU = (c·b)/(a·b + c·d - c·b);
If a ≤ c and b ≥ d, then IOU = (a·d)/(a·b + c·d - a·d);
If a ≤ c and b ≤ d, then IOU = (a·b)/(c·d).
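The four cases reduce to intersection = min(a,c)·min(b,d) over union = a·b + c·d - intersection. A minimal sketch of the 1 - IOU distance used by the dimension clustering module (the function names are illustrative, not from the patent):

```python
def iou_wh(a, b, c, d):
    """IOU of two boxes given only widths/heights (a, b) and (c, d),
    treated as if anchored at a common point."""
    inter = min(a, c) * min(b, d)
    union = a * b + c * d - inter
    return inter / union

def box_distance(wm, hm, wn, hn):
    """Distance between a mark frame and a cluster centre: 1 - IOU."""
    return 1.0 - iou_wh(wm, hm, wn, hn)
```

Identical sizes give distance 0, and very dissimilar sizes approach 1, which is what makes this distance suitable for clustering box shapes regardless of their positions.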
2.1.3.4 If n < N', let n=n+1 and go to 2.1.3.3; if n=N', go to 2.1.3.5;
2.1.3.5 If m < N, let m=m+1, n=1 and go to 2.1.3.3; if m=N, go to 2.1.3.6;
2.1.3.6 Let the variables m=1, n=1, and D(wm,hm)=1; D(wm,hm) is the minimum of the distances between element (wm,hm) in S and the elements (wn,hn) in C1;
2.1.3.7 If d((wm,hm),(wn,hn)) < D(wm,hm), let D(wm,hm) = d((wm,hm),(wn,hn)) and go to 2.1.3.8; otherwise go directly to 2.1.3.8;
2.1.3.8 If n < N', let n=n+1 and go to 2.1.3.7; if n=N', go to 2.1.3.9;
2.1.3.9 If m < N, let m=m+1, n=1, D(wm,hm)=1 and go to 2.1.3.7; if m=N, go to 2.1.3.10;
2.1.3.10 The dimension clustering module calculates the sum of the minimum distances, SUM = D(w1,h1) + D(w2,h2) + … + D(wN,hN);
2.1.3.11 The dimension clustering module selects the (N'+1)-th cluster centre with probability proportional to weight:
2.1.3.11.1 The value r is obtained by multiplying SUM by a random value random (random ∈ [0,1]); initialize the cumulative-sum variable cur=0 and let m=1;
2.1.3.11.2 The dimension clustering module calculates cur = cur + D(wm,hm);
2.1.3.11.3 If cur ≤ r, let m=m+1 and go to 2.1.3.11.2; if cur > r, add the element (wm,hm) in S to set C1, let N'=N'+1, and go to 2.1.3.12;
2.1.3.12 If N' < k, go to step 2.1.3.2; if N'=k, the first cluster centre set C1 is obtained; go to 2.1.4.
2.1.4 Let the iteration number t=1. The dimension clustering module iteratively generates the (t+1)-th cluster centre set. The steps are as follows:
2.1.4.1 According to the distance between each element in S and the k cluster centres in Ct, the dimension clustering module assigns each element in S to the cluster of its nearest cluster centre. The method is:
For each element in S, there is one cluster centre in Ct at minimum distance d from it. Elements nearest to the first cluster centre (w1,h1) are divided into a set C1, elements nearest to the second cluster centre (w2,h2) are divided into a set C2, and so on, obtaining k sets, denoted C1, C2, …, Cp, …, Ck, p ∈ [1, k].
2.1.4.2 Find the means (w'1,h'1), (w'2,h'2), …, (w'p,h'p), …, (w'k,h'k) of the elements in C1, C2, …, Cp, …, Ck respectively, where w'p is the arithmetic mean of the abscissae of the elements in Cp and h'p is the arithmetic mean of their ordinates. The k means obtained form the (t+1)-th cluster centre set Ct+1; let t=t+1;
2.1.4.3 If t < Num, go to step 2.1.4.1; if t=Num, write the k elements in the current Ct+1 into the cfg configuration file as the widths and heights of the prior boxes, and go to 2.2.
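Steps 2.1.1 to 2.1.4 amount to k-means clustering of mark-frame sizes under the 1 - IOU distance, with the new centres seeded by a weighted random draw. A compact sketch under that reading (helper names are illustrative, not from the patent):

```python
import random

def iou_wh(a, b, c, d):
    # IOU of two boxes given widths/heights only, common anchor point
    inter = min(a, c) * min(b, d)
    return inter / (a * b + c * d - inter)

def dist(box, centre):
    return 1.0 - iou_wh(box[0], box[1], centre[0], centre[1])

def dimension_cluster(S, k, num_iters=30):
    """S: list of (w, h) mark-frame sizes. Returns k prior (w, h) pairs."""
    # Seeding (step 2.1.3): each new centre is drawn with probability
    # proportional to its minimum distance to the centres chosen so far.
    centres = [random.choice(S)]
    while len(centres) < k:
        D = [min(dist(b, c) for c in centres) for b in S]
        r, cur = sum(D) * random.random(), 0.0
        for box, d in zip(S, D):
            cur += d
            if cur > r:
                centres.append(box)
                break
    # Iteration (step 2.1.4): assign each box to its nearest centre,
    # then replace each centre by the mean of its cluster.
    for _ in range(num_iters):
        clusters = [[] for _ in range(k)]
        for box in S:
            j = min(range(k), key=lambda i: dist(box, centres[i]))
            clusters[j].append(box)
        centres = [(sum(w for w, _ in cl) / len(cl),
                    sum(h for _, h in cl) / len(cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres
```

The k (w, h) pairs returned here correspond to the prior-box widths and heights that step 2.1.4.3 writes into the cfg configuration file.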
2.2 The parsing module receives the network parameters for constructing the neural network from the cfg configuration file, parses them into network layer parameters and network structure parameters, sends the network layer parameters to the network layer module, and sends the network structure parameters to the neural network building module.
2.4 The network layer module receives the network layer parameters from the parsing module, instantiates each network layer using them, defines the focal loss function in the output layer, and sends the network layers to the neural network building module.
2.5 The neural network building module receives the network structure parameters from the parsing module and the network layers from the network layer module, combines the network layers according to the network structure parameters, and constructs the basic framework of the neural network.
Third step: the cloud server and the client cooperate to train the neural network and complete its construction. The method is:
3.1 The functional module obtains the training instruction from the client;
3.2 The input/output module, the functional module, and the neural network building module train the basic framework of the neural network. The method is:
3.2.1 The input/output module receives the training set pictures from the client and converts them into structures that the program can identify and process.
3.2.2 The input/output module sends the structures to the functional module.
3.2.3 The neural network building module takes random numbers as input to initialize the network weight parameters of the neural network, assigns the weight parameters to the basic framework of the neural network, and completes the initial neural network construction.
3.2.4 The neural network building module sends the initial neural network to the functional module;
3.2.5 The functional module trains the neural network using the structures. The method is: the functional module takes the structures as input, calls the focal loss function in the output layer of the initial neural network, guides the training of the neural network with the focal loss function, and generates the trained network weight parameters.
3.3 The functional module stores the trained network weight parameters into the network weight file.
3.4 The neural network building module reads the trained network weight parameters from the network weight file and assigns them to the basic framework of the neural network, completing the construction of the neural network.
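Steps 3.2.3 to 3.2.5 follow the usual pattern of random weight initialization followed by gradient descent on the output-layer loss. A toy sketch of that pattern on a single sigmoid unit, with numeric gradients of a focal loss; the data, learning rate, and gamma here are illustrative stand-ins, not the patent's network:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def focal_loss(p, y, gamma=2.0):
    p = min(max(p, 1e-7), 1.0 - 1e-7)  # clamp for numerical safety
    if y == 1:
        return -((1.0 - p) ** gamma) * math.log(p)
    return -(p ** gamma) * math.log(1.0 - p)

random.seed(1)
w, b = random.random(), random.random()       # random init (step 3.2.3)
data = [(-3.0, 0), (-2.0, 0), (2.0, 1), (3.0, 1)]  # toy 1-D samples
eps, lr = 1e-5, 0.5
for _ in range(2000):  # training guided by the focal loss (step 3.2.5)
    for x, y in data:
        # central-difference gradients of the loss w.r.t. w and b
        gw = (focal_loss(sigmoid((w + eps) * x + b), y)
              - focal_loss(sigmoid((w - eps) * x + b), y)) / (2 * eps)
        gb = (focal_loss(sigmoid(w * x + b + eps), y)
              - focal_loss(sigmoid(w * x + b - eps), y)) / (2 * eps)
        w, b = w - lr * gw, b - lr * gb
trained = all((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data)
```

The resulting w and b play the role of the trained network weight parameters that step 3.3 stores into the network weight file.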
Fourth step: the cloud server and the client cooperate to perform target detection and identification on the images to be measured. The method is:
4.1 The functional module receives the detection instruction from the client;
4.2 The functional module, the input/output module, and the neural network building module cooperate to perform target detection and identification. The method is:
4.2.1 The input/output module receives the test set picture P to be measured from the client and converts it into structures that the functional module can identify and process, such as image, data, and box structures;
4.2.2 The functional module receives the structures from the input/output module and receives the neural network from the neural network building module;
4.2.3 The functional module calls the segmentation detection function to perform segmented detection on the test set picture P using the structures. The method is as follows:
4.2.3.1 Suppose the width and height of P are W and H. Taking the upper left corner as the coordinate origin (0,0), let m=0 and n=0. M is the size of the neural network input layer, generally between 100 and 1000;
4.2.3.2 The segmentation detection function uses the neural network to perform target detection on the slice whose width coordinate lies in the interval [m, m+M] and whose height coordinate lies in the interval [n, n+M]. The prediction result for the image is calculated by the neural network output layer, and the target position is predicted by adding a position offset to a prior box. This yields the recognition results of the slices in that range, i.e., the position coordinates and category of each target;
4.2.3.3 If m < W-M, let m=m+M and go to 4.2.3.2; if W-M ≤ m ≤ W, go to 4.2.3.4;
4.2.3.4 The segmentation detection function uses the neural network to perform target detection on the slice whose width coordinate lies in the interval [m, W] and whose height coordinate lies in the interval [n, n+M], obtaining the recognition results of the slices in that range; m is reset to 0;
4.2.3.5 If n < H-M, let n=n+M and go to 4.2.3.2; if H-M ≤ n ≤ H, go to 4.2.3.6;
4.2.3.6 The segmentation detection function uses the neural network to perform target detection on the slice whose width coordinate lies in the interval [m, m+M] and whose height coordinate lies in the interval [n, H], obtaining the recognition results of the slices in that range;
4.2.3.7 If m < W-M, let m=m+M and go to 4.2.3.6; if W-M ≤ m ≤ W, go to 4.2.3.8;
4.2.3.8 The segmentation detection function uses the neural network to perform target detection on the slice whose width coordinate lies in the interval [m, W] and whose height coordinate lies in the interval [n, H], obtaining the recognition results of the slices in that range;
4.2.3.9 The segmentation detection function integrates the slice recognition results obtained in 4.2.3.2 (width coordinate in the interval [m, m+M], height coordinate in [n, n+M]), in 4.2.3.4 (width in [m, W], height in [n, n+M]), in 4.2.3.6 (width in [m, m+M], height in [n, H]), and in 4.2.3.8 (width in [m, W], height in [n, H]), obtaining the recognition result of the entire image P (of width W and height H).
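Steps 4.2.3.1 to 4.2.3.9 tile the W×H image into M×M slices, with narrower remainder slices along the right and bottom edges. A sketch that enumerates the same slice rectangles as (m0, m1, n0, n1) tuples, with the detection call itself omitted:

```python
def slice_rectangles(W, H, M):
    """Enumerate the slice rectangles visited by the segmentation
    detection function: full M-by-M tiles plus the right-edge
    remainders [m, W] and bottom-edge remainders [n, H]."""
    tiles = []
    n = 0
    while n < H:
        n1 = n + M if n + M <= H else H   # bottom remainder row uses [n, H]
        m = 0
        while m < W:
            m1 = m + M if m + M <= W else W  # right remainder uses [m, W]
            tiles.append((m, m1, n, n1))
            m += M
        n += M
    return tiles
```

Running the detector on each rectangle and merging the per-slice results reproduces the integration of step 4.2.3.9; the tiles partition the image exactly, so no region is detected twice or skipped.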
4.2.4 The functional module transmits the recognition result of P to the input/output module;
4.3 The input/output module outputs the recognition result of P to the client.
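Step 4.2.3.2 predicts each target position by adding offsets to a prior box, but the patent does not give the decoding formulas. A YOLO-style sketch is offered as one plausible reading; sigma, the cell offset (cx, cy), and the exponential width/height scaling are assumptions:

```python
import math

def sigma(z):
    """Logistic function, bounding the centre shift to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Turn raw output-layer offsets (tx, ty, tw, th) into a box:
    the centre is the cell corner (cx, cy) plus a bounded shift,
    and the prior-box size (pw, ph) is scaled exponentially."""
    bx = cx + sigma(tx)
    by = cy + sigma(ty)
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh
```

Under this reading, zero offsets leave the prior-box size unchanged and place the centre in the middle of the cell, which is why good priors from the dimension clustering module improve positioning accuracy.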
Claims (4)
1. A deep-learning-based data center cloud target identification method, characterized by comprising the following steps:
first step: construct a deep-learning-based data center cloud target identification system; the system consists of a cloud server and clients; telnet software is installed on the client, and the client stores the data set required by the task to be measured; the data set includes test set pictures to be measured, training set pictures to be trained, and a training set label file; the training set label file records the mark-frame information of the targets in the training set pictures to be trained, including the position coordinates, width, height, and category of each mark frame; the client logs in to the cloud server via telnet, uploads the data set to the cloud server, sends training and detection instructions to the cloud server before training and testing begin, and performs remote training and detection on the cloud server; the cloud server carries out neural network training and target identification according to the instructions of the client, schedules the computing and storage resources of the cloud server, and sends the training progress information and recognition results of the neural network to the client;
in addition to an input/output module, a functional module, a parsing module, a network layer module, a neural network building module, a cfg configuration file, and a network weight file, the cloud server is also equipped with a dimension clustering module;
the dimension clustering module is connected with the client and the cfg configuration file; it receives the training set label file from the client, analyzes the mark-frame information in the training set label file, calculates the prior-box sizes, and writes the prior-box sizes into the cfg configuration file;
the cfg configuration file is connected with the dimension clustering module and the parsing module; in addition to recording the network parameters for constructing the convolutional neural network, it also stores the prior-box sizes received from the dimension clustering module as output-layer parameters of the network;
the parsing module is connected with the cfg configuration file, the network layer module, and the neural network building module; it reads the network parameters for constructing the neural network from the cfg configuration file, parses them into network layer parameters and network structure parameters, sends the network layer parameters to the network layer module, and sends the network structure parameters to the neural network building module;
the network layer module is connected with the parsing module and the neural network building module; it receives the network layer parameters from the parsing module, instantiates each network layer using those parameters, and sends the instantiated network layers to the neural network building module; the loss function defined in the output layer of the convolutional neural network is the focal loss function;
the neural network building module is connected with the parsing module, the network layer module, the network weight file, and the functional module; it receives the network structure parameters from the parsing module and the network layers from the network layer module, combines the network layers in order according to the network structure parameters, and constructs the basic framework of the neural network; the neural network building module also reads the network weight parameters from the network weight file and assigns them to the basic framework, completing the construction of the neural network, which it then sends to the functional module;
the network weight file is connected with the neural network building module and the functional module; it stores the network weight parameters received from the functional module, to be read by the neural network building module;
the input/output module is connected with the functional module; it receives the test set images to be measured from the client, converts them into structures that the program can identify and process, and sends these structures to the functional module;
the functional module is connected with the input/output module, the neural network building module, the network weight file, and the client; the functional module calls the training function to train the neural network and sends the network weight parameters to the network weight file; the functional module calls the segmentation detection function to perform target detection using the neural network, obtains the neural network's recognition result for the image, and sends the recognition result to the input/output module;
second step: the dimension clustering module, parsing module, network layer module, and neural network building module of the cloud server cooperate to construct the basic framework of the neural network, the method being:
2.1 the dimension clustering module receives the training set label file from the client, reads the mark-frame information from it, and finds the prior-box sizes, the method being:
2.1.1 the dimension clustering module obtains the mark-frame information of the targets in the training set pictures to be trained from the training set label file; with the two-tuple (wi, hi) of the width and height of each mark frame as an element, it constitutes the set S; the number of elements in S is N, where N is the number of mark frames in the pictures to be trained; w denotes width, h denotes height, i denotes the mark-frame serial number, i ∈ [1, N];
2.1.2 the dimension clustering module sets the number of cluster centres to k, a positive integer, defines the maximum number of iterations as Num, and initializes the first cluster centre set C1 as the empty set; let N' be the current number of elements in C1, with initial value 0;
2.1.3 initialize the k cluster centres, the method being:
2.1.3.1 the dimension clustering module randomly selects an element (wl, hl) from S, l ∈ [1, N], takes it as the first cluster centre and adds it to set C1; let the variable N'=1;
2.1.3.2 let the variables m=1, n=1;
2.1.3.3 the dimension clustering module calculates the distance d((wm,hm),(wn,hn)) between element (wm,hm) in S and element (wn,hn) in C1:
d((wm,hm),(wn,hn)) = 1 - IOU((wm,hm),(wn,hn))
wherein a and b are respectively the width wm and height hm of a mark frame, and c and d are respectively the width wn and height hn of a cluster centre; the dimension clustering module calculates the rectangle-frame intersection over union IOU of any element (a, b) in S and any element (c, d) in C1 as follows:
if a ≥ c and b ≥ d, then IOU = (c·d)/(a·b);
if a ≥ c and b ≤ d, then IOU = (c·b)/(a·b + c·d - c·b);
if a ≤ c and b ≥ d, then IOU = (a·d)/(a·b + c·d - a·d);
if a ≤ c and b ≤ d, then IOU = (a·b)/(c·d);
2.1.3.4 if n < N', let n=n+1 and go to 2.1.3.3; if n=N', go to 2.1.3.5;
2.1.3.5 if m < N, let m=m+1, n=1 and go to 2.1.3.3; if m=N, go to 2.1.3.6;
2.1.3.6 let the variables m=1, n=1, and D(wm,hm)=1; D(wm,hm) is the minimum of the distances between element (wm,hm) in S and the elements (wn,hn) in C1;
2.1.3.7 if d((wm,hm),(wn,hn)) < D(wm,hm), let D(wm,hm) = d((wm,hm),(wn,hn)) and go to 2.1.3.8; otherwise go directly to 2.1.3.8;
2.1.3.8 if n < N', let n=n+1 and go to 2.1.3.7; if n=N', go to 2.1.3.9;
2.1.3.9 if m < N, let m=m+1, n=1, D(wm,hm)=1 and go to 2.1.3.7; if m=N, go to 2.1.3.10;
2.1.3.10 the dimension clustering module calculates the sum of the minimum distances, SUM = D(w1,h1) + D(w2,h2) + … + D(wN,hN);
2.1.3.11 the dimension clustering module selects the (N'+1)-th cluster centre with probability proportional to weight:
2.1.3.11.1 the value r is obtained by multiplying SUM by a random value random (random ∈ [0,1]); initialize the cumulative-sum variable cur=0 and let m=1;
2.1.3.11.2 the dimension clustering module calculates cur = cur + D(wm,hm);
2.1.3.11.3 if cur ≤ r, let m=m+1 and go to 2.1.3.11.2; if cur > r, add the element (wm,hm) in S to set C1, let N'=N'+1, and go to 2.1.3.12;
2.1.3.12 if N' < k, go to step 2.1.3.2; if N'=k, the first cluster centre set C1 is obtained; go to 2.1.4;
2.1.4 let the iteration number t=1; the dimension clustering module iteratively generates the (t+1)-th cluster centre set, the steps being as follows:
2.1.4.1 according to the distance between each element in S and the k cluster centres in Ct, the dimension clustering module assigns each element in S to the cluster of its nearest cluster centre, the method being:
elements nearest to the first cluster centre (w1,h1) are divided into a set C1, elements nearest to the second cluster centre (w2,h2) are divided into a set C2, and so on, obtaining k sets, denoted C1, C2, …, Cp, …, Ck, p ∈ [1, k];
2.1.4.2 find the means (w'1,h'1), (w'2,h'2), …, (w'p,h'p), …, (w'k,h'k) of the elements in C1, C2, …, Cp, …, Ck respectively, where w'p is the arithmetic mean of the abscissae of the elements in Cp and h'p is the arithmetic mean of their ordinates; the k means obtained form the (t+1)-th cluster centre set Ct+1; let t=t+1;
2.1.4.3 if t < Num, go to step 2.1.4.1; if t=Num, write the k elements in the current Ct+1 into the cfg configuration file as the widths and heights of the prior boxes, and go to 2.2;
2.2 the parsing module receives the network parameters for constructing the neural network from the cfg configuration file, parses them into network layer parameters and network structure parameters, sends the network layer parameters to the network layer module, and sends the network structure parameters to the neural network building module;
2.4 the network layer module receives the network layer parameters from the parsing module, instantiates each network layer using them, defines the focal loss function in the output layer, and sends the network layers to the neural network building module;
2.5 the neural network building module receives the network structure parameters from the parsing module and the network layers from the network layer module, combines the network layers according to the network structure parameters, and constructs the basic framework of the neural network;
third step: the cloud server and the client cooperate to train the neural network and complete its construction, the method being:
3.1 the functional module obtains the training instruction from the client;
3.2 the input/output module, the functional module, and the neural network building module train the basic framework of the neural network and generate the trained network weight parameters;
3.3 the functional module stores the trained network weight parameters into the network weight file;
3.4 the neural network building module reads the trained network weight parameters from the network weight file and assigns them to the basic framework of the neural network, completing the construction of the neural network;
fourth step: the cloud server and the client cooperate to perform target detection and identification on the images to be measured, the method being:
4.1 the functional module receives the detection instruction from the client;
4.2 the functional module, the input/output module, and the neural network building module cooperate to perform target detection and identification, the method being:
4.2.1 the input/output module receives the test set picture P to be measured from the client and converts it into structures that the functional module can identify and process;
4.2.2 the functional module receives the structures from the input/output module and receives the neural network from the neural network building module;
4.2.3 the functional module calls the segmentation detection function to perform segmented detection on the test set picture P using the structures and obtains the recognition result of P, the method being:
4.2.3.1 suppose the width and height of P are W and H; taking the upper left corner as the coordinate origin (0,0), let m=0 and n=0; M is the size of the neural network input layer;
4.2.3.2 the segmentation detection function uses the neural network to perform target detection on the slice whose width coordinate lies in the interval [m, m+M] and whose height coordinate lies in the interval [n, n+M]; the prediction result for the image is calculated by the neural network output layer, and the target position is predicted by adding a position offset to a prior box, obtaining the recognition results of the slices in that range, i.e., the position coordinates and category of each target;
4.2.3.3 if m < W-M, let m=m+M and go to 4.2.3.2; if W-M ≤ m ≤ W, go to 4.2.3.4;
4.2.3.4 the segmentation detection function uses the neural network to perform target detection on the slice whose width coordinate lies in the interval [m, W] and whose height coordinate lies in the interval [n, n+M], obtaining the recognition results of the slices in that range; m is reset to 0;
4.2.3.5 if n < H-M, let n=n+M and go to 4.2.3.2; if H-M ≤ n ≤ H, go to 4.2.3.6;
4.2.3.6 the segmentation detection function uses the neural network to perform target detection on the slice whose width coordinate lies in the interval [m, m+M] and whose height coordinate lies in the interval [n, H], obtaining the recognition results of the slices in that range;
4.2.3.7 if m < W-M, let m=m+M and go to 4.2.3.6; if W-M ≤ m ≤ W, go to 4.2.3.8;
4.2.3.8 the segmentation detection function uses the neural network to perform target detection on the slice whose width coordinate lies in the interval [m, W] and whose height coordinate lies in the interval [n, H], obtaining the recognition results of the slices in that range;
4.2.3.9 the segmentation detection function integrates the slice recognition results whose width coordinate lies in [m, m+M] and height coordinate in [n, n+M], those whose width coordinate lies in [m, W] and height coordinate in [n, n+M], those whose width coordinate lies in [m, m+M] and height coordinate in [n, H], and those whose width coordinate lies in [m, W] and height coordinate in [n, H], obtaining the recognition result of the entire image P;
4.2.4 the functional module transmits the recognition result of P to the input/output module;
4.3 the input/output module outputs the recognition result of P to the client.
2. The deep-learning-based data center cloud target identification method according to claim 1, characterized in that said structures refer to image, data, and box structures.
3. The deep-learning-based data center cloud target identification method according to claim 1, characterized in that said Num is an integer between 10 and 100, and said M is between 100 and 1000.
4. The deep learning-based data center cloud target identification method according to claim 1, wherein the method for training the basic framework of the neural network described in step 3.2 is:
3.2.1 The input/output module receives the training set pictures from the client and converts them into structural bodies that the program can identify and process;
3.2.2 The input/output module sends the structural bodies to the functional module;
3.2.3 The neural network construction module takes random numbers as input and initializes the network weight parameters of the neural network, assigning weight parameters to the basic framework of the neural network to complete the construction of the initial neural network;
3.2.4 The neural network construction module sends the initial neural network to the functional module;
3.2.5 The functional module trains the neural network with the structural bodies: taking the structural bodies as input, the functional module calls the focal loss function in the output layer of the initial neural network, trains the neural network under the guidance of the focal loss function, and generates the trained network weight parameters.
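The focusing loss of step 3.2.5 corresponds, in its standard form, to the focal loss; the patent's exact definition is given earlier in the claims and may differ. A NumPy sketch of the standard binary formulation, with the commonly used defaults alpha=0.25 and gamma=2 (these values are assumptions, not taken from the patent):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights well-classified examples so that
    training focuses on hard examples (in detection, often small objects).
    p: predicted probabilities in (0, 1); y: labels in {0, 1}."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - pt) ** gamma * np.log(pt)
```

The modulating factor (1 - pt)^gamma shrinks the loss of easy examples toward zero, shifting gradient mass to hard examples; this matches the patent's stated goal of making training focus on small targets.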
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910076845.1A CN109858486B (en) | 2019-01-27 | 2019-01-27 | Deep learning-based data center cloud target identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109858486A true CN109858486A (en) | 2019-06-07 |
CN109858486B CN109858486B (en) | 2019-10-25 |
Family
ID=66896159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910076845.1A Active CN109858486B (en) | 2019-01-27 | 2019-01-27 | Deep learning-based data center cloud target identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109858486B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110633684A (en) * | 2019-09-20 | 2019-12-31 | 南京邮电大学 | Tobacco purchasing grading system and grading method based on deep learning |
CN110650153A (en) * | 2019-10-14 | 2020-01-03 | 北京理工大学 | Industrial control network intrusion detection method based on focus loss deep neural network |
CN111274893A (en) * | 2020-01-14 | 2020-06-12 | 中国人民解放军国防科技大学 | Aircraft image fine-grained identification method based on component segmentation and feature fusion |
CN111339923A (en) * | 2020-02-25 | 2020-06-26 | 盛视科技股份有限公司 | Vehicle bottom inspection method and system |
CN111461028A (en) * | 2020-04-02 | 2020-07-28 | 杭州视在科技有限公司 | Mask detection model training and detection method, medium and device in complex scene |
CN111881764A (en) * | 2020-07-01 | 2020-11-03 | 深圳力维智联技术有限公司 | Target detection method and device, electronic equipment and storage medium |
CN112989980A (en) * | 2021-03-05 | 2021-06-18 | 华南理工大学 | Target detection system and method based on web cloud platform |
CN115019105A (en) * | 2022-06-24 | 2022-09-06 | 厦门大学 | Latent semantic analysis method, device, medium and equipment of point cloud classification model |
CN116503675A (en) * | 2023-06-27 | 2023-07-28 | 南京理工大学 | Multi-category target identification method and system based on strong clustering loss function |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108319972A (en) * | 2018-01-18 | 2018-07-24 | 南京师范大学 | A kind of end-to-end difference online learning methods for image, semantic segmentation |
CN108509860A (en) * | 2018-03-09 | 2018-09-07 | 西安电子科技大学 | HOh Xil Tibetan antelope detection method based on convolutional neural networks |
CN109145939A (en) * | 2018-07-02 | 2019-01-04 | 南京师范大学 | A kind of binary channels convolutional neural networks semantic segmentation method of Small object sensitivity |
CN109253722A (en) * | 2018-08-22 | 2019-01-22 | 顺丰科技有限公司 | Merge monocular range-measurement system, method, equipment and the storage medium of semantic segmentation |
Non-Patent Citations (1)
Title |
---|
Yao Qunli et al.: "Research Progress of Deep Convolutional Neural Networks in Object Detection", Computer Engineering and Applications * |
Also Published As
Publication number | Publication date |
---|---|
CN109858486B (en) | 2019-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109858486B (en) | Deep learning-based data center cloud target identification method | |
CN110532859B (en) | Remote sensing image target detection method based on deep evolution pruning convolution net | |
CN114462555B (en) | Multi-scale feature fusion power distribution network equipment identification method based on Raspberry Pi | |
Zhou et al. | D-LinkNet: LinkNet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction | |
CN111797717B (en) | High-speed high-precision SAR image ship detection method | |
CN110245709B (en) | 3D point cloud data semantic segmentation method based on deep learning and self-attention | |
CN111832655B (en) | Multi-scale three-dimensional target detection method based on characteristic pyramid network | |
CN110443298B (en) | Cloud-edge collaborative computing-based DDNN and construction method and application thereof | |
CN110135267A (en) | A small object detection method for large-scene SAR images | |
CN113780211A (en) | Lightweight aircraft detection method based on improved YOLOv4-tiny | |
CN108038445A (en) | A SAR automatic target recognition method based on a multi-view deep learning framework | |
CN110826428A (en) | Ship detection method in high-speed SAR image | |
CN113609896A (en) | Object-level remote sensing change detection method and system based on dual-correlation attention | |
CN113838064B (en) | Cloud removal method based on branch GAN using multi-temporal remote sensing data | |
CN113283409B (en) | Airplane detection method in aerial image based on EfficientDet and Transformer | |
CN115512103A (en) | Multi-scale fusion remote sensing image semantic segmentation method and system | |
CN104700100A (en) | Feature extraction method for high spatial resolution remote sensing big data | |
CN111008979A (en) | Robust night image semantic segmentation method | |
CN109523558A (en) | A portrait segmentation method and system | |
CN114998757A (en) | Target detection method for unmanned aerial vehicle aerial image analysis | |
CN114565824B (en) | Single-stage rotating ship detection method based on full convolution network | |
CN116580322A (en) | Unmanned aerial vehicle infrared small target detection method under ground background | |
CN117853955A (en) | Unmanned aerial vehicle small target detection method based on improved YOLOv5 | |
CN111339950A (en) | Remote sensing image target detection method | |
CN114926691A (en) | Insect pest intelligent identification method and system based on convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||