CN107038450A - Unmanned plane policing system based on deep learning - Google Patents
- Publication number
- CN107038450A CN107038450A CN201610894675.4A CN201610894675A CN107038450A CN 107038450 A CN107038450 A CN 107038450A CN 201610894675 A CN201610894675 A CN 201610894675A CN 107038450 A CN107038450 A CN 107038450A
- Authority
- CN
- China
- Prior art keywords
- unmanned plane
- deep learning
- cluster
- node
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an unmanned plane policing system based on deep learning, used for unmanned plane flight management, and relates to the technical fields of unmanned planes and image recognition. The invention builds a three-layer unmanned plane policing system using visual sensor network technology; by configuring an unmanned plane at each node of the visual sensor network, it makes up for the defects of a fixed camera array, namely the existence of monitoring dead angles and the inability to flexibly adjust the monitored area; a convolutional neural network is trained on the data to obtain an identification model with relatively high precision, realizing the supervision and identification of unmanned planes.
Description
Technical field
The invention discloses an unmanned plane policing system based on deep learning, used for unmanned plane flight management, and relates to the technical fields of unmanned planes and image recognition.
Background technology
Current research on unmanned planes mostly focuses on unmanned plane motion control and on the use of image data obtained by unmanned planes; genuine research on supervising and recognizing unmanned planes themselves is still rather scarce. Because of their flexible and controllable advantages, unmanned planes are widely incorporated into image shooting and video collection; at the same time, an unmanned plane, precisely because it is flexible and controllable, causes trouble for the personal safety and privacy of those within its flight range, so feasible schemes for unmanned plane supervision and identification need to be proposed. Abroad there are existing schemes in which unmanned planes cooperate with police law enforcement, but in view of problems such as citizen privacy they have not been put into large-scale use, and such schemes are confined to specific regions such as military restricted zones.

Building an unmanned plane supervisory system on the basis of visual sensor networks (Visual Sensor Networks, VSNs), a special application of wireless sensor networks, may encounter problems of monitoring dead angles. In terms of unmanned plane recognition algorithms, because the patterns of flying objects are diverse, many traditional classic features (such as Haar, HOG, CSS and LBP) cannot be applied to the field of unmanned plane identification. A deep learning algorithm can learn high-level features of an object through its deep network structure; at the same time, deep learning algorithms have good scalability and their training speed still has considerable room for improvement. Therefore, deep learning algorithms provide a feasible idea for realizing unmanned plane identification. In this context, the present invention aims to propose a feasible unmanned plane supervision and identification scheme.
Content of the invention
The object of the present invention is, in view of the deficiencies of the above background art, to provide an unmanned plane policing system based on deep learning, realizing the supervision and identification of unmanned planes and solving the technical problem that there is at present no effective and feasible unmanned plane supervision and identification scheme.

To achieve the above object, the present invention adopts the following technical scheme:
An unmanned plane policing system based on deep learning, including:

a bottom-layer structure for acquiring pictures of the monitored region, specifically a visual sensor network composed of multiple clusters, each cluster comprising multiple nodes and each node comprising a camera array and an unmanned plane;

an intermediate-layer structure for recognizing the unmanned planes in the pictures gathered by each cluster of the bottom-layer structure; and

a top-layer structure for storing the recognition results of the intermediate-layer structure and scheduling the processing tasks of the intermediate-layer structure.
As a further optimization of the unmanned plane policing system based on deep learning, the unmanned plane in each node of the bottom-layer structure flies within the blind-angle range of the camera array of that node or over key areas within the coverage range.

Further, in the unmanned plane policing system based on deep learning, the bottom-layer structure also includes preprocessing chips in one-to-one correspondence with the nodes; each preprocessing chip performs data preprocessing on the images gathered by the corresponding node and transmits the preprocessed images to the intermediate-layer structure.

Further, in the unmanned plane policing system based on deep learning, the intermediate-layer structure includes secondary processing servers whose number corresponds to the number of clusters in the visual sensor network; each secondary processing server recognizes the unmanned planes in the pictures gathered by one cluster.

As a further optimization of the unmanned plane policing system based on deep learning, the intermediate-layer structure also includes control servers for the nodes; one end of a control server receives control commands from the master server, and the other end of the control server sends control commands to the corresponding node.
As a further optimization of the unmanned plane policing system based on deep learning, the intermediate-layer structure uses a convolutional neural network to train an identification model on the massive pictures gathered by each cluster, and recognizes, through the identification model, the unmanned planes in the pictures gathered by each cluster of the bottom-layer structure.

As a further optimization of the unmanned plane policing system based on deep learning, the method of training an identification model on the massive pictures gathered by each cluster using a convolutional neural network is: derive, in the forward pass, the output and the loss of the convolutional neural network from the pictures gathered by each cluster; call backpropagation, and during backpropagation compute the gradient from the loss; bring the gradient into the weight-update calculation and then carry out the next forward pass; the identification model is obtained through repeated forward passes and backpropagation.

Further, in the unmanned plane policing system based on deep learning, when the identification model is trained on the massive pictures gathered by each cluster using a convolutional neural network, a rectified linear unit is used as the activation of the neurons.

Further, in the unmanned plane policing system based on deep learning, the loss layer type of the convolutional neural network is Softmax.
By adopting the above technical scheme, the present invention has the following beneficial effects:

(1) A three-layer unmanned plane policing system is built using visual sensor network technology; by configuring an unmanned plane at each node of the visual sensor network, the defects of a fixed camera array, namely the existence of monitoring dead angles and the inability to flexibly adjust the monitored area, are made up; a convolutional neural network is trained on the data to obtain an identification model with relatively high precision, realizing the supervision and identification of unmanned planes.

(2) The system configures one secondary processing server for each cluster in the visual sensor network; having the secondary processing servers first process the data gathered by each cluster lightens the burden on the master server, and configuring a preprocessing chip for each node lightens the burden on the secondary servers.

(3) A rectified linear unit is chosen as the activation to solve the gradient-vanishing problem, and a Softmax loss layer is chosen so that the gradient is more stable; in the training process of the convolutional neural network, the output and the loss are first derived in the forward pass, and then the gradient is backpropagated to update the weights; the identification model is trained through repeated forward passes and backpropagation, solving the unified optimization problem of loss minimization.
Brief description of the drawings
Fig. 1 is a schematic diagram of the composition and nodes of the VSNs in the system of the present invention.
Fig. 2 is the network structure diagram of the system of the present invention.
Fig. 3 is the operation block diagram of the system of the present invention.
Fig. 4 is the deep network structure of the recognition algorithm of the present invention.
Fig. 5(a) and Fig. 5(b) are the accuracy figures for the cases in which the identification model of the present invention classifies correctly and incorrectly, respectively.
Fig. 6 is the ROC curve figure of the identification model of the present invention.
Embodiment
The technical scheme of the invention is described in detail below in conjunction with the accompanying drawings.
As shown in Fig. 1, the present invention constitutes a VSNs node from a camera array and several unmanned planes, and uses VSNs technology to build the framework of the unmanned plane policing system. This application adds to each node constituting the VSNs an unmanned plane that has obtained flight clearance as an auxiliary monitoring means; the cleared unmanned plane can patrol and monitor the monitoring dead angles of the camera array, or carry out key monitoring within the coverage range of the camera array. The unmanned plane makes up for the defects of fixed ground monitoring, namely the existence of monitoring gaps and the insufficient flexibility of the monitoring mode, and can also carry out key monitoring of local areas when needed, improving the performance of the whole system.
Furthermore, considering the huge volume of data provided by multiple nodes and the need for an optimal control structure, the visual sensor network shown in Fig. 1 is designed as the three-layer structure shown in Fig. 3. The third layer at the bottom is composed of a series of clusters, each made up of an unequal number of nodes as shown in Fig. 2 (only one cluster is illustrated in Fig. 3); the nodes in each cluster uniformly send their data to one secondary processing server. The second layer in the middle is composed of secondary processing servers; after each secondary processing server has processed the data sent by one cluster, it sends the recognition results to the top layer. The first layer at the top is the master server; the master server stores the recognition results sent by the secondary processing servers and arranges and schedules the processing tasks of the secondary processing servers. In addition, the intermediate layer also includes control servers that control the operation of each node; one end of a control server receives control commands from the master server, and the other end of the control server sends control commands to the corresponding node. To ensure that control commands are delivered accurately, the communication between a control server and each node and the communication between a control server and the master server are both bidirectional.
The initial command issued by the master server is conveyed through the control servers to each node of the bottom layer; each node starts image acquisition/video capture after receiving the initial command, and the whole system enters the working state. The monitoring images or videos obtained by the nodes of a cluster (in-cluster data for short) must first pass through a preprocessing chip; the preprocessing chip performs some preliminary work on the in-cluster data it receives, including frame-difference extraction, the purpose being to lighten the load on the secondary processing server that processes the data of multiple nodes at the same time. When the secondary processing server issues a recognition instruction to each node in the cluster, the preprocessed in-cluster data are sent to the secondary processing server corresponding to that cluster; the secondary processing server runs the recognition algorithm, processes the in-cluster data to obtain the recognition result, and transmits the prejudged result together with information such as position information to the master server. While the secondary processing server processes the in-cluster data and uploads the recognition results, the control server sends feedback containing the running status of each node to the master server; the master server then changes or maintains the control mode of each node according to the feedback information.
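To make the workflow just described concrete, the following minimal Python sketch models one round of the command/data/feedback flow between a node, its preprocessing chip, a secondary processing server and the master server. All class and method names are hypothetical illustrations of the roles described above; they are not components disclosed by this application.

```python
# Minimal sketch of the three-layer command/data/recognition flow described above.
# All names (Node, PreprocessingChip, SecondaryServer, MasterServer) are
# hypothetical illustrations, not components disclosed by the patent.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id

    def capture(self):
        # Stand-in for image acquisition / video capture at a camera-array + UAV node.
        return {"node": self.node_id, "frames": ["frame-0", "frame-1"]}

class PreprocessingChip:
    def preprocess(self, raw):
        # Stand-in for preliminary work such as frame-difference extraction,
        # which lightens the load on the secondary processing server.
        raw["frames"] = raw["frames"][-1:]           # keep only the changed frame
        return raw

class SecondaryServer:
    def recognize(self, data):
        # Stand-in for running the CNN identification model on in-cluster data.
        return {"node": data["node"], "label": "UAV", "position": (0.0, 0.0)}

class MasterServer:
    def __init__(self):
        self.results = []

    def store(self, result):
        # The top layer stores recognition results and schedules processing tasks.
        self.results.append(result)

command = "start-acquisition"              # initial command relayed via the control server
node, chip = Node(1), PreprocessingChip()
cluster_server, master = SecondaryServer(), MasterServer()

raw = node.capture() if command == "start-acquisition" else None
in_cluster_data = chip.preprocess(raw)     # node data pass through the preprocessing chip
master.store(cluster_server.recognize(in_cluster_data))
print(master.results)
```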
The unmanned plane recognition algorithm run by the secondary processing servers obtains an efficient identification model by training a deep learning network based on convolutional neural networks (Convolutional Neural Networks, CNNs), and thereby realizes the classification of unmanned planes and non-unmanned planes. The convolutional neural network is shown in Fig. 4: convolutional layer 1 has 48 filters with a kernel size of 9; convolutional layer 2 has 64 filters with a kernel size of 5; convolutional layer 3 has 64 filters with a kernel size of 3. The pooling method of the pooling layers is set to max pooling. There is no Dropout layer in the network.
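For reference, the layer configuration of Fig. 4 can be summarized as in the sketch below. Only the filter counts, kernel sizes, max pooling and the absence of Dropout come from the description; the 64x64 input resolution, the pooling kernel/stride of 2 and the absence of padding are illustrative assumptions, since the application does not state them.

```python
# Sketch of the Fig. 4 network: three convolutional layers (48x9, 64x5, 64x3 filters),
# each followed by max pooling, and no Dropout layer. Strides, padding and the
# 64x64 input size are illustrative assumptions; the patent does not specify them.

layers = [
    {"type": "conv", "filters": 48, "kernel": 9},
    {"type": "pool", "method": "max", "kernel": 2, "stride": 2},
    {"type": "conv", "filters": 64, "kernel": 5},
    {"type": "pool", "method": "max", "kernel": 2, "stride": 2},
    {"type": "conv", "filters": 64, "kernel": 3},
    {"type": "pool", "method": "max", "kernel": 2, "stride": 2},
]

def spatial_size(size, layer):
    """Output side length of a square feature map (no padding assumed)."""
    if layer["type"] == "conv":
        return size - layer["kernel"] + 1
    return (size - layer["kernel"]) // layer["stride"] + 1

size = 64                      # assumed input resolution
for layer in layers:
    size = spatial_size(size, layer)
    print(layer, "->", size, "x", size)
```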
1) Selection of the activation function

Regarding the activation of the neurons in the network, traditional neural networks commonly use two activation functions, the Sigmoid function and the TanH function. Judging from their graphs, the Sigmoid and TanH functions map the input well onto a bounded interval, which is an improvement over the earlier linear or step activation functions. In fact, a deep network does not depend that strongly on nonlinearity; at the same time, sparse features do not require the network to have a strong mechanism for handling linearly inseparable data. Considering the above two points, using a linear activation function in a deep learning model is more suitable.
Therefore we select the rectified linear unit (Rectified Linear Unit, ReLU) function as the activation function; its expression is as follows:

f(x) = max(0, x).  (1)
Since Grad = Error · Sigmoid'(x) · x, the Sigmoid function, which saturates at both ends, causes the gradient to decay once the recursion is carried through multilayer backpropagation, which in turn slows down network learning. After ReLU is chosen as the activation function, the gradient-vanishing problem encountered when training a deep network with gradient methods is addressed: because the ReLU function saturates only at one end, this problem does not exist and the final objective function can converge.
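As a small numerical illustration of formula (1) and of the saturation argument above, the NumPy sketch below compares the ReLU activation and its gradient with the Sigmoid activation and its gradient; the sample inputs are arbitrary.

```python
import numpy as np

def relu(x):
    # Formula (1): f(x) = max(0, x); single-ended saturation, gradient 1 for x > 0.
    return np.maximum(0.0, x)

def relu_grad(x):
    return (x > 0).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # at most 0.25, so it shrinks through many layers

x = np.array([-6.0, -1.0, 0.0, 1.0, 6.0])
print("ReLU    :", relu(x), relu_grad(x))
print("Sigmoid :", sigmoid(x), sigmoid_grad(x))   # gradients near 0 at both ends (saturation)
```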
2) Selection of the loss layer

The loss layer drives the learning process by comparing the output with the target and configuring the parameters so as to minimize the cost. The loss itself is computed by the forward pass, and the gradient of the loss is computed by the backward pass. The following types are common:

The Softmax loss layer computes the multinomial logistic loss of the softmax of its input. It is conceptually equivalent to a softmax layer followed by a multinomial logistic loss layer, but it provides a numerically more stable gradient.
The Euclidean loss layer computes the sum of squared differences of its two inputs:

$$E = \frac{1}{2N}\sum_{i=1}^{N}\left(x_i^{1}-x_i^{2}\right)^{2} \qquad (2)$$

In formula (2), N is the dimension of the Euclidean space, and $x_i^{1}$, $x_i^{2}$ are the coordinates of the two input quantities in the i-th dimension.
Since it is expected that the unmanned plane policing system of the present invention will in the future not only be able to distinguish whether there is an unmanned plane in the region of the gathered image, but also be able to further recognize whether a non-unmanned-plane object is specifically a bird or some other object, this application uses a multi-classifier that extends logistic regression, namely Softmax regression, to realize the classification. Its hypothesis h and loss function J are respectively:

$$h_\theta\!\left(x^{(i)}\right)=\begin{bmatrix}p\!\left(y^{(i)}=1\mid x^{(i)};\theta\right)\\ p\!\left(y^{(i)}=2\mid x^{(i)};\theta\right)\\ \vdots\\ p\!\left(y^{(i)}=k\mid x^{(i)};\theta\right)\end{bmatrix}=\frac{1}{\sum_{j=1}^{k}e^{\theta_j x^{(i)}}}\begin{bmatrix}e^{\theta_1 x^{(i)}}\\ e^{\theta_2 x^{(i)}}\\ \vdots\\ e^{\theta_k x^{(i)}}\end{bmatrix},\qquad J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{k}1\!\left\{y^{(i)}=j\right\}\log\frac{e^{\theta_j x^{(i)}}}{\sum_{l=1}^{k}e^{\theta_l x^{(i)}}} \qquad (3)$$

In formula (3), θ is a parameter matrix with k rows; each row of the parameter matrix characterizes the classifier corresponding to one class, and each column of the parameter matrix characterizes one feature extracted by the classifiers; $h_\theta(x^{(i)})$ represents the classification result of the system for the input quantity $x^{(i)}$ under the parameter matrix θ; $y^{(i)}$ represents the class into which the input quantity $x^{(i)}$ is classified; $p(y^{(i)}=1\mid x^{(i)};\theta)$, $p(y^{(i)}=2\mid x^{(i)};\theta)$, …, $p(y^{(i)}=k\mid x^{(i)};\theta)$ respectively represent the probabilities of classifying the i-th input quantity into the 1st class, the 2nd class, …, the k-th class; $\theta_1 x^{(i)}$, $\theta_2 x^{(i)}$, …, $\theta_k x^{(i)}$ are the corresponding unnormalized scores for the 1st class, the 2nd class, …, the k-th class; m is the number of training samples; and 1{·} is an indicator function whose value is 1 when the expression inside the braces is true and 0 otherwise.

Therefore, the loss layer type selected for this deep learning network is SoftmaxWithLoss.
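The two loss layers discussed above can be written out directly. The NumPy sketch below is an illustrative re-implementation of the softmax-with-loss of formula (3) and the Euclidean loss of formula (2); it is not caffe's own code, and the toy scores and labels are made up.

```python
import numpy as np

def softmax_with_loss(scores, labels):
    """Multinomial logistic loss of the softmax of the input (formula (3)).

    scores: (m, k) raw class scores theta_j * x^(i); labels: (m,) integer classes.
    Shifting by the row maximum gives the numerically more stable behaviour
    mentioned in the description.
    """
    shifted = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    m = scores.shape[0]
    return -np.log(probs[np.arange(m), labels]).mean()

def euclidean_loss(x1, x2):
    """Sum of squared differences of two inputs (formula (2)), scaled by 1/(2N)."""
    n = x1.shape[0]
    return np.sum((x1 - x2) ** 2) / (2.0 * n)

# Toy example with k = 2 classes (unmanned plane vs. bird) and m = 3 samples:
scores = np.array([[2.0, 0.1], [0.2, 1.5], [3.0, -1.0]])
labels = np.array([0, 1, 0])
print(softmax_with_loss(scores, labels))
print(euclidean_loss(np.array([1.0, 2.0]), np.array([1.5, 1.0])))
```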
3) Solving the deep network

Solving the network improves the parameter updates against the loss by adjusting the forward propagation of the network and the backward gradients, finally realizing the optimization of the model. Specifically, the solver first calls the forward pass to produce the output and the loss, then calls the backward method to produce the gradients of the model; after that the gradients are merged into the weight update, so that the loss is minimized.
The obtained gradients are used for the parameter update; what is needed is a solving scheme for the unified optimization problem of loss minimization. For an input data set D, the optimization objective is the average loss function over all |D| data instances of the whole data set:

$$L(W)=\frac{1}{|D|}\sum_{i}^{|D|}f_{W}\!\left(X^{(i)}\right)+\lambda r(W)$$

Here, $f_{W}(X^{(i)})$ is the loss on the data instance $X^{(i)}$, and $r(W)$ is a regularization term with weight coefficient λ. Because |D| can be very large, in practice each iteration of the solver uses a stochastic estimate of this objective, drawn from a mini-batch of size N, with N ≪ |D|:

$$L(W)\approx\frac{1}{N}\sum_{i}^{N}f_{W}\!\left(X^{(i)}\right)+\lambda r(W)$$

The model computes $f_{W}$ in the forward pass and the gradient $\nabla f_{W}$ in the backward pass; the parameter update ΔW is formed, according to the solving scheme, from the error gradient $\nabla f_{W}$, the regularization gradient $\nabla r(W)$, and the specifics of the particular method.
A commonly used method is stochastic gradient descent (Stochastic Gradient Descent, SGD). Its update formulas are as follows:

$$V_{t+1}=\mu V_{t}-\alpha\nabla L\!\left(W_{t}\right),\qquad W_{t+1}=W_{t}+V_{t+1}$$

where V denotes the weight update amount, the subscript t denotes the iteration number, α is the learning rate, and μ is the momentum parameter. The settings of the parameters α and μ can follow certain strategies.
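The update rule above can be stated as a short NumPy sketch; the quadratic toy objective and the learning-rate and momentum values used here are illustrative only.

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.001, momentum=0.9):
    """One SGD step: V_{t+1} = mu * V_t - alpha * grad, W_{t+1} = W_t + V_{t+1}."""
    v = momentum * v - lr * grad
    return w + v, v

# Toy objective L(W) = 0.5 * ||W||^2, whose gradient is W itself.
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(1000):
    w, v = sgd_momentum_step(w, v, grad=w, lr=0.05)
print(w)   # converges towards the minimizer at the origin
```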
The present invention first extends the traditional visual sensor network (Visual Sensor Networks, VSNs) and designs a construction scheme for an unmanned plane policing system; it then studies, for this system, an unmanned plane recognition algorithm based on deep learning, thereby providing important technical support for the unmanned plane policing system. The unmanned plane recognition algorithm trains a deep learning network based on convolutional neural networks (Convolutional Neural Networks, CNNs) to obtain an efficient identification model and realizes the classification between unmanned planes and non-unmanned planes.
The steps of the deep learning model training used by the present invention are shown in Table 1:

Table 1: Flow of the unmanned plane recognition algorithm based on deep learning

We experiment under the caffe framework; the platform uses a GTX 980 Ti, and the operating system is Ubuntu 14.04.
1. Data preprocessing

The experimental data come from ImageNet and from Baidu image search with manual screening, so no extraction step is needed. First, find commands are executed to import the samples into a text file, and then the corresponding labels are generated with sed commands. Then the script file create_somenet.sh provided with caffe is run to convert the image data and their label text file into lmdb format, and the corresponding mean file is generated with the script file make_somenet_mean.sh.
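The find/sed step described above simply produces a text file of "image-path label" lines that create_somenet.sh then converts to lmdb. The Python sketch below is a hypothetical equivalent of that step; the directory layout and the 0/1 label assignment (0 for bird, 1 for unmanned plane) are assumptions.

```python
import os

def write_label_list(root, out_file, class_dirs=None):
    """Write "relative/path label" lines for create_somenet.sh (hypothetical helper)."""
    if class_dirs is None:
        class_dirs = {"bird": 0, "uav": 1}   # assumed folder names and labels
    with open(out_file, "w") as f:
        for class_dir, label in sorted(class_dirs.items()):
            folder = os.path.join(root, class_dir)
            for name in sorted(os.listdir(folder)):
                if name.lower().endswith((".jpg", ".jpeg", ".png")):
                    f.write("%s/%s %d\n" % (class_dir, name, label))

# write_label_list("data/train", "train.txt")
# write_label_list("data/val", "val.txt")
```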
2. Network training

The network structure designed in this application is written as prototxt files; each layer is configured in the form of a markup language, and the input data are linked in through the data layers. At the same time, the training configuration file solver.prototxt is set: the learning policy is set to step, the base learning rate is set to 0.001, and the maximum number of iterations is 5000.
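For reference, the solver settings quoted above (step learning policy, base learning rate 0.001, maximum of 5000 iterations) could be written into solver.prototxt as in the sketch below; every other field (net path, stepsize, gamma, momentum, weight decay, snapshot settings, solver mode) is an illustrative assumption rather than a value disclosed in this application.

```python
# Writes a minimal solver.prototxt reflecting the settings stated above:
# step learning policy, base_lr 0.001, max_iter 5000. The remaining fields
# (net path, stepsize, gamma, momentum, weight_decay, snapshot prefix) are
# illustrative assumptions, not values disclosed in this application.
solver_text = """net: "somenet_train_val.prototxt"
base_lr: 0.001
lr_policy: "step"
stepsize: 1000
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
max_iter: 5000
display: 100
snapshot: 1000
snapshot_prefix: "snapshots/somenet"
solver_mode: GPU
"""

with open("solver.prototxt", "w") as f:
    f.write(solver_text)
```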
Then training of the network is started by running train_somenet.sh, and autonomous learning begins. During model training, training can be terminated manually ahead of time according to the convergence of the loss function value.
3. Model testing

After the model is obtained, Python code is developed on the ipython notebook platform to complete the test and analysis work. The caffe-related libraries are first imported in the program, then the model file and the deployment file deploy.prototxt are loaded, and an array is created to store the test samples.
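A typical way to carry out this loading step with caffe's Python interface is sketched below. The file names, the input preprocessing and the output blob name "prob" are assumptions, and the snippet follows caffe's standard classification example rather than the exact code used in the experiment.

```python
# Sketch of loading the trained model for testing with caffe's Python interface.
# File names, preprocessing choices and the "prob" output name are assumptions;
# the calls follow caffe's standard classification example, not the experiment code.
import numpy as np
import caffe

caffe.set_mode_gpu()
net = caffe.Net("deploy.prototxt", "somenet_iter_5000.caffemodel", caffe.TEST)

transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
transformer.set_transpose("data", (2, 0, 1))        # HxWxC -> CxHxW
transformer.set_raw_scale("data", 255)              # [0,1] float image -> [0,255]
transformer.set_channel_swap("data", (2, 1, 0))     # RGB -> BGR
# mean subtraction from the make_somenet_mean.sh file is omitted in this sketch

image = caffe.io.load_image("test/uav_0001.jpg")
net.blobs["data"].data[...] = transformer.preprocess("data", image)
probs = net.forward()["prob"][0]                    # e.g. [p_bird, p_uav]
print(probs)
```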
Next, the accuracy calculation and the AUC calculation are carried out respectively. The AUC is obtained from the ROC points by the following equation (for i ∈ {1, …, a+b−1}):

$$AUC=\sum_{i=1}^{a+b-1}\frac{1}{2}\left(TPR_{i}+TPR_{i+1}\right)\left(FPR_{i+1}-FPR_{i}\right)$$

where FPR is the false alarm rate and TPR is the recall rate.
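Numerically, this AUC is a trapezoidal summation over the (FPR, TPR) points swept out by the score thresholds. The NumPy sketch below illustrates the computation with made-up labels and scores.

```python
import numpy as np

def roc_auc(labels, scores):
    """Trapezoidal AUC over the ROC points swept out by all score thresholds.

    labels: 1 for unmanned plane (positive), 0 for bird; scores: predicted
    probability of the positive class. The data below are made up.
    """
    order = np.argsort(-scores)
    labels = labels[order]
    tps = np.cumsum(labels)                  # true positives at each threshold
    fps = np.cumsum(1 - labels)              # false positives at each threshold
    tpr = np.concatenate(([0.0], tps / tps[-1]))
    fpr = np.concatenate(([0.0], fps / fps[-1]))
    return np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0)

labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.95, 0.80, 0.75, 0.60, 0.55, 0.30, 0.25, 0.10])
print(roc_auc(labels, scores))
```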
Below, the performance of the model trained in the experiment is analyzed in conjunction with the accompanying drawings.

As can be seen from Fig. 5, the highest recognition rate reached 92.51%. The model training stage used 2800 pictures, of which 1770 were bird pictures and 1030 were unmanned plane pictures. During testing 614 pictures were input, of which 356 were bird pictures and 258 were unmanned plane pictures. Fig. 5(a) shows the cases in which the model classifies correctly, 568 in total; Fig. 5(b) shows the cases in which the model classifies incorrectly, 46 in total; the sum is 614. A misclassification is either mistaking a bird for an unmanned plane or mistaking an unmanned plane for a bird.
From the statistical point of view, one obtains:

accuracy = 568 / 614 × 100% ≈ 92.51%.
In terms of accuracy, the recognition effect of the model is good. In addition, the horizontal axis of the figure represents the probability of a bird, and the vertical axis represents the probability of an unmanned plane. If the case of a single input is analyzed, every data point satisfies:

prob_UAV + prob_bird = 1,  (8)

where prob_UAV and prob_bird are respectively the probabilities that the data point is an unmanned plane or a bird.

Although it can be clearly seen from the figure of correct classifications that data points are present at almost every position on the line represented by formula (8), most points are concentrated at the two ends, which shows that for any input among these points, whether it is judged as a bird or as an unmanned plane, the confidence with which the model assigns the correct class can reach more than 90%.
Fig. 6 compares the AUC of three trained models. The test results of the models show that this method has good performance; the AUC of the best-performing model reaches 0.974, proving that its sensitivity and specificity are both relatively outstanding. Moreover, by simply changing the classifier, the number of classes can be increased, giving the model a certain scalability.
Claims (9)
1. An unmanned plane policing system based on deep learning, characterised in that it includes:
a bottom-layer structure for acquiring pictures of the monitored region, specifically a visual sensor network composed of multiple clusters, each cluster comprising multiple nodes and each node comprising a camera array and an unmanned plane;
an intermediate-layer structure for recognizing the unmanned planes in the pictures gathered by each cluster of the bottom-layer structure; and
a top-layer structure for storing the recognition results of the intermediate-layer structure and scheduling the processing tasks of the intermediate-layer structure.
2. The unmanned plane policing system based on deep learning according to claim 1, characterised in that the unmanned plane in each node of the bottom-layer structure flies within the blind-angle range of the camera array of that node or over key areas within the coverage range.
3. The unmanned plane policing system based on deep learning according to claim 1 or claim 2, characterised in that the bottom-layer structure also includes preprocessing chips in one-to-one correspondence with the nodes, each preprocessing chip performing data preprocessing on the images gathered by the corresponding node and transmitting the preprocessed images to the intermediate-layer structure.
4. The unmanned plane policing system based on deep learning according to claim 1 or claim 2, characterised in that the intermediate-layer structure includes secondary processing servers whose number corresponds to the number of clusters in the visual sensor network, each secondary processing server recognizing the unmanned planes in the pictures gathered by one cluster.
5. The unmanned plane policing system based on deep learning according to claim 4, characterised in that the intermediate-layer structure also includes control servers that regulate and control the nodes, one end of a control server receiving control commands from the master server and the other end of the control server sending control commands to the corresponding node.
6. The unmanned plane policing system based on deep learning according to claim 1, characterised in that the intermediate-layer structure uses a convolutional neural network to train an identification model on the massive pictures gathered by each cluster, and recognizes, through the identification model, the unmanned planes in the pictures gathered by each cluster of the bottom-layer structure.
7. The unmanned plane policing system based on deep learning according to claim 6, characterised in that the method of training an identification model on the massive pictures gathered by each cluster using a convolutional neural network is: derive, in the forward pass, the output and the loss of the convolutional neural network from the pictures gathered by each cluster; call backpropagation, and during backpropagation compute the gradient from the loss; bring the gradient into the weight-update calculation and then carry out the next forward pass; the identification model is obtained through repeated forward passes and backpropagation.
8. The unmanned plane policing system based on deep learning according to claim 7, characterised in that when the identification model is trained on the massive pictures gathered by each cluster using a convolutional neural network, a rectified linear unit is used as the activation of the neurons.
9. The unmanned plane policing system based on deep learning according to claim 8, characterised in that the loss layer type of the convolutional neural network is Softmax.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610894675.4A CN107038450A (en) | 2016-10-13 | 2016-10-13 | Unmanned plane policing system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610894675.4A CN107038450A (en) | 2016-10-13 | 2016-10-13 | Unmanned plane policing system based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107038450A true CN107038450A (en) | 2017-08-11 |
Family
ID=59533137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610894675.4A Pending CN107038450A (en) | 2016-10-13 | 2016-10-13 | Unmanned plane policing system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107038450A (en) |
2016-10-13: Application CN201610894675.4A filed in China; published as CN107038450A; legal status: pending.
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103617691A (en) * | 2013-11-11 | 2014-03-05 | 成都市晶林电子技术有限公司 | Forest fire early warning monitoring center |
CN203535753U (en) * | 2013-11-11 | 2014-04-09 | 成都市晶林电子技术有限公司 | Multilayer forest fireproof monitoring center |
CN103826251A (en) * | 2013-12-17 | 2014-05-28 | 西北工业大学 | Mobile element and clustering mixed sensor network data collection method |
CN105205448A (en) * | 2015-08-11 | 2015-12-30 | 中国科学院自动化研究所 | Character recognition model training method based on deep learning and recognition method thereof |
CN105760835A (en) * | 2016-02-17 | 2016-07-13 | 天津中科智能识别产业技术研究院有限公司 | Gait segmentation and gait recognition integrated method based on deep learning |
Non-Patent Citations (2)
Title |
---|
Wang Qiang: "Design of a target tracking system based on a visual sensor network", Modern Electronics Technique *
Chen Shaohua: "Research on key technologies of a wireless sensor network monitoring system for power towers", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108345873A (en) * | 2018-03-22 | 2018-07-31 | 哈尔滨工业大学 | A kind of multiple degrees of freedom body motion information analytic method based on multilayer convolutional neural networks |
CN108692709A (en) * | 2018-04-26 | 2018-10-23 | 济南浪潮高新科技投资发展有限公司 | A kind of farmland the condition of a disaster detection method, system, unmanned plane and cloud server |
CN110262529A (en) * | 2019-06-13 | 2019-09-20 | 桂林电子科技大学 | A kind of monitoring unmanned method and system based on convolutional neural networks |
CN110262529B (en) * | 2019-06-13 | 2022-06-03 | 桂林电子科技大学 | Unmanned aerial vehicle monitoring method and system based on convolutional neural network |
CN113133105A (en) * | 2019-12-31 | 2021-07-16 | 丽水青达科技合伙企业(有限合伙) | Unmanned aerial vehicle data collection method based on deep reinforcement learning |
CN113133105B (en) * | 2019-12-31 | 2022-07-12 | 丽水青达科技合伙企业(有限合伙) | Unmanned aerial vehicle data collection method based on deep reinforcement learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110321603B (en) | Depth calculation model for gas path fault diagnosis of aircraft engine | |
CN107818302A (en) | Non-rigid multi-scale object detection method based on convolutional neural network | |
CN108021947B (en) | A kind of layering extreme learning machine target identification method of view-based access control model | |
CN107038450A (en) | Unmanned plane policing system based on deep learning | |
CN110587606A (en) | Open scene-oriented multi-robot autonomous collaborative search and rescue method | |
CN108053052B (en) | A kind of oil truck oil and gas leakage speed intelligent monitor system | |
Suryo et al. | Improved time series prediction using LSTM neural network for smart agriculture application | |
CN112766496B (en) | Deep learning model safety guarantee compression method and device based on reinforcement learning | |
CN106656357A (en) | System and method of evaluating state of power frequency communication channel | |
CN115545334B (en) | Land utilization type prediction method and device, electronic equipment and storage medium | |
CN116029604B (en) | Cage-raised meat duck breeding environment regulation and control method based on health comfort level | |
CN118378054B (en) | Real-time reliability assessment system and method for submarine-launched unmanned aerial vehicle | |
KR102619200B1 (en) | Method and computer program for creating a neural network model that automatically controls environmental facilities based on artificial intelligence | |
CN114584406A (en) | Industrial big data privacy protection system and method for federated learning | |
CN114048546A (en) | Graph convolution network and unsupervised domain self-adaptive prediction method for residual service life of aircraft engine | |
CN112560948A (en) | Eye fundus map classification method and imaging method under data deviation | |
CN112949391A (en) | Intelligent security inspection method based on deep learning harmonic signal analysis | |
CN112272074A (en) | Information transmission rate control method and system based on neural network | |
CN112784915A (en) | Image classification method for enhancing robustness of deep neural network by optimizing decision boundary | |
CN112381213A (en) | Industrial equipment residual life prediction method based on bidirectional long-term and short-term memory network | |
CN118052300A (en) | Air quality numerical model and statistical model fusion method based on machine learning | |
CN114154612A (en) | Intelligent agent behavior model construction method based on causal relationship inference | |
CN117894389A (en) | SSA-optimized VMD and LSTM-based prediction method for concentration data of dissolved gas in transformer oil | |
CN112560252A (en) | Prediction method for residual life of aircraft engine | |
CN117275222A (en) | Traffic flow prediction method integrating one-dimensional convolution and attribute enhancement units |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170811 |