CN108520273A - Fast detection and recognition method for densely distributed small commodities based on object detection - Google Patents
Fast detection and recognition method for densely distributed small commodities based on object detection
- Publication number
- CN108520273A CN108520273A CN201810253095.6A CN201810253095A CN108520273A CN 108520273 A CN108520273 A CN 108520273A CN 201810253095 A CN201810253095 A CN 201810253095A CN 108520273 A CN108520273 A CN 108520273A
- Authority
- CN
- China
- Prior art keywords
- convolutional neural
- model
- target
- neural networks
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
A fast detection and recognition method for densely distributed small commodities based on object detection: the collected commodity images are processed with Python and MATLAB scripts; a convolutional neural network model is built, comprising convolutional layers, pooling layers and fully connected layers, the model including deformable convolution and RoI pooling; the classification and bounding-box regression of the convolutional neural network model and the region proposal network are jointly trained on the training samples in a multi-task fashion, the convolution-kernel parameters are updated by back-propagation, and the hyper-parameters of the model are determined on the validation set, until the loss function reaches the target set point; finally, the recognition accuracy of the trained convolutional neural network model is evaluated on the test set. By counting the number and distribution of commodities efficiently and accurately, the invention can greatly improve the working efficiency of goods suppliers and store managers and save labour cost, and therefore has considerable commercial value.
Description
Technical field
The present invention relates to a fast detection and recognition method for densely distributed small commodities, and in particular to such a method based on object detection.
Background technology
With the widespread application of computer vision, research on detecting targets by computer image processing has received intensive attention from scholars at home and abroad. Object detection can be divided into two key subtasks, object classification and object localization: in an image containing one or more target objects, the category of each target object must be judged accurately and a corresponding bounding box given.
Traditional object detection algorithms have the following drawbacks: 1) candidate boxes are chosen by sliding windows, which produces redundant proposals and heavy computation; 2) region features are extracted with hand-designed templates, so the extracted features are low-level, overly influenced by human preconceptions, and insufficient to describe the target object completely.
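As a rough illustration of the redundancy of sliding-window candidate selection, the following sketch (function name and numbers are illustrative, not from the patent) counts how many candidate boxes a naive detector would have to score on a single image:

```python
def sliding_window_count(img_w, img_h, window_sizes, stride):
    """Count candidate boxes a naive sliding-window detector must score:
    one box per window position, repeated for every window size."""
    total = 0
    for (w, h) in window_sizes:
        nx = max(0, (img_w - w) // stride + 1)  # horizontal positions
        ny = max(0, (img_h - h) // stride + 1)  # vertical positions
        total += nx * ny
    return total

# Even a modest 640x480 image with two window sizes and a 16-pixel stride
# yields well over a thousand candidates, most of them redundant.
n_candidates = sliding_window_count(640, 480, [(64, 64), (128, 128)], 16)
```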
To overcome the limitation of hand-designed feature extractors and let machines automatically learn higher-level image features, Hinton et al. proposed deep learning in 2006 and pointed out that high-level image features can be composed from low-level ones. Thanks to their powerful modeling capacity and end-to-end automatic learning, deep convolutional neural networks can learn effective features from large amounts of data, avoiding the drawbacks of hand-crafted features. In recent years, deep neural networks such as VGGNet, GoogLeNet and ResNet have achieved great success as feature extractors in object recognition, but the adaptability of existing network models to geometric deformation of objects comes almost entirely from the diversity of the data itself; the models have no internal mechanism for adapting to geometric deformation. Detection and recognition of densely distributed small commodities under different geometric deformations is therefore poor. The root cause is that the convolution operation used in existing convolutional neural network models has a fixed geometric structure, and the convolutional network stacked from it is likewise geometrically fixed, so it lacks the ability to model geometric deformation. How to enhance the model's own ability to adapt to objects under geometric deformations such as different scales and poses is a key challenge in current object recognition, and the key to improving the detection efficiency and recognition accuracy of densely distributed small commodities.
Summary of the invention
The technical problem to be solved by the invention is to provide a fast detection and recognition method for densely distributed small commodities based on object detection that improves the recognition accuracy for this kind of densely distributed small target object.
The technical solution adopted by the invention is a fast detection and recognition method for densely distributed small commodities based on object detection, comprising the following steps:
1) Process the collected commodity images with Python and MATLAB scripts: name every image in a unified format; use 80% of the images as the training set and 20% as the test set, with 60% of the data serving as training samples and 20% as validation samples; manually annotate the target objects of each image in the training and test sets, and store the annotation information in an XML file corresponding to the image.
2) Build a convolutional neural network model comprising convolutional layers, pooling layers and fully connected layers; the model includes deformable convolution and RoI pooling.
3) Perform multi-task joint training of the classification and bounding-box regression of the convolutional neural network model and the region proposal network on the training samples; update the convolution-kernel parameters w in the convolutional neural network model by back-propagation, and determine the hyper-parameters of the model on the validation set, until the loss function reaches the target set point.
4) Evaluate the recognition accuracy of the trained convolutional neural network model on the test set.
The manual annotation described in step 1) consists of framing each target object in the image with a rectangular box, marking the coordinates of the upper-left and lower-right corners of the box, and indicating the category of the object.
In step 2):
The deformable convolution is:
y(m0) = Σ_{mn∈M} w(mn)·f(m0 + mn + Δmn)
The deformable RoI pooling is:
y(x, y) = Σ_{m∈bin(x,y)} f(m'0 + m + Δmxy)
In the formulas, w is the convolution-kernel parameter, m0 is the upper-left coordinate of the receptive field M of the convolution kernel, mn is the coordinate of a point in M, Δmn is the offset of each point coordinate of the kernel, f(·) denotes the feature map, whose value at a fractional position is obtained by bilinear interpolation over the integer positions p and q of the receptive field, m'0 is the upper-left coordinate of the pooling-kernel receptive field, m is the coordinate of a point in the pooling-kernel receptive field, bin(x, y) denotes the pooling-kernel region at point (x, y), and Δmxy is the offset of the points in the pooling-kernel region.
The hyper-parameters described in step 3) include: the learning rate, the regularization parameter, the number of layers of the neural network, the number of neurons in each hidden layer, the number of iterations, the mini-batch size, the encoding scheme of the output neurons, the choice of cost function, the weight-initialization method, the type of neuron activation function, and the scale of the data participating in model training.
The loss function described in step 3) is as follows:
L({pn}, {tn}) = (1/Ncls)·Σn L'cls(pn, pn*) + (1/Nreg)·Σn pn*·Lreg(tn, tn*)
L'cls(pn, pn*) = −β·pn*·(1−pn)^α·log(pn) − (1−β)·(1−pn*)·pn^α·log(1−pn)
Lreg(tn, tn*) = smoothL1(tn − tn*)
where L'cls denotes the classification loss, Lreg denotes the bounding-box regression loss, Ncls is the number of classification samples, Nreg is the number of regressed bounding boxes, p is the probability of being predicted as the target and pn is that probability for the n-th sample, pn* is the label information of the n-th sample, β ∈ (0,1) is a factor balancing positive and negative samples, (1−pn)^α restricts the model to focus more on hard-to-classify samples, α is a natural number that can be set according to the specific recognition task, tn is a vector of the predicted bounding-box coordinates, tn* is a vector of the manually annotated bounding-box coordinates, smoothL1 is the smooth L1 regression loss, and t denotes a bounding-box vector.
In the fast detection and recognition method for densely distributed small commodities based on object detection of the present invention, deformable convolution and deformable RoI pooling are used in the convolutional neural network model, and an improved loss function is used in the detection algorithm to train a model capable of detecting and recognizing small commodities. The invention uses commodity images captured in real scenes as training and test samples, with pre-processing operations such as mean subtraction and normalization. The convolutional neural network model of the invention adapts better to commodity targets under different geometric deformations, increasing the model's adaptability to targets of different scales and shapes. In addition, the classification loss improves on the original cross-entropy loss by reducing the influence of easily classified samples on the loss, so that the model learns densely distributed small targets more effectively. Taking commodities as targets, the method detects information such as brand and category by deep learning and counts their number and distribution efficiently and accurately, which greatly improves the working efficiency of goods suppliers and store managers, saves labour cost, and has considerable commercial value.
Description of the drawings
Fig. 1 is a flow chart of the fast detection and recognition method for densely distributed small commodities based on object detection of the present invention.
Detailed description of the embodiments
The fast detection and recognition method for densely distributed small commodities based on object detection of the present invention is described in detail below with reference to an embodiment and the accompanying drawing.
The method addresses two characteristics: target objects of the small-commodity kind are densely distributed in images, and existing convolutional neural network models adapt poorly to geometric deformation of target objects. By using deformable convolution and deformable RoI pooling, the method of the invention gains adaptability to commodity photographs under different geometric deformations.
As shown in Fig. 1, the fast detection and recognition method for densely distributed small commodities based on object detection of the present invention comprises the following steps:
1) Process the collected commodity images with Python and MATLAB scripts: name every image in a unified format; use 80% of the images as the training set and 20% as the test set, with 60% of the data serving as training samples and 20% as validation samples; manually annotate the target objects of each image in the training and test sets, and store the annotation information in an XML file corresponding to the image. The manual annotation consists of framing each target object in the image with a rectangular box, marking the coordinates of the upper-left and lower-right corners of the box, and indicating the category of the object.
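A minimal sketch of the split and annotation storage in step 1) might look as follows. The 60/20/20 proportions come from the text; the XML layout shown is the common Pascal-VOC style, which is an assumption, since the patent only says the annotations are stored in "an XML file corresponding to the image":

```python
import random
import xml.etree.ElementTree as ET

def split_dataset(image_names, seed=0):
    """60% training samples, 20% validation samples, 20% test set."""
    names = list(image_names)
    random.Random(seed).shuffle(names)
    n = len(names)
    return (names[: int(0.6 * n)],
            names[int(0.6 * n): int(0.8 * n)],
            names[int(0.8 * n):])

def annotation_xml(filename, boxes):
    """Build a VOC-style annotation string: each box is
    (label, xmin, ymin, xmax, ymax), i.e. the category plus the
    upper-left and lower-right corner coordinates."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    for label, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bb = ET.SubElement(obj, "bndbox")
        for tag, v in zip(("xmin", "ymin", "xmax", "ymax"),
                          (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(v)
    return ET.tostring(root, encoding="unicode")
```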
2) Build a convolutional neural network model comprising convolutional layers, pooling layers and fully connected layers; the model includes deformable convolution and deformable RoI pooling, where:
The deformable convolution is:
y(m0) = Σ_{mn∈M} w(mn)·f(m0 + mn + Δmn)
The deformable RoI pooling is:
y(x, y) = Σ_{m∈bin(x,y)} f(m'0 + m + Δmxy)
In the formulas, w is the convolution-kernel parameter, m0 is the upper-left coordinate of the receptive field M of the convolution kernel, mn is the coordinate of a point in M, Δmn is the offset of each point of the kernel, f(·) denotes the feature map, whose value at a fractional position is obtained by bilinear interpolation over the integer positions p and q of the receptive field, m'0 is the upper-left coordinate of the pooling-kernel receptive field, m is the coordinate of a point in the pooling-kernel receptive field, bin(x, y) denotes the pooling-kernel region at point (x, y), and Δmxy is the offset of the points in the pooling-kernel region.
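The deformable sampling rule in step 2) can be sketched as follows. This is an illustrative NumPy re-implementation for a single output location; the function names are ours, and real implementations predict the offsets with an extra convolutional layer and run over whole feature maps:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Value of the feature map at a fractional position (y, x),
    interpolated over the surrounding integer positions."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for yy in (y0, y0 + 1):
        for xx in (x0, x0 + 1):
            if 0 <= yy < h and 0 <= xx < w:
                val += (max(0.0, 1 - abs(y - yy)) *
                        max(0.0, 1 - abs(x - xx)) * feat[yy, xx])
    return val

def deformable_conv_at(feat, kernel, m0, offsets):
    """y(m0) = sum over kernel points mn of w(mn) * f(m0 + mn + dmn),
    with m0 the upper-left corner of the receptive field and offsets
    the per-point (dy, dx) displacements dmn."""
    out = 0.0
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            dy, dx = offsets[i, j]
            out += kernel[i, j] * bilinear_sample(feat, m0[0] + i + dy,
                                                  m0[1] + j + dx)
    return out
```

With all offsets Δmn set to zero the operation reduces to an ordinary convolution; learned non-zero offsets let the receptive field deform to follow the object, which is the property the invention relies on.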
3) Perform multi-task joint training of the classification and bounding-box regression of the convolutional neural network model and the region proposal network on the training samples; update the convolution-kernel parameters w in the convolutional neural network model by back-propagation, and determine the hyper-parameters of the model on the validation set, until the loss function reaches the target set point. The hyper-parameters include: the learning rate, the regularization parameter, the number of layers of the neural network, the number of neurons in each hidden layer, the number of training rounds, the mini-batch size, the encoding scheme of the output neurons, the choice of cost function, the weight-initialization method, the type of neuron activation function, and the scale of the data participating in model training.
The loss function is as follows:
L({pn}, {tn}) = (1/Ncls)·Σn L'cls(pn, pn*) + (1/Nreg)·Σn pn*·Lreg(tn, tn*)
L'cls(pn, pn*) = −β·pn*·(1−pn)^α·log(pn) − (1−β)·(1−pn*)·pn^α·log(1−pn)
Lreg(tn, tn*) = smoothL1(tn − tn*)
where Ncls is the number of classification samples, Nreg is the number of regressed bounding boxes, p is the probability of being predicted as the target and pn is that probability for the n-th sample, pn* is the label information of the n-th sample, β ∈ (0,1) is a factor balancing positive and negative samples, (1−pn)^α restricts the model to focus more on hard-to-classify samples, α is a natural number that can be set according to the specific recognition task, tn is a vector of the predicted bounding-box coordinates, tn* is a vector of the manually annotated bounding-box coordinates, smoothL1 is the smooth L1 regression loss, and t denotes a bounding-box vector.
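A small numerical sketch of a focal-style classification loss combined with a smooth-L1 regression term, in the spirit of the loss described above (the equal weighting between the two terms and the example values of α and β are our assumptions):

```python
import math

def cls_loss(p_n, label, alpha=2, beta=0.25):
    """Classification term: (1 - p)^alpha down-weights easy samples,
    beta balances positive against negative samples."""
    if label == 1:
        return -beta * (1 - p_n) ** alpha * math.log(p_n)
    return -(1 - beta) * p_n ** alpha * math.log(1 - p_n)

def smooth_l1(d):
    """Smooth L1 loss on one coordinate difference d."""
    return 0.5 * d * d if abs(d) < 1 else abs(d) - 0.5

def total_loss(probs, labels, boxes_pred, boxes_gt):
    """Average classification loss plus box regression averaged
    over positive samples only (label 1)."""
    n_cls = len(probs)
    n_reg = max(1, sum(labels))
    cls = sum(cls_loss(p, y) for p, y in zip(probs, labels)) / n_cls
    reg = sum(y * sum(smooth_l1(a - b) for a, b in zip(tp, tg))
              for y, tp, tg in zip(labels, boxes_pred, boxes_gt)) / n_reg
    return cls + reg
```

An easy positive (p = 0.9) contributes almost nothing, while a hard positive (p = 0.1) dominates, which is exactly the "focus on hard-to-classify samples" behaviour the text attributes to the (1−pn)^α factor.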
4) Evaluate the recognition accuracy of the trained convolutional neural network model on the test set.
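Step 4) can be sketched as an IoU-based matching of predicted boxes against the manual annotations of the test set. The 0.5 IoU threshold and the simple per-box recall metric are our illustrative choices; the patent does not fix the evaluation protocol:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (xmin, ymin, xmax, ymax)."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recognition_accuracy(predictions, ground_truth, iou_thr=0.5):
    """Fraction of annotated boxes matched by a same-category prediction
    whose IoU with the annotation reaches the threshold."""
    if not ground_truth:
        return 0.0
    hits = sum(
        1 for cls, box in ground_truth
        if any(c == cls and iou(box, b) >= iou_thr for c, b in predictions)
    )
    return hits / len(ground_truth)
```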
Claims (5)
1. A fast detection and recognition method for densely distributed small commodities based on object detection, characterized by comprising the following steps:
1) processing the collected commodity images with Python and MATLAB scripts: naming every image in a unified format; using 80% of the images as a training set and 20% as a test set, with 60% of the data as training samples and 20% as validation samples; manually annotating the target objects of each image in the training and test sets, and storing the annotation information in an XML file corresponding to the image;
2) building a convolutional neural network model comprising convolutional layers, pooling layers and fully connected layers, the model including deformable convolution and RoI pooling;
3) performing multi-task joint training of the classification and bounding-box regression of the convolutional neural network model and the region proposal network on the training samples, updating the convolution-kernel parameters w in the convolutional neural network model by back-propagation, and determining the hyper-parameters of the model on the validation set, until the loss function reaches the target set point;
4) evaluating the recognition accuracy of the trained convolutional neural network model on the test set.
2. The fast detection and recognition method for densely distributed small commodities based on object detection according to claim 1, characterized in that the manual annotation described in step 1) consists of framing each target object in the image with a rectangular box, marking the coordinates of the upper-left and lower-right corners of the box, and indicating the category of the object.
3. The fast detection and recognition method for densely distributed small commodities based on object detection according to claim 1, characterized in that, in step 2):
the deformable convolution is:
y(m0) = Σ_{mn∈M} w(mn)·f(m0 + mn + Δmn)
the deformable RoI pooling is:
y(x, y) = Σ_{m∈bin(x,y)} f(m'0 + m + Δmxy)
where w is the convolution-kernel parameter, m0 is the upper-left coordinate of the receptive field M of the convolution kernel, mn is the coordinate of a point in M, Δmn is the offset of each point coordinate of the kernel, f(·) denotes the feature map, whose value at a fractional position is obtained by bilinear interpolation over the integer positions p and q of the receptive field, m'0 is the upper-left coordinate of the pooling-kernel receptive field, m is the coordinate of a point in the pooling-kernel receptive field, bin(x, y) denotes the pooling-kernel region at point (x, y), and Δmxy is the offset of the points in the pooling-kernel region.
4. The fast detection and recognition method for densely distributed small commodities based on object detection according to claim 1, characterized in that the hyper-parameters described in step 3) include: the learning rate, the regularization parameter, the number of layers of the neural network, the number of neurons in each hidden layer, the number of iterations, the mini-batch size, the encoding scheme of the output neurons, the choice of cost function, the weight-initialization method, the type of neuron activation function, and the scale of the data participating in model training.
5. The fast detection and recognition method for densely distributed small commodities based on object detection according to claim 1, characterized in that the loss function described in step 3) is as follows:
L({pn}, {tn}) = (1/Ncls)·Σn L'cls(pn, pn*) + (1/Nreg)·Σn pn*·Lreg(tn, tn*)
L'cls(pn, pn*) = −β·pn*·(1−pn)^α·log(pn) − (1−β)·(1−pn*)·pn^α·log(1−pn)
Lreg(tn, tn*) = smoothL1(tn − tn*)
where L'cls denotes the classification loss, Lreg denotes the bounding-box regression loss, Ncls is the number of classification samples, Nreg is the number of regressed bounding boxes, p is the probability of being predicted as the target and pn is that probability for the n-th sample, pn* is the label information of the n-th sample, β ∈ (0,1) is a factor balancing positive and negative samples, (1−pn)^α restricts the model to focus more on hard-to-classify samples, α is a natural number that can be set according to the specific recognition task, tn is a vector of the predicted bounding-box coordinates, tn* is a vector of the manually annotated bounding-box coordinates, smoothL1 is the smooth L1 regression loss, and t denotes a bounding-box vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810253095.6A CN108520273A (en) | 2018-03-26 | 2018-03-26 | A kind of quick detection recognition method of dense small item based on target detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108520273A true CN108520273A (en) | 2018-09-11 |
Family
ID=63434197
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810253095.6A Pending CN108520273A (en) | 2018-03-26 | 2018-03-26 | A kind of quick detection recognition method of dense small item based on target detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108520273A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170169315A1 (en) * | 2015-12-15 | 2017-06-15 | Sighthound, Inc. | Deeply learned convolutional neural networks (cnns) for object localization and classification |
CN106599827A (en) * | 2016-12-09 | 2017-04-26 | 浙江工商大学 | Small target rapid detection method based on deep convolution neural network |
CN106845430A (en) * | 2017-02-06 | 2017-06-13 | 东华大学 | Pedestrian detection and tracking based on acceleration region convolutional neural networks |
CN107451602A (en) * | 2017-07-06 | 2017-12-08 | 浙江工业大学 | A kind of fruits and vegetables detection method based on deep learning |
CN107808167A (en) * | 2017-10-27 | 2018-03-16 | 深圳市唯特视科技有限公司 | A kind of method that complete convolutional network based on deformable segment carries out target detection |
Non-Patent Citations (2)
Title |
---|
Jifeng Dai et al.: "Deformable Convolutional Networks", 2017 IEEE International Conference on Computer Vision |
Tsung-Yi Lin et al.: "Focal Loss for Dense Object Detection", 2017 IEEE International Conference on Computer Vision |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109376605A (en) * | 2018-09-26 | 2019-02-22 | 福州大学 | A kind of electric inspection process image bird-resistant fault detection method |
CN109492636A (en) * | 2018-09-30 | 2019-03-19 | 浙江工业大学 | Object detection method based on adaptive receptive field deep learning |
CN109492636B (en) * | 2018-09-30 | 2021-08-03 | 浙江工业大学 | Target detection method based on adaptive receptive field deep learning |
CN109558792B (en) * | 2018-10-11 | 2023-10-13 | 深圳市网联安瑞网络科技有限公司 | Method and system for detecting internet logo content based on samples and features |
CN109558792A (en) * | 2018-10-11 | 2019-04-02 | 成都三零凯天通信实业有限公司 | Method and system for detecting Internet logo content based on samples and features |
CN110378361A (en) * | 2018-11-23 | 2019-10-25 | 北京京东尚科信息技术有限公司 | A kind of method and apparatus for Articles detecting of intensively taking |
CN111222382A (en) * | 2018-11-27 | 2020-06-02 | 北京京东尚科信息技术有限公司 | Commodity settlement method, commodity settlement device, commodity settlement medium and electronic equipment based on images |
CN111241893B (en) * | 2018-11-29 | 2023-06-16 | 阿里巴巴集团控股有限公司 | Identification recognition method, device and system |
CN111241893A (en) * | 2018-11-29 | 2020-06-05 | 阿里巴巴集团控股有限公司 | Identification recognition method, device and system |
CN111259710B (en) * | 2018-12-03 | 2022-06-10 | 魔门塔(苏州)科技有限公司 | Parking space structure detection model training method adopting parking space frame lines and end points |
CN111259710A (en) * | 2018-12-03 | 2020-06-09 | 初速度(苏州)科技有限公司 | Parking space structure detection model training method adopting parking space frame lines and end points |
CN109949359A (en) * | 2019-02-14 | 2019-06-28 | 深兰科技(上海)有限公司 | A kind of method and apparatus carrying out target detection based on SSD model |
CN111768553A (en) * | 2019-04-02 | 2020-10-13 | 珠海格力电器股份有限公司 | Vending method of automatic vending cabinet and automatic vending cabinet |
CN110647906A (en) * | 2019-08-02 | 2020-01-03 | 杭州电子科技大学 | Clothing target detection method based on fast R-CNN method |
CN110674850A (en) * | 2019-09-03 | 2020-01-10 | 武汉大学 | Image description generation method based on attention mechanism |
CN110929668A (en) * | 2019-11-29 | 2020-03-27 | 珠海大横琴科技发展有限公司 | Commodity detection method and device based on unmanned goods shelf |
CN110826647A (en) * | 2019-12-09 | 2020-02-21 | 国网智能科技股份有限公司 | Method and system for automatically detecting foreign matter appearance of power equipment |
CN111931877A (en) * | 2020-10-12 | 2020-11-13 | 腾讯科技(深圳)有限公司 | Target detection method, device, equipment and storage medium |
CN112365324A (en) * | 2020-12-02 | 2021-02-12 | 杭州微洱网络科技有限公司 | Commodity picture detection method suitable for E-commerce platform |
CN112508132A (en) * | 2021-01-29 | 2021-03-16 | 广州市玄武无线科技股份有限公司 | Training method and device for identifying SKU |
CN112508132B (en) * | 2021-01-29 | 2021-08-03 | 广州市玄武无线科技股份有限公司 | Training method and device for identifying SKU |
CN114266887A (en) * | 2021-12-27 | 2022-04-01 | 浙江工业大学 | Large-scale trademark detection method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180911 |