CN106845430A - Pedestrian detection and tracking based on acceleration region convolutional neural networks - Google Patents
Pedestrian detection and tracking based on acceleration region convolutional neural networks
- Publication number
- CN106845430A CN106845430A CN201710066312.6A CN201710066312A CN106845430A CN 106845430 A CN106845430 A CN 106845430A CN 201710066312 A CN201710066312 A CN 201710066312A CN 106845430 A CN106845430 A CN 106845430A
- Authority
- CN
- China
- Prior art keywords
- convolutional neural
- neural networks
- pedestrian
- region
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a pedestrian recognition and tracking method based on an acceleration region convolutional neural network. First, training and test data sets are collected at night by a robot equipped with an infrared camera, and the training and test data sets are preprocessed as required; then the ground-truth target positions in all training and test pictures are annotated and recorded in sample files. Next, an acceleration region convolutional neural network is built and trained with the training data set, and a non-maximum suppression algorithm is applied to the network output to obtain the final probability of belonging to a pedestrian region and the region's bounding box. The accuracy of the network is tested with the test data set until a satisfactory network model is obtained. Pictures collected by the robot at night are then input into the acceleration region convolutional neural network model, which outputs online and in real time the probability of belonging to a pedestrian region and the region's bounding box. The present invention can efficiently identify pedestrians in infrared images and can track pedestrian targets in infrared video in real time.
Description
Technical field
The present invention relates to a night-time robot pedestrian detection and tracking method based on an acceleration region convolutional neural network. The method belongs to the field of infrared night-vision image processing; with this method, a robot can detect and track pedestrians in real time at night.
Background technology
With the rapid development of robot technology and infrared imaging technology, the fields in which the two are combined are increasingly broad. For example, robots can be used at night for pedestrian detection and tracking, achieving reconnaissance and surveillance effects. As a higher-level realization of robotics, unmanned driving systems likewise take pedestrians as the main object of detection when running at night. However, infrared images are inherently grayscale: they carry no color information, have few texture details, and have a low signal-to-noise ratio, so pedestrian detection and tracking in infrared images is a very active research field.
In pedestrian tracking research, Yasuno et al. (M. Yasuno, S. Ryousuke, N. Yasuda. Pedestrian Detection and Tracking in Far Infrared Images [C]. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, 2005: 182-187.) tracked the position of the head by template matching within the tracking region. Dai et al. (X. Dai, F. Zheng, X. Liu. Layered representation for pedestrian detection and tracking in infrared imagery [J]. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, 3(1): 13-18.) argued that the large deformation of the human limbs during motion degrades tracking performance; to remove the influence of the limbs, they tracked only the head and torso. The infrared pedestrian tracking algorithms proposed so far all track one or several particular body parts rather than the whole pedestrian.
For a long time, the most popular pedestrian detection methods have been based on hand-crafted pedestrian features and machine learning. Wang Lei (Wang Lei. Research on pedestrian detection algorithms in infrared images [D]. Hefei University of Technology, 2015: 26-44.) first extracts features from positive and negative samples (pictures containing pedestrians and pictures not containing pedestrians, respectively) and trains a classifier; a sliding window then traverses the whole image, and the trained classifier discriminates each window as pedestrian or non-pedestrian, achieving pedestrian detection. Although this method can obtain fairly good detection results, it traverses the entire image with multi-scale sliding windows, generating a huge number of detection windows and extracting features from every one of them in turn, which makes the amount of computation increase sharply and the speed extremely slow.
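The multi-scale sliding-window traversal described above can be sketched as follows. The window sizes, scales, and stride here are illustrative assumptions (the cited work does not fix them); the sketch only counts windows, which is enough to show why the computation explodes when every window needs its own feature extraction and classification.

```python
def sliding_windows(img_w, img_h, win_w, win_h, stride):
    """Yield every (x, y, w, h) window position over an img_w x img_h image."""
    for y in range(0, img_h - win_h + 1, stride):
        for x in range(0, img_w - win_w + 1, stride):
            yield (x, y, win_w, win_h)

def count_windows(img_w, img_h, scales=(1.0, 1.5, 2.0),
                  base=(64, 128), stride=8):
    """Count the detection windows a multi-scale sliding-window detector
    would have to classify; each one would need separate feature
    extraction, which is what makes this approach slow."""
    total = 0
    for s in scales:
        w, h = int(base[0] * s), int(base[1] * s)
        total += sum(1 for _ in sliding_windows(img_w, img_h, w, h, stride))
    return total
```

For one 720*576 frame (the picture size used later in this document), these assumed settings already yield more than ten thousand windows per frame.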
In recent years, deep convolutional neural networks have developed rapidly and achieved great success in applications such as image classification, natural language processing, and object detection. Their advantage lies in extracting image features and classifying them. To give full play to this advantage, Girshick et al. (GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation [C]. IEEE Conference on Computer Vision and Pattern Recognition, 2014: 580-587.) proposed the region convolutional neural network (R-CNN) framework, which converts the object detection problem in images into a classification problem and achieves good detection results. The basic idea of the method is to first extract several candidate target rectangular regions in the picture, then extract target features from each candidate region with a deep convolutional network, and finally train a classifier with a support vector machine to classify the candidate target regions. According to the classification score of each region, the final object boundary is then optimized with a non-maximum suppression algorithm. However, the candidate regions are no longer obtained with the earlier multi-scale sliding windows; instead, a selective search algorithm based on hierarchical grouping and multiple similarity measures generates about 2000 multi-scale candidate boxes.
In R-CNN, the convolutional network that extracts features and the classifier must be trained separately, which makes the training process consume a great deal of time and storage space; moreover, the training of the classifier is decoupled from the feature-extraction network, which is also unreasonable and hurts detection accuracy. Girshick (R. Girshick. Fast R-CNN. IEEE International Conference on Computer Vision, 2015.) therefore proposed the fast region convolutional neural network (Fast R-CNN) model, which merges feature extraction and classification into one model, improving both training speed and detection accuracy. Although Fast R-CNN is an improvement, generating candidate regions separately with the selective search algorithm is very time-consuming, which is the fatal reason the algorithm cannot reach real time.
Summary of the invention
The technical problem to be solved by the present invention is how to realize real-time pedestrian detection and tracking with a robot at night. For pedestrian tracking, if the recognition rate of the pedestrian detection algorithm is high, then all pedestrians in every frame of the infrared video can be detected and the position of the whole pedestrian, rather than a part of the body, can be given. In addition, if the pedestrian detection algorithm runs in real time, pedestrian tracking can be realized easily. The key of the present invention is therefore how to realize pedestrian detection with a high recognition rate in real time; once such efficient pedestrian detection is realized, pedestrian tracking follows naturally.
In order to solve the above technical problem, the technical scheme of the present invention is to provide a pedestrian detection and tracking method based on an acceleration region convolutional neural network, characterized by comprising the following steps:

Step 1: Collect two groups of infrared pictures at night with a robot equipped with an infrared camera; one group of infrared pictures serves as the training data set and the other group as the test data set. Name all pictures of the training and test data sets according to the prescribed convention, and make picture-name lists for the two sets.

Step 2: Annotate the ground-truth target positions in all pictures of the training and test data sets, i.e. mark every pedestrian target in every picture with a box, and record in a sample file the number of pedestrians in each picture together with the 4 coordinates (top-left and bottom-right) of each pedestrian's bounding box.

Step 3: Build the acceleration region convolutional neural network and train it with the pictures and sample files of the training data set. The acceleration region convolutional neural network comprises a region proposal network for extracting candidate regions and a convolutional neural network for pedestrian detection: the region proposal network selects several candidate regions, these candidate regions are input to the convolutional neural network, and the convolutional neural network outputs the score of each candidate region being a pedestrian together with the coordinates of its refined bounding box. A non-maximum suppression algorithm is applied to the output of the convolutional neural network to obtain the final probability of belonging to a pedestrian region and the region's bounding box.

Step 4: Test the acceleration region convolutional neural network trained in step 3 with the pictures and sample files of the test data set; if the error requirement is not met, return to step 3 for retraining until it is met, obtaining an acceleration region convolutional neural network model that satisfies the precision requirement.

Step 5: Use the acceleration region convolutional neural network model established in step 4 for online real-time night-time robot pedestrian detection and tracking: pictures collected by the robot at night are input into the model, which outputs online and in real time the probability of belonging to a pedestrian region and the region's bounding box.
Preferably, the acceleration region convolutional neural network is a series of convolution, activation, pooling, and fully connected processes using the ZF architecture; the architecture comprises a region proposal network and a target recognition network, and the feature-map extraction parts of the region proposal network and the target recognition network use a parameter-sharing mechanism.
The present invention can be used by robots and unmanned vehicles to carry out real-time pedestrian detection and tracking through an infrared camera at night without light. The present invention applies the acceleration region convolutional neural network to real-time pedestrian detection and tracking in infrared video: no candidate regions need to be generated in advance by other methods and no pedestrian features need to be chosen by hand; through end-to-end training, an infrared picture is input directly and the pedestrian positions in the picture are output. The invention ensures the correctness and real-time performance of pedestrian detection and tracking in infrared video.

By using the acceleration region convolutional neural network, the method provided by the present invention needs neither separately generated candidate regions nor hand-picked pedestrian features; candidate-region generation is also realized by the convolutional network, achieving end-to-end operation. The method significantly accelerates pedestrian recognition and improves recognition correctness.
Brief description of the drawings
Fig. 1 is the flow chart of pedestrian recognition in night-vision images based on the acceleration region convolutional neural network;
Fig. 2 is the structure chart of the acceleration region convolutional neural network.
Specific embodiment
With reference to specific embodiment, the present invention is expanded on further.It should be understood that these embodiments are merely to illustrate the present invention
Rather than limitation the scope of the present invention.In addition, it is to be understood that after the content for having read instruction of the present invention, people in the art
Member can make various changes or modifications to the present invention, and these equivalent form of values equally fall within the application appended claims and limited
Scope.
A night-time robot pedestrian detection and tracking method based on an acceleration region convolutional neural network comprises the following steps:

Step 1: Build the night-vision-image training and test data sets. A laboratory robot equipped with an infrared camera collects experimental pictures on its own: 2000 infrared pictures serve as the training data set and 200 infrared pictures as the test data set, every picture being 720*576 in size. All pictures of the training and test data sets are renamed according to the convention, and picture-name lists of the two sets are made.
Step 2: Write an annotation program in Python and manually annotate the ground-truth target positions in all training and test pictures: every pedestrian target in every picture is marked with a box, and the number of pedestrians in each picture together with the 4 coordinates (top-left and bottom-right) of each pedestrian's bounding box are recorded in an .xml file.
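A minimal sketch of writing one such annotation file is shown below. The patent says the records go into .xml files but does not give the exact schema; the element names here assume the common Pascal-VOC-style layout and are illustrative only.

```python
import xml.etree.ElementTree as ET

def make_annotation_xml(filename, width, height, boxes):
    """Build a VOC-style annotation: one <object> per pedestrian, each
    holding the top-left/bottom-right coordinates of its bounding box."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for (xmin, ymin, xmax, ymax) in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = "pedestrian"
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")
```

The number of `<object>` elements gives the pedestrian count per picture, matching the "number of pedestrians plus 4 coordinates" record described above.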
Step 3: Build the acceleration region convolutional neural network and train it iteratively. The acceleration region convolutional neural network is trained with the training set prepared in steps 1 and 2; it comprises shared-parameter convolutional layers, a region proposal network, and a convolutional network. The shared-parameter convolutional layers extract the feature maps, which are fed simultaneously into the region proposal network and the convolutional network. The region proposal network learns to compute candidate regions, and these candidate regions are also input into the convolutional network. Finally, the convolutional network predicts and outputs the score of each candidate region being a pedestrian and its coordinates after bounding-box refinement (regression).
Fig. 1 is the flow chart of night-vision-image pedestrian recognition based on the acceleration region convolutional neural network. First, the real pedestrian positions in the infrared images are annotated and recorded in text files. Then the acceleration region convolutional neural network is built, and the training infrared pictures together with the ground-truth pedestrian-position file corresponding to each picture are fed into the constructed network for learning. After a certain number of learning iterations, the model parameters of the network are obtained. Given a test image, the acceleration region convolutional neural network then carries out pedestrian recognition on it according to the model parameters obtained from training, finally giving the bounding boxes of all pedestrians in the test night-vision image.
Fig. 2 is the structure chart of the acceleration region convolutional neural network, which mainly comprises three parts: the shared-parameter convolutional layers, the region proposal network, and the convolutional network. The shared-parameter convolutional layers extract the feature maps, which are fed simultaneously into the region proposal network and the convolutional network. The region proposal network learns to compute candidate regions, and these candidate regions are also input into the convolutional network. Finally, the convolutional network predicts the regions that may be pedestrian positions, and the output loss is computed against the actual pedestrian positions to update the network parameters.
The acceleration region convolutional neural network used in the present invention is a series of convolution, activation, pooling, and fully connected processes using the ZF architecture; the architecture comprises a region proposal network (RPN) and a target recognition network (Fast R-CNN), and the feature-map extraction convolutional layers of the RPN and Fast R-CNN networks use a parameter-sharing mechanism.
There are 5 convolutional layers used for feature-map extraction in the present invention. Assuming the convolutional-layer mapping is f with parameters θ, the mathematical expression of f is:

f(X; θ) = H_L, with H_l = pool(relu(W_l H_{l-1} + b_l)) and H_0 = X,

where H_l is the output of the hidden units at layer l, b_l is the bias of layer l, W_l is the weight of layer l, and b_l and W_l constitute the trainable parameters θ. pool() denotes the pooling operation, which integrates the feature points in a small neighborhood into a new feature so that the number of features and parameters is reduced; the pooling unit also has translation invariance. Pooling methods mainly include average pooling and max pooling, and the present invention mainly uses the max-pooling operation. relu() denotes a nonlinear transformation applied to the feature map that lets the wanted information pass and filters out the unwanted information. l is an integer not less than 1, and here L = 5. The last convolutional layer has 256 convolution kernels, so there are 256 feature maps, the feature dimension is 256, and each feature map is about 40*60 in size; these feature maps are input to the region proposal network and the target recognition convolutional network. The parameter configuration of the feature-extraction convolutional layers is shown in Table 1.
Table 1. Parameter configuration of the feature-extraction convolutional layers
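The five feature-extraction layers can be sanity-checked with a size trace. The kernel/stride/padding values below are the standard ZF ones, assumed here because Table 1 itself is not reproduced in this text:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a conv/pool layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def zf_feature_map(h, w):
    """Trace an input of height h and width w through a ZF-style 5-conv
    feature extractor and return the (h, w) of the final 256-channel
    feature map. Layer shapes are the standard ZF ones (an assumption)."""
    layers = [(7, 2, 3), (3, 2, 1),              # conv1 + max-pool
              (5, 2, 2), (3, 2, 1),              # conv2 + max-pool
              (3, 1, 1), (3, 1, 1), (3, 1, 1)]   # conv3-conv5, stride 1
    for k, s, p in layers:
        h, w = conv_out(h, k, s, p), conv_out(w, k, s, p)
    return h, w
```

For the 720*576 pictures used here, `zf_feature_map(576, 720)` gives (36, 45), consistent with the "about 40*60" feature-map size stated above.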
In the region proposal network, a 3*3 sliding window slides over the feature map. When the sliding window reaches each position, it predicts candidate regions of 3 scales (128, 256, 512) and 3 aspect ratios (1:1, 1:2, 2:1) on the input image, so each sliding position has 9 candidate regions and one image can generate about 20000 (40*60*9) candidate regions. Behind the convolutional layer, two fully connected branches are attached: a classification layer (cls-layer) that outputs 2 scores, used to judge whether a candidate region is target or background, and a bounding-box regression layer (reg-layer) that outputs 4 scores, used to fine-tune the border of the candidate region; thus, for the 9 candidate regions at one position, the fully connected layers finally output (2+4)*9 results. Although about 2000 candidate regions are selected by the region proposal network, the present invention screens out the top 300 by candidate-region score and inputs only those to the target recognition convolutional network, which accelerates the speed.
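The 3-scale x 3-ratio candidate-region (anchor) enumeration described above can be sketched as follows. The 16-pixel stride mapping feature-map cells back to image pixels is an assumption (the usual ZF value), not stated in the text, and the width/height-ratio convention is likewise illustrative.

```python
def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Enumerate the 9 anchors (3 scales x 3 aspect ratios) centred at
    every feature-map cell, returned as (cx, cy, w, h) in image pixels.
    Each anchor keeps area ~= scale**2 while varying its aspect ratio."""
    anchors = []
    for iy in range(feat_h):
        for ix in range(feat_w):
            cx = ix * stride + stride // 2
            cy = iy * stride + stride // 2
            for s in scales:
                for r in ratios:          # r treated as width/height
                    w = s * (r ** 0.5)
                    h = s / (r ** 0.5)
                    anchors.append((cx, cy, w, h))
    return anchors
```

On a 36x45 feature map this yields 36*45*9 anchors, matching the per-position count of 9 described above.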
The input candidate boxes go to the target recognition convolutional network, which uses the Fast R-CNN network: apart from the shared feature-extraction convolutional layers, it connects in sequence two fully connected layers of 4096 channels each with their activation layers, followed by a classification layer with 2 outputs and a bounding-box regression layer with 4 outputs together with a loss layer.
When training the region proposal network, a binary label is assigned to each candidate region. A positive label is assigned to two kinds of candidate regions: (1) the candidate region with the highest IoU (intersection over union) overlap with a certain ground-truth (GT) bounding box (possibly less than 0.7), and (2) candidate regions whose IoU overlap with any GT bounding box is greater than 0.7. One GT bounding box may assign positive labels to multiple candidate regions. A negative label is assigned to candidate regions whose IoU with all GT bounding boxes is below 0.3. Candidate regions that are neither positive nor negative have no effect on the training objective.
Like Fast R-CNN, training of the region proposal network also follows a multi-task loss, and the objective function is minimized. The loss function of one image is defined as:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)

where i is the index of a candidate region in a training mini-batch and p_i is the predicted probability that the i-th candidate region is a target. The ground-truth (GT) label p_i* is 1 if the candidate region is positive and 0 otherwise. t_i is a vector, i.e. t_i = (t_x, t_y, t_w, t_h), representing the 4 parameterized coordinates of the predicted bounding box, and t_i* is the coordinate vector of the GT bounding box corresponding to a positive candidate region, i.e. t_i* = (t_x*, t_y*, t_w*, t_h*). λ is a balancing weight, taken as 10 in the present invention; N_cls is the size of the mini-batch, i.e. 256; N_reg is the number of candidate regions, i.e. about 2400. The classification loss L_cls is the log loss over the two classes (target and non-target). The regression loss is computed as L_reg(t_i, t_i*) = R(t_i - t_i*), where R is the robust loss function (smooth L1), defined as:

smooth_L1(x) = 0.5 x^2 if |x| < 1; |x| - 0.5 otherwise.

The term p_i* L_reg means that only positive candidate regions (p_i* = 1) contribute a regression loss; in other cases (p_i* = 0) there is none.
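The regression loss can be sketched in a few lines; this is a generic smooth-L1 implementation applied per coordinate, not the patent's actual code.

```python
def smooth_l1(x):
    """Robust smooth-L1 loss: quadratic near zero, linear beyond |x| = 1,
    so large coordinate errors are not over-penalised."""
    x = abs(x)
    return 0.5 * x * x if x < 1.0 else x - 0.5

def reg_loss(t, t_star):
    """L_reg(t_i, t_i*): sum of smooth_l1 over the 4 parameterised
    coordinates; in training it is applied only to positive anchors
    (the p_i* factor), which is handled by the caller."""
    return sum(smooth_l1(a - b) for a, b in zip(t, t_star))
```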
For the regression, the present invention uses 4 parameterized coordinates:

t_x = (x - x_a)/w_a, t_y = (y - y_a)/h_a, t_w = log(w/w_a), t_h = log(h/h_a),
t_x* = (x* - x_a)/w_a, t_y* = (y* - y_a)/h_a, t_w* = log(w*/w_a), t_h* = log(h*/h_a),

where (t_x, t_y, t_w, t_h) represents the 4 parameterized coordinates of the predicted bounding box and (t_x*, t_y*, t_w*, t_h*) the 4 parameterized coordinates of the GT bounding box corresponding to a positive candidate region; these two vectors are used to compute the loss. x, y, w, h denote respectively the center coordinates (x, y), width, and height of the predicted bounding box; x_a, y_a, w_a, h_a the center coordinates (x_a, y_a), width, and height of the candidate-region bounding box; and x*, y*, w*, h* the center coordinates (x*, y*), width, and height of the GT bounding box. This can be understood as a regression from the candidate-region bounding box to the nearby GT bounding box.
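The coordinate parameterization above, together with its inverse (used at test time to recover a box from predicted offsets, a step implied rather than spelled out in the text), can be sketched as follows with boxes as center/width/height tuples:

```python
import math

def encode_box(box, anchor):
    """Parameterise a box (cx, cy, w, h) relative to an anchor, following
    the t_x, t_y, t_w, t_h equations above."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha,
            math.log(w / wa), math.log(h / ha))

def decode_box(t, anchor):
    """Invert the parameterisation: recover (cx, cy, w, h) from
    (t_x, t_y, t_w, t_h) and the anchor."""
    tx, ty, tw, th = t
    xa, ya, wa, ha = anchor
    return (tx * wa + xa, ty * ha + ya,
            wa * math.exp(tw), ha * math.exp(th))
```

Encoding an anchor against itself gives the zero vector, which is why a perfect anchor incurs no regression loss.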
The above is the loss function of the region proposal network; the target recognition convolutional network still uses the loss function of Fast R-CNN itself. When training the whole network, the present invention adopts alternating training, i.e.:

(1) The region proposal network described above is initialized with an ImageNet-pretrained network model, and the region-proposal-network parameters are fine-tuned end to end for candidate-box extraction; this stage iterates 80000 times.

(2) Using the candidate regions generated in the first step, a separate detection network is trained by Fast R-CNN; the Fast R-CNN detection network is likewise initialized with an ImageNet-pretrained model. At this point the two networks do not yet share convolutional layers. This stage iterates 40000 times.

(3) The region proposal network is trained again with the detection network Fast R-CNN, but the shared convolutional layers are fixed and only the layers exclusive to the region proposal network are fine-tuned; now the two networks share convolutional layers. This stage iterates 80000 times.

(4) Keeping the shared convolutional layers fixed, the other layers of Fast R-CNN are fine-tuned. In this way, the two networks share the same convolutional layers and constitute a unified network. This stage iterates 40000 times.

Through the above iterative learning, the network parameters are obtained.
With the model parameters trained above, inputting one infrared picture outputs the probabilities of the 300 candidate regions being targets and their boundary coordinates; the non-maximum suppression algorithm is then used to obtain the final probability of belonging to a pedestrian region and the region's bounding box.
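A minimal sketch of the greedy non-maximum suppression step described above; the IoU threshold of 0.5 is an illustrative assumption, as the text does not specify it.

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes:
    keep the highest-scoring box, drop all boxes overlapping it above
    iou_thresh, and repeat on the remainder. Returns kept indices."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Applied to the 300 scored candidate boxes, this collapses overlapping detections of the same pedestrian into the single highest-probability box per region.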
Step 4: Test the acceleration region convolutional neural network trained in step 3 with the pictures and sample files of the test data set; the error requirement is met, and an acceleration region convolutional neural network model satisfying the precision requirement is obtained.

Step 5: The acceleration region convolutional neural network model established in step 4 is used for online real-time night-time robot pedestrian detection and tracking: pictures collected by the robot at night are input into the model, which outputs online and in real time the probability of belonging to a pedestrian region and the region's bounding box.
Experiments show that the acceleration region convolutional neural network used in the present invention has a very good effect on pedestrian recognition in night-vision images, with a high recognition rate and good real-time performance.
Claims (2)
1. A pedestrian detection and tracking method based on an acceleration region convolutional neural network, characterized by comprising the following steps:

Step 1: Collect two groups of infrared pictures at night with a robot equipped with an infrared camera; one group of infrared pictures serves as the training data set and the other group as the test data set; name all pictures of the training and test data sets according to the prescribed convention, and make picture-name lists of the two sets.

Step 2: Annotate the ground-truth target positions in all pictures of the training and test data sets: mark every pedestrian target in every picture with a box, and record in a sample file the number of pedestrians in each picture and the 4 coordinates (top-left and bottom-right) of each pedestrian's bounding box.

Step 3: Build the acceleration region convolutional neural network and train it with the pictures and sample files of the training data set; the acceleration region convolutional neural network comprises a region proposal network for extracting candidate regions and a convolutional neural network for pedestrian detection: the region proposal network selects several candidate regions, these candidate regions are input to the convolutional neural network, and the convolutional neural network outputs the score of each candidate region being a pedestrian and the coordinates of its refined bounding box; a non-maximum suppression algorithm is applied to the output of the convolutional neural network to obtain the final probability of belonging to a pedestrian region and the region's bounding box.

Step 4: Test the acceleration region convolutional neural network trained in step 3 with the pictures and sample files of the test data set; if the error requirement is not met, return to step 3 for retraining until it is met, obtaining an acceleration region convolutional neural network model satisfying the precision requirement.

Step 5: Use the acceleration region convolutional neural network model established in step 4 for online real-time night-time robot pedestrian detection and tracking: pictures collected by the robot at night are input into the acceleration region convolutional neural network model, which outputs online and in real time the probability of belonging to a pedestrian region and the region's bounding box.
2. The pedestrian detection and tracking method based on an acceleration region convolutional neural network of claim 1, characterized in that: the acceleration region convolutional neural network is a series of convolution, activation, pooling, and fully connected processes using the ZF architecture; the architecture comprises a region proposal network and a target recognition network, and the feature-map extraction parts of the region proposal network and the target recognition network use a parameter-sharing mechanism.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710066312.6A CN106845430A (en) | 2017-02-06 | 2017-02-06 | Pedestrian detection and tracking based on acceleration region convolutional neural networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710066312.6A CN106845430A (en) | 2017-02-06 | 2017-02-06 | Pedestrian detection and tracking based on acceleration region convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106845430A true CN106845430A (en) | 2017-06-13 |
Family
ID=59122050
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710066312.6A Pending CN106845430A (en) | 2017-02-06 | 2017-02-06 | Pedestrian detection and tracking based on acceleration region convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106845430A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104063719A (en) * | 2014-06-27 | 2014-09-24 | 深圳市赛为智能股份有限公司 | Method and device for pedestrian detection based on depth convolutional network |
CN104166861A (en) * | 2014-08-11 | 2014-11-26 | 叶茂 | Pedestrian detection method |
CN104217225A (en) * | 2014-09-02 | 2014-12-17 | 中国科学院自动化研究所 | A visual target detection and labeling method |
CN105184271A (en) * | 2015-09-18 | 2015-12-23 | 苏州派瑞雷尔智能科技有限公司 | Automatic vehicle detection method based on deep learning |
CN105654067A (en) * | 2016-02-02 | 2016-06-08 | 北京格灵深瞳信息技术有限公司 | Vehicle detection method and device |
CN106096561A (en) * | 2016-06-16 | 2016-11-09 | 重庆邮电大学 | Infrared pedestrian detection method based on image block degree of depth learning characteristic |
CN106127164A (en) * | 2016-06-29 | 2016-11-16 | 北京智芯原动科技有限公司 | The pedestrian detection method with convolutional neural networks and device is detected based on significance |
CN106250812A (en) * | 2016-07-15 | 2016-12-21 | 汤平 | A kind of model recognizing method based on quick R CNN deep neural network |
Cited By (75)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10296794B2 (en) | 2016-12-20 | 2019-05-21 | Jayant Rtti | On-demand artificial intelligence and roadway stewardship system |
CN107103283A (en) * | 2017-03-24 | 2017-08-29 | 中国科学院计算技术研究所 | A kind of SAR image Ship Target geometric properties parallel extraction method and device |
CN107220603A (en) * | 2017-05-18 | 2017-09-29 | 惠龙易通国际物流股份有限公司 | Vehicle checking method and device based on deep learning |
CN107330920A (en) * | 2017-06-28 | 2017-11-07 | 华中科技大学 | A kind of monitor video multi-target tracking method based on deep learning |
CN107330920B (en) * | 2017-06-28 | 2020-01-03 | 华中科技大学 | Monitoring video multi-target tracking method based on deep learning |
CN107292306A (en) * | 2017-07-07 | 2017-10-24 | 北京小米移动软件有限公司 | Object detection method and device |
CN107451607A (en) * | 2017-07-13 | 2017-12-08 | 山东中磁视讯股份有限公司 | A kind of personal identification method of the typical character based on deep learning |
WO2019037498A1 (en) * | 2017-08-25 | 2019-02-28 | 腾讯科技(深圳)有限公司 | Active tracking method, device and system |
CN109670573A (en) * | 2017-10-13 | 2019-04-23 | 斯特拉德视觉公司 | Utilize the learning method and learning device of the parameter of loss increase adjustment CNN and the test method and test device that use them |
CN109670523A (en) * | 2017-10-13 | 2019-04-23 | 斯特拉德视觉公司 | The method of bounding box corresponding with the object in image is obtained with the convolutional neural networks for including tracking network and using its computing device |
CN109670523B (en) * | 2017-10-13 | 2024-01-09 | 斯特拉德视觉公司 | Method for acquiring bounding box corresponding to object in image by convolution neural network including tracking network and computing device using same |
CN107808138B (en) * | 2017-10-31 | 2021-03-30 | 电子科技大学 | Communication signal identification method based on FasterR-CNN |
CN107808138A (en) * | 2017-10-31 | 2018-03-16 | 电子科技大学 | A kind of communication signal recognition method based on FasterR CNN |
CN107808139B (en) * | 2017-11-01 | 2021-08-06 | 电子科技大学 | Real-time monitoring threat analysis method and system based on deep learning |
CN107808139A (en) * | 2017-11-01 | 2018-03-16 | 电子科技大学 | A kind of real-time monitoring threat analysis method and system based on deep learning |
CN107944369A (en) * | 2017-11-17 | 2018-04-20 | 大连大学 | A kind of pedestrian detection method based on tandem zones generation network and enhancing random forest |
CN108564097B (en) * | 2017-12-05 | 2020-09-22 | 华南理工大学 | Multi-scale target detection method based on deep convolutional neural network |
CN108564097A (en) * | 2017-12-05 | 2018-09-21 | 华南理工大学 | A kind of multiscale target detection method based on depth convolutional neural networks |
CN108052946A (en) * | 2017-12-11 | 2018-05-18 | 国网上海市电力公司 | A kind of high pressure cabinet switch automatic identifying method based on convolutional neural networks |
CN108121986A (en) * | 2017-12-29 | 2018-06-05 | 深圳云天励飞技术有限公司 | Object detection method and device, computer installation and computer readable storage medium |
CN108182413A (en) * | 2017-12-29 | 2018-06-19 | 中国矿业大学(北京) | A kind of mine movable object detecting and tracking recognition methods |
CN108133197A (en) * | 2018-01-05 | 2018-06-08 | 百度在线网络技术(北京)有限公司 | For generating the method and apparatus of information |
CN108197575A (en) * | 2018-01-05 | 2018-06-22 | 中国电子科技集团公司电子科学研究院 | A kind of abnormal behaviour recognition methods detected based on target detection and bone point and device |
CN110084257A (en) * | 2018-01-26 | 2019-08-02 | 北京京东尚科信息技术有限公司 | Method and apparatus for detecting target |
GB2572472A (en) * | 2018-02-01 | 2019-10-02 | Ford Global Tech Llc | Validating gesture recognition capabilities of automated systems |
US10726248B2 (en) | 2018-02-01 | 2020-07-28 | Ford Global Technologies, Llc | Validating gesture recognition capabilities of automated systems |
GB2572472B (en) * | 2018-02-01 | 2021-02-17 | Ford Global Tech Llc | Validating gesture recognition capabilities of automated systems |
CN108257139B (en) * | 2018-02-26 | 2020-09-08 | 中国科学院大学 | RGB-D three-dimensional object detection method based on deep learning |
CN108257139A (en) * | 2018-02-26 | 2018-07-06 | 中国科学院大学 | RGB-D three-dimension object detection methods based on deep learning |
WO2019175686A1 (en) | 2018-03-12 | 2019-09-19 | Ratti Jayant | On-demand artificial intelligence and roadway stewardship system |
CN108520273A (en) * | 2018-03-26 | 2018-09-11 | 天津大学 | A kind of quick detection recognition method of dense small item based on target detection |
CN108446662A (en) * | 2018-04-02 | 2018-08-24 | 电子科技大学 | A kind of pedestrian detection method based on semantic segmentation information |
CN110399868A (en) * | 2018-04-19 | 2019-11-01 | 北京大学深圳研究生院 | A kind of seashore wetland birds detection method |
CN110399868B (en) * | 2018-04-19 | 2022-09-09 | 北京大学深圳研究生院 | Coastal wetland bird detection method |
CN110414299B (en) * | 2018-04-28 | 2024-02-06 | 中山大学 | Monkey face affinity analysis method based on computer vision |
CN110414299A (en) * | 2018-04-28 | 2019-11-05 | 中山大学 | A kind of monkey face Genetic relationship method based on computer vision |
CN108830152B (en) * | 2018-05-07 | 2020-12-29 | 北京红云智胜科技有限公司 | Pedestrian detection method and system combining deep learning network and artificial features |
CN108830152A (en) * | 2018-05-07 | 2018-11-16 | 北京红云智胜科技有限公司 | The pedestrian detection method and system that deep learning network and manual features are combined |
CN108830280A (en) * | 2018-05-14 | 2018-11-16 | 华南理工大学 | A kind of small target detecting method based on region nomination |
CN108830280B (en) * | 2018-05-14 | 2021-10-26 | 华南理工大学 | Small target detection method based on regional nomination |
CN108921056A (en) * | 2018-06-18 | 2018-11-30 | 上海大学 | Pedestrian detection method based on neural network towards automobile assistant driving |
CN109086678B (en) * | 2018-07-09 | 2022-02-25 | 天津大学 | Pedestrian detection method for extracting image multilevel features based on deep supervised learning |
CN109086678A (en) * | 2018-07-09 | 2018-12-25 | 天津大学 | A kind of pedestrian detection method extracting image multi-stage characteristics based on depth supervised learning |
CN109242516A (en) * | 2018-09-06 | 2019-01-18 | 北京京东尚科信息技术有限公司 | The single method and apparatus of processing service |
CN109636846B (en) * | 2018-12-06 | 2022-10-11 | 重庆邮电大学 | Target positioning method based on cyclic attention convolution neural network |
CN109636846A (en) * | 2018-12-06 | 2019-04-16 | 重庆邮电大学 | Object localization method based on circulation attention convolutional neural networks |
CN111209810A (en) * | 2018-12-26 | 2020-05-29 | 浙江大学 | Bounding box segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time in visible light and infrared images |
CN111209810B (en) * | 2018-12-26 | 2023-05-26 | 浙江大学 | Boundary frame segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time through visible light and infrared images |
CN109711332A (en) * | 2018-12-26 | 2019-05-03 | 浙江捷尚视觉科技股份有限公司 | A kind of face tracking method and application based on regression algorithm |
CN109903310A (en) * | 2019-01-23 | 2019-06-18 | 平安科技(深圳)有限公司 | Method for tracking target, device, computer installation and computer storage medium |
WO2020151167A1 (en) * | 2019-01-23 | 2020-07-30 | 平安科技(深圳)有限公司 | Target tracking method and device, computer device and readable storage medium |
WO2020164270A1 (en) * | 2019-02-15 | 2020-08-20 | 平安科技(深圳)有限公司 | Deep-learning-based pedestrian detection method, system and apparatus, and storage medium |
CN109932730A (en) * | 2019-02-22 | 2019-06-25 | 东华大学 | Laser radar object detection method based on multiple dimensioned monopole three dimensional detection network |
CN109932730B (en) * | 2019-02-22 | 2023-06-23 | 东华大学 | Laser radar target detection method based on multi-scale monopole three-dimensional detection network |
CN110097050A (en) * | 2019-04-03 | 2019-08-06 | 平安科技(深圳)有限公司 | Pedestrian detection method, device, computer equipment and storage medium |
CN110097050B (en) * | 2019-04-03 | 2024-03-08 | 平安科技(深圳)有限公司 | Pedestrian detection method, device, computer equipment and storage medium |
CN110147738A (en) * | 2019-04-29 | 2019-08-20 | 中国人民解放军海军特色医学中心 | A kind of driver fatigue monitoring and pre-alarming method and system |
CN110135480A (en) * | 2019-04-30 | 2019-08-16 | 南开大学 | A kind of network data learning method for eliminating deviation based on unsupervised object detection |
CN110298238A (en) * | 2019-05-20 | 2019-10-01 | 平安科技(深圳)有限公司 | Pedestrian's visual tracking method, model training method, device, equipment and storage medium |
CN110298238B (en) * | 2019-05-20 | 2023-06-30 | 平安科技(深圳)有限公司 | Pedestrian vision tracking method, model training method, device, equipment and storage medium |
CN110322475A (en) * | 2019-05-23 | 2019-10-11 | 北京中科晶上科技股份有限公司 | A kind of sparse detection method of video |
CN110458864A (en) * | 2019-07-02 | 2019-11-15 | 南京邮电大学 | Based on the method for tracking target and target tracker for integrating semantic knowledge and example aspects |
CN110490058A (en) * | 2019-07-09 | 2019-11-22 | 北京迈格威科技有限公司 | Training method, device, system and the computer-readable medium of pedestrian detection model |
CN110472542A (en) * | 2019-08-05 | 2019-11-19 | 深圳北斗通信科技有限公司 | A kind of infrared image pedestrian detection method and detection system based on deep learning |
CN110633641A (en) * | 2019-08-15 | 2019-12-31 | 河北工业大学 | Intelligent security pedestrian detection method, system and device and storage medium |
GB2586996B (en) * | 2019-09-11 | 2022-03-09 | Canon Kk | A method, apparatus and computer program for acquiring a training set of images |
GB2586996A (en) * | 2019-09-11 | 2021-03-17 | Canon Kk | A method, apparatus and computer program for acquiring a training set of images |
CN111340760A (en) * | 2020-02-17 | 2020-06-26 | 中国人民解放军国防科技大学 | Knee joint positioning method based on multitask two-stage convolutional neural network |
CN111340760B (en) * | 2020-02-17 | 2022-11-08 | 中国人民解放军国防科技大学 | Knee joint positioning method based on multitask two-stage convolution neural network |
CN111626276A (en) * | 2020-07-30 | 2020-09-04 | 之江实验室 | Two-stage neural network-based work shoe wearing detection method and device |
CN112861631A (en) * | 2020-12-31 | 2021-05-28 | 南京理工大学 | Wagon balance human body intrusion detection method based on Mask Rcnn and SSD |
CN112949510A (en) * | 2021-03-08 | 2021-06-11 | 香港理工大学深圳研究院 | Human detection method based on fast R-CNN thermal infrared image |
CN112926500B (en) * | 2021-03-22 | 2022-09-20 | 重庆邮电大学 | Pedestrian detection method combining head and overall information |
CN112926500A (en) * | 2021-03-22 | 2021-06-08 | 重庆邮电大学 | Pedestrian detection method combining head and overall information |
CN113947586A (en) * | 2021-10-21 | 2022-01-18 | 江苏和瑞智能科技股份有限公司 | Method for accurately anchoring pharmaceutical packages from acquired images |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106845430A (en) | Pedestrian detection and tracking based on acceleration region convolutional neural networks | |
CN109948425B (en) | Pedestrian searching method and device for structure-aware self-attention and online instance aggregation matching | |
Zhang et al. | C2FDA: Coarse-to-fine domain adaptation for traffic object detection | |
CN110163187B (en) | F-RCNN-based remote traffic sign detection and identification method | |
CN108171112A (en) | Vehicle identification and tracking based on convolutional neural networks | |
CN111507378A (en) | Method and apparatus for training image processing model | |
Joshi et al. | Comparing random forest approaches to segmenting and classifying gestures | |
CN106778835A (en) | The airport target by using remote sensing image recognition methods of fusion scene information and depth characteristic | |
KR102462934B1 (en) | Video analysis system for digital twin technology | |
CN104200237A (en) | High speed automatic multi-target tracking method based on coring relevant filtering | |
CN110097044A (en) | Stage car plate detection recognition methods based on deep learning | |
CN109658442B (en) | Multi-target tracking method, device, equipment and computer readable storage medium | |
CN112949647B (en) | Three-dimensional scene description method and device, electronic equipment and storage medium | |
CN107169485A (en) | A kind of method for identifying mathematical formula and device | |
CN114821014B (en) | Multi-mode and countermeasure learning-based multi-task target detection and identification method and device | |
CN113807399A (en) | Neural network training method, neural network detection method and neural network detection device | |
CN110334584B (en) | Gesture recognition method based on regional full convolution network | |
Pei et al. | Localized traffic sign detection with multi-scale deconvolution networks | |
CN112634329A (en) | Scene target activity prediction method and device based on space-time and-or graph | |
CN114332473A (en) | Object detection method, object detection device, computer equipment, storage medium and program product | |
CN115690549A (en) | Target detection method for realizing multi-dimensional feature fusion based on parallel interaction architecture model | |
CN115376101A (en) | Incremental learning method and system for automatic driving environment perception | |
Zhao et al. | Cbph-net: A small object detector for behavior recognition in classroom scenarios | |
Kaur et al. | A systematic review of object detection from images using deep learning | |
CN117456480B (en) | Light vehicle re-identification method based on multi-source information fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 2017-06-13