CN110472467A - Detection method for key objects at a transportation hub based on YOLO v3 - Google Patents
Detection method for key objects at a transportation hub based on YOLO v3
- Publication number
- CN110472467A (application CN201910276350.3A)
- Authority
- CN
- China
- Prior art keywords
- yolo
- bounding box
- prediction block
- transport hub
- detection method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
A method for detecting key objects at a transportation hub based on YOLO v3. The invention designs its algorithm around direct regression, enabling multi-scale detection and multi-label classification. To address shortcomings of current object-detection techniques, the invention uses the Darknet-53 network, which builds on the residual design of ResNet, as the feature extractor, improving both the detection accuracy and the speed of the YOLO architecture while mitigating its weakness at detecting small objects. Darknet-53 balances network complexity against detection accuracy and requires less computation than VGG-16, a feature-extraction network commonly used in object detection. The method applies recent advances in artificial intelligence to the detection of primary targets at transportation hubs, performs well in both detection accuracy and detection speed, and has the potential to be extended to other fields.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a method for detecting key objects at a transportation hub based on YOLO v3.
Background technique
With the rapid development of society, new technologies are continually emerging and driving progress in artificial intelligence. In the field of image processing in particular, object-recognition techniques are advancing quickly. Image-based object detection is used widely across industries: autonomous driving, unmanned supermarkets, remote-sensing image interpretation, biomedical detection, and military and criminal-investigation applications all rely on image-recognition technology. In the traffic domain especially, object-recognition techniques are gradually replacing earlier methods for detecting and identifying pedestrians, motor vehicles, and non-motorized vehicles.
The mainstream object-detection algorithms currently include Faster R-CNN, YOLO, and SSD. R-CNN adopts the proposal-plus-classifier approach; although the proposal-extraction step has been moved into the CNN itself, its computational efficiency remains limited. YOLO performs well in both recognition accuracy and speed, though each version from v1 to v3 has its own strengths and weaknesses. YOLO detects end to end and is trained with the Darknet framework. YOLO v1 takes the whole image as network input and directly regresses, at the output layer, the position of each bounding box and the class it belongs to. However, because YOLO regresses directly, each video frame is processed as an independent data source that is recognized in isolation, so the per-frame results often lack continuity and consistency.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a method for detecting key objects at a transportation hub based on YOLO v3. The invention uses artificial-intelligence techniques to detect the primary targets in transportation-hub video, providing a reliable data basis and technical support for applications such as optimizing the traffic environment for people and vehicles, traffic control, and congestion prevention and relief. The invention specifically adopts the following technical scheme.
First, to achieve the above object, a method for detecting key objects at a transportation hub based on YOLO v3 is proposed, whose steps include:

Step 1: obtain each frame of the transportation-hub surveillance video in chronological order; apply defogging, sharpening, and enhancement to each frame; update the data set according to previous detection results, and label each kind of object in the data set as object, wherein the data set includes data for primary objects at transportation hubs drawn from existing image data sets; the existing image data sets include, but are not limited to, the Microsoft COCO data set and the PASCAL VOC data set.

Step 2: adjust the size of each frame processed in Step 1 to p × p, where p is an integral multiple of 32.

Step 3: divide each image obtained in Step 2 into s × s grid cells; assign each cell B bounding boxes to predict, and perform supervised training through the YOLO v3 convolutional network to obtain, for each predicted bounding box, its position, its object-class information c, and its confidence value. The confidence value is calculated by the following formula: confidence = Pr(object) × IOU_pred^truth. The position coordinates of the predicted bounding box are denoted (x, y, w, h), where x and y are the coordinates of the box center and w and h are the width and height of the box. Pr(object) is the object label: it is 1 when an object falls into the grid cell and 0 otherwise. IOU_pred^truth is the intersection-over-union between the predicted bounding box and the ground truth, the ground truth being the annotated true box of the object.

Step 4: normalize the position coordinates (x, y, w, h) of each predicted bounding box obtained in Step 3, obtaining normalized position coordinates (X, Y, W, H).

Step 5: apply NMS (non-maximum suppression) to the predicted bounding boxes whose confidence values meet the threshold in each frame.

Step 6: according to the NMS result, mark in each frame the object-class information c of each retained predicted bounding box and the range given by its normalized position coordinates (X, Y, W, H).
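The six steps above can be sketched end to end as follows. This is an illustrative skeleton, not the patent's implementation: `run_yolo_v3` is a stand-in stub for the trained network, and all names and values are hypothetical.

```python
# Minimal sketch of the six-step pipeline (Steps 1-6); the detector is stubbed.
import numpy as np

def resize_to_multiple_of_32(frame, p=416):
    assert p % 32 == 0            # Step 2: side length must be a multiple of 32
    # real code would interpolate; here we just crop/pad a dummy array
    out = np.zeros((p, p, 3), dtype=frame.dtype)
    h, w = min(p, frame.shape[0]), min(p, frame.shape[1])
    out[:h, :w] = frame[:h, :w]
    return out

def run_yolo_v3(image):
    # placeholder: a trained network would return (x, y, w, h, conf, class_id) per box
    return [(100.0, 120.0, 40.0, 80.0, 0.9, 0)]

def detect_frame(frame, conf_thresh=0.5, p=416):
    img = resize_to_multiple_of_32(frame, p)       # Steps 1-2 (enhancement elided)
    boxes = run_yolo_v3(img)                       # Step 3: predict boxes
    kept = []
    for x, y, w, h, conf, cls in boxes:
        if conf < conf_thresh:                     # Step 5: threshold (NMS elided)
            continue
        X, Y, W, H = x / p, y / p, w / p, h / p    # Step 4: normalize coordinates
        kept.append((cls, conf, (X, Y, W, H)))     # Step 6: labelled result
    return kept

frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(detect_frame(frame))
```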
Optionally, in the above detection method for key objects at a transportation hub based on YOLO v3, in Step 1, the enhancement of each frame is performed specifically using a GAN (Generative Adversarial Network) for image enhancement.
Optionally, in the above detection method for key objects at a transportation hub based on YOLO v3, in Step 3, the YOLO v3 convolutional network also has corresponding prior boxes (anchors), which are computed from the data set obtained in Step 1 by k-means (K-means clustering) or by IOU (intersection-over-union) calculation.
Optionally, in the above detection method for key objects at a transportation hub based on YOLO v3, the prior box anchor is determined by the following steps: Step A1: for each prior box anchor, predict its initial position coordinates (tx, ty, pw, ph) on each grid cell; Step A2: compute the offset (cx, cy) of the predicted bounding box relative to the top-left corner of the image; Step A3: compute the prior-box position coordinates (bx, by, bw, bh) corresponding to the prior box anchor, where bx = σ(tx) + cx, by = σ(ty) + cy, bw = pw·e^tw, bh = ph·e^th, and σ(·) denotes the logistic function, which normalizes coordinates to between 0 and 1.
Optionally, in the above detection method for key objects at a transportation hub based on YOLO v3, in Step 3, the training parameters of the YOLO v3 convolutional network are set as follows: decay = 0.005, learning_rate = 0.001, steps = 400000.
Optionally, in the above detection method for key objects at a transportation hub based on YOLO v3, the supervised training of the YOLO v3 convolutional network is carried out on a GPU (graphics processing unit).
Optionally, in the above detection method for key objects at a transportation hub based on YOLO v3, in Step 3, during the training of the YOLO v3 convolutional network, the Darknet-53 network, improved on the basis of residual neural networks, is used as the feature extractor.
Optionally, the above detection method for key objects at a transportation hub based on YOLO v3 further includes the following step: after Steps 1 through 5 have been applied to each frame, each object marked in each frame is additionally tracked and counted.
Optionally, in the above detection method for key objects at a transportation hub based on YOLO v3, the normalization in Step 4 proceeds as follows: Step 401: obtain the size XX × YY of each frame, and the position coordinates (x, y, w, h) of the predicted bounding box; Step 402: compute X = x/XX, Y = y/YY, W = w/XX, H = h/YY; Step 403: obtain the normalized position coordinates (X, Y, W, H) corresponding to the position coordinates (x, y, w, h) of the predicted bounding box.
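Steps 401-403 amount to dividing the box coordinates by the frame dimensions. A minimal sketch (function name illustrative):

```python
def normalize_box(x, y, w, h, XX, YY):
    """Steps 401-403: divide box coordinates by the frame size XX x YY.

    Note the patent divides both x and w by the width XX, and y and h
    by the height YY, so all four outputs lie in [0, 1].
    """
    return x / XX, y / YY, w / XX, h / YY

# example: a 1920x1080 frame with a centred 192x108 box
print(normalize_box(960.0, 540.0, 192.0, 108.0, 1920, 1080))  # (0.5, 0.5, 0.1, 0.1)
```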
Beneficial effects
The invention designs its algorithm around direct regression, enabling multi-scale detection and multi-label classification. In the detection process, the invention uses the Darknet-53 network, improved on the basis of residual neural networks with reference to the SSD and ResNet architectures, as the feature extractor, mitigating the weakness of the YOLO architecture at detecting small objects. Darknet-53 balances network complexity against detection accuracy and requires less computation than the commonly used VGG-16 feature-extraction network. The method applies recent advances in artificial intelligence to the detection of primary targets at transportation hubs, performs well in detection accuracy and detection speed, and has the potential to be extended to other fields.
Other features and advantages of the present invention will be set forth in the following description, in part becoming apparent from the description or being understood through practice of the invention.
Brief description of the drawings
The accompanying drawings provide further understanding of the invention and constitute part of the specification; together with the embodiments of the invention, they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a schematic diagram of the detection flow for key objects at a transportation hub based on YOLO v3 according to the invention;
Fig. 2 is a schematic diagram of the IOU (intersection over union) computed between a predicted bounding box and the ground truth in the invention;
Fig. 3 is a block diagram of the YOLO v3 convolutional network used in the invention;
Fig. 4 is a schematic diagram of computing the prior-box position coordinates corresponding to a prior box anchor in the invention;
Fig. 5 is an overall flow chart of the detection method for key objects at a transportation hub based on YOLO v3 provided by the invention;
Fig. 6 is a schematic diagram of the pedestrian-recognition results when the invention is applied at a transportation hub;
Fig. 7 is a schematic diagram of pedestrians and other objects at a transportation hub detected by the invention.
Detailed description of the embodiments
To make the purpose and technical solution of the embodiments of the invention clearer, the technical solution is described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are a part, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the described embodiments without creative effort fall within the protection scope of the invention.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the meaning commonly understood by those of ordinary skill in the field of the invention. It should also be understood that terms defined in ordinary dictionaries are to be understood as having meanings consistent with their meaning in the context of the prior art and, unless defined here, are not to be interpreted in an idealized or overly formal sense.
Fig. 1 shows a detection method for key objects at a transportation hub based on YOLO v3 according to the invention. Each video frame is treated as an independent image: the image is first divided into a grid, then the predicted boxes, confidences, and class probabilities are computed, and finally the detection result is displayed with rectangular marks. Specifically, with reference to Fig. 5, the steps are as follows:
1. Read each frame of the video, treating every frame as an independent image, and apply defogging, sharpening, and enhancement to each frame so that the later training network obtains better image features from higher-quality pictures, increasing the accuracy of the result. A GAN may be used as the image-enhancement network here, but the method is not limited to it.
2. Using existing data sets (such as COCO and VOC), the primary transportation-hub objects targeted by this patent are incorporated into our own data set; the added data are relabelled and the original data set is extended so that the training result is more accurate. YOLO first divides an image into S × S grid cells; if the center of an object falls in a cell, that cell is responsible for predicting the object. For the S × S grid, each cell predicts B bounding boxes, and each bounding box is responsible for predicting two things: its own position and a confidence value. The picture size must be adjusted at this point, for example to 320×320, 416×416, or 608×608; the size must be an integral multiple of 32.
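The grid-responsibility rule just described (the cell containing an object's center predicts it) can be sketched as follows; the function name and example values are illustrative, not from the patent:

```python
def responsible_cell(cx, cy, img_size, S):
    """Return the (row, col) of the S x S grid cell containing the object
    centre (cx, cy); that cell is responsible for predicting the object."""
    cell = img_size / S           # side length of one cell in pixels
    return int(cy // cell), int(cx // cell)

# 416x416 image, 13x13 grid (cell = 32 px): a centre at (208, 100)
# falls in row 100 // 32 = 3, column 208 // 32 = 6
print(responsible_cell(208, 100, 416, 13))  # (3, 6)
```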
3. After the data set is prepared, training is performed with the convolutional neural network; some of the training parameters are set as follows: decay = 0.005, learning_rate = 0.001, steps = 400000, and training is carried out on a GPU (graphics processing unit). The picture size in step 2 must be a multiple of 32 because YOLO v3 downsamples 5 times, each with stride 2, so the total stride of the network (stride being a layer's input size divided by its output size) is 2^5 = 32. The confidence value predicted for each of the B bounding boxes in step 2 combines two pieces of information: how likely the predicted box is to contain an object, and how accurate the box prediction is. It is computed as confidence = Pr(object) × IOU_pred^truth, where object refers to the object labels in the data set and grid cell refers to the range of a grid cell.
The IOU value is the ratio of intersection to union between the predicted bounding box and the actual ground truth. As shown in Fig. 2, IOU_pred^truth is the intersection-over-union between the predicted bounding box and the ground truth, the ground truth being the annotated true box of the object.
Each bounding box is responsible for predicting its own position and a confidence value. The position requires 4 parameters (x, y, w, h), where x and y are the center coordinates of the predicted box and w and h are its width and height, so each bounding box predicts 5 values in total: (x, y, w, h) and confidence. Each grid cell additionally predicts class information, denoted as C classes. With the image divided into S × S cells, the output data size is S × S × (5B + C). Note that the class information belongs to each grid cell, while the confidence belongs to each bounding box; of the (5B + C) dimensions, (5B − B) = 4B dimensions are the regressed box coordinates, B dimensions are the box confidences, and C dimensions are the classes. To simplify computation, the coordinates x, y and w, h are normalized using the cell coordinates and the image width and height respectively, constraining their values to between 0 and 1. In the implementation, the most important question is how to design the loss function so that these components are well balanced. The loss function is designed with sum-squared error, and the final loss function is as follows:
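The loss-function image did not survive text extraction. As a reference reconstruction, the standard YOLO sum-squared-error loss, which matches the four parts described in the surrounding text (coordinate prediction, confidence with an object, confidence without an object, and class prediction), is:

```latex
\begin{aligned}
\mathcal{L} ={}& \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}}
 \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
&+ \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}}
 \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
&+ \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left(C_i - \hat{C}_i\right)^2
 + \lambda_{\text{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{noobj}} \left(C_i - \hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\text{obj}} \sum_{c \in \text{classes}} \left(p_i(c) - \hat{p}_i(c)\right)^2
\end{aligned}
```

Here \(\mathbb{1}_{ij}^{\text{obj}}\) is 1 when box j of cell i is responsible for an object (0 otherwise), \(C_i\) is the confidence, \(p_i(c)\) is the class probability, and \(\lambda_{\text{coord}}, \lambda_{\text{noobj}}\) weight the coordinate and no-object terms. This is the published YOLO formulation; the patent's exact formula may differ in detail.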
This loss function is broadly divided into four parts: coordinate prediction, the confidence prediction for boxes containing an object, the confidence prediction for boxes containing no object, and class prediction. The network is trained with supervision using this loss function.
4. The YOLO v3 algorithm can use a new network structure: the Darknet-53 network, improved on the basis of residual neural networks with reference to the SSD and ResNet designs, serves as the feature extractor, mitigating the weakness of the YOLO family at detecting small objects. Darknet-53 balances network complexity and detection accuracy, and requires less computation than the commonly used VGG-16 feature-extraction network. The performance of Darknet-53 compared with Darknet-19, ResNet-101, and ResNet-152 is shown in Table 1:
Table 1. Performance comparison of Darknet-53 with Darknet-19, ResNet-101, and ResNet-152
As can be seen from Table 1, Darknet-53 achieves Top-1 and Top-5 accuracies of 77.2% and 93.8% respectively, higher than Darknet-19; its floating-point throughput is 1457 BFLOP/s, higher than Darknet-19, ResNet-101, and ResNet-152; and it can process 78 frames per second, more than ResNet-101 and ResNet-152, enabling real-time detection. This is why YOLO v3 has become one of the classic object-detection algorithms to date: it performs well on both small and large objects thanks to its multi-scale convolutional structure, which generally predicts at three scales (8 × 8, 16 × 16, and 32 × 32). The final prediction output has dimensions S × S × [3 × (B × 5 + C)]; the structure is shown in Fig. 3.
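The patent writes the output dimension as S × S × [3 × (B × 5 + C)]. As a hedged sketch of the standard YOLO v3 layout — assuming B anchor boxes per cell, each predicting 4 coordinates, 1 confidence, and C class scores, which may group the terms differently from the patent's expression — the per-scale tensor shape can be computed as:

```python
def yolo_v3_output_shape(S, B=3, C=80):
    """Per-scale output tensor S x S x [B x (5 + C)]: each of the B anchors
    predicts 4 coordinates + 1 confidence + C class scores."""
    return (S, S, B * (5 + C))

# the three detection scales for a 416x416 input (strides 32, 16, 8)
for S in (13, 26, 52):
    print(yolo_v3_output_shape(S))  # (13, 13, 255), (26, 26, 255), (52, 52, 255)
```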
5. To detect with the YOLO v3 algorithm, anchors (prior boxes) must be obtained; specifically, new anchors can be derived on the expanded data set using methods such as k-means or IOU-based calculation, though the method is not limited to these two. The anchor mechanism sets a number of reference box shapes and sizes for each grid cell, so that detection only needs to refine the reference boxes instead of regressing positions over the whole image.
When using the anchor mechanism, the width and height dimensions of the reference boxes must first be determined. Although network training can also adjust the boxes' width and height and eventually produce accurate boxes, choosing representative reference boxes from the start makes it easier for the network to detect accurate positions. For each bounding box, the convolutional neural network predicts 4 values on each cell, namely the center coordinates and the target's width and height, denoted tx, ty, tw, th. If the target center's cell is offset by (cx, cy) from the top-left corner of the image, and the anchor box has width and height pw, ph, then the corrected bounding box is as shown in Fig. 4, where bx = σ(tx) + cx, by = σ(ty) + cy, bw = pw·e^tw, bh = ph·e^th, and σ(·) is the logistic function, normalizing coordinates to between 0 and 1.
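The decoding in Fig. 4 can be sketched as follows; the function name and the example prior sizes are illustrative, not from the patent:

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw network outputs into a box (Fig. 4): the sigmoid keeps the
    centre offset inside its cell; the prior (pw, ph) is scaled exponentially."""
    sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
    bx = sigmoid(tx) + cx          # bx = sigma(tx) + cx
    by = sigmoid(ty) + cy          # by = sigma(ty) + cy
    bw = pw * math.exp(tw)         # bw = pw * e^tw
    bh = ph * math.exp(th)         # bh = ph * e^th
    return bx, by, bw, bh

# zero outputs land the centre in the middle of cell (6, 3), prior unchanged
print(decode_box(0.0, 0.0, 0.0, 0.0, cx=6, cy=3, pw=2.0, ph=1.5))  # (6.5, 3.5, 2.0, 1.5)
```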
6. Each extracted frame is recognized with the YOLO v3 algorithm, and the class and position of each object are marked. The specific procedure is as follows:
Filtering is done with NMS (non-maximum suppression). After the convolutional network is trained, at test time the class information predicted by each grid cell is multiplied by the confidence predicted by each bounding box to obtain the class-specific confidence score of each bounding box:

Pr(class_i | object) × Pr(object) × IOU_pred^truth = Pr(class_i) × IOU_pred^truth

The first term on the left of the equation is the class information predicted by each grid cell; the second and third terms together are the confidence predicted by each bounding box. After the class-specific confidence score of each box is obtained, a threshold is set, boxes with low scores are filtered out, and NMS is applied to the remaining boxes to obtain the final detection result.
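The NMS filtering described above can be sketched as a greedy procedure; this is a generic implementation, not the patent's exact code:

```python
def nms(boxes, iou_thresh=0.5):
    """Greedy non-maximum suppression. Each box is (x1, y1, x2, y2, score)."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    while boxes:
        best = boxes.pop(0)            # highest remaining score survives
        kept.append(best)
        # discard every remaining box that overlaps the survivor too much
        boxes = [b for b in boxes if iou(best, b) < iou_thresh]
    return kept

detections = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.7)]
print(nms(detections))  # keeps the 0.9 box and the distant 0.7 box; 0.8 is suppressed
```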
Further, an intelligent video-surveillance function can be added to the invention: in surveillance video of a transportation hub (such as a railway station or an intersection exit), YOLO v3 detects pedestrians and vehicles while a tracking technique follows them, supporting comprehensive service functions such as counting people and vehicles. The recognition results are shown in Fig. 6 and Fig. 7.
By making ingenious use of the multi-scale detection of the YOLO v3 algorithm, the invention achieves very high detection accuracy on small targets; the anchor-box method increases recall without changing mAP, and the new network structure reduces computation by 33%. Its speed exceeds that of other detection systems (Faster R-CNN, ResNet, SSD), and it improves recall and precision, enhancing localization accuracy while maintaining classification accuracy. Deepening the network and combining multiple models improve training accuracy; data augmentation of the pictures makes the extracted features more salient and the picture quality higher; and the tracking technique follows recognized targets such as pedestrians, supporting functions such as people counting.
The above are only embodiments of the present invention; although the description is specific and detailed, it is not to be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention.
Claims (9)
1. A detection method for key objects at a transportation hub based on YOLO v3, characterized by comprising:
a first step of obtaining each frame of the surveillance video of the transportation hub in chronological order;
applying defogging, sharpening, and enhancement to each frame respectively;
updating the data set according to previous detection results, and labelling each object in the data set as object, wherein the data set includes data for primary objects at transportation hubs drawn from existing image data sets; wherein the existing image data sets include, but are not limited to, the public COCO data set and the PASCAL VOC data set;
a second step of adjusting the size of each frame processed in the first step to p × p, where p is an integral multiple of 32;
a third step of dividing each image obtained in the second step into s × s grid cells, assigning each cell B predicted bounding boxes, and performing supervised training through the YOLO v3 convolutional network to obtain the position, the object-class information c, and the confidence value of each predicted bounding box; wherein the confidence value is calculated by the following formula: confidence = Pr(object) × IOU_pred^truth; the position coordinates of the predicted bounding box are denoted (x, y, w, h), where x and y are the coordinates of the box center and w and h are the width and height of the box; Pr(object) is the object label, equal to 1 when an object falls into the grid cell and 0 otherwise; IOU_pred^truth is the intersection-over-union between the predicted bounding box and the ground truth, the ground truth being the annotated true box of the object;
a fourth step of normalizing the position coordinates (x, y, w, h) of each predicted bounding box obtained in the third step, obtaining normalized position coordinates (X, Y, W, H);
a fifth step of applying NMS to the predicted bounding boxes whose confidence values meet the threshold in each frame;
a sixth step of marking, in each frame according to the NMS result, the object-class information c of each corresponding predicted bounding box and the range given by its normalized position coordinates (X, Y, W, H).
2. The detection method for key objects at a transportation hub based on YOLO v3 according to claim 1, characterized in that in the first step, the enhancement of each frame is performed specifically using a GAN network for image enhancement.
3. The detection method for key objects at a transportation hub based on YOLO v3 according to claim 1 or 2, characterized in that in the third step, the YOLO v3 convolutional network also has corresponding prior boxes (anchors), which are obtained by k-means or IOU calculation on the data set obtained in the first step.
4. The detection method for key objects at a transportation hub based on YOLO v3 according to claim 3, characterized in that the prior box anchor is determined by the following steps:
step A1: for each prior box anchor, predicting its initial position coordinates (tx, ty, pw, ph) on each grid cell;
step A2: computing the offset (cx, cy) of the predicted bounding box relative to the top-left corner of the image;
step A3: computing the prior-box position coordinates (bx, by, bw, bh) corresponding to the prior box anchor, wherein bx = σ(tx) + cx, by = σ(ty) + cy, bw = pw·e^tw, bh = ph·e^th, and σ(·) denotes the logistic function, normalizing coordinates to between 0 and 1.
5. The detection method for key objects at a transportation hub based on YOLO v3 according to any one of claims 1 to 4, characterized in that in the third step, the training parameters of the YOLO v3 convolutional network are set as follows: decay = 0.005, learning_rate = 0.001, steps = 400000.
6. The detection method for key objects at a transportation hub based on YOLO v3 according to any one of claims 1 to 4, characterized in that the supervised training of the YOLO v3 convolutional network is performed on a GPU.
7. The detection method for key objects at a transportation hub based on YOLO v3 according to any one of claims 1 to 4, characterized in that in the third step, during the training of the YOLO v3 convolutional network, the Darknet-53 network improved on the basis of residual neural networks is used as the feature extractor.
8. The detection method for key objects at a transportation hub based on YOLO v3 according to any one of claims 1 to 7, characterized by further comprising the following step:
after the first to fifth steps have been applied to each frame, each object marked in each frame is additionally tracked and counted.
9. The detection method for key objects at a transportation hub based on YOLO v3 according to any one of claims 1 to 3, characterized in that the normalization in the fourth step comprises:
step 401: obtaining the size XX × YY of each frame, and the position coordinates (x, y, w, h) of the predicted bounding box;
step 402: computing X = x/XX, Y = y/YY, W = w/XX, H = h/YY;
step 403: obtaining the normalized position coordinates (X, Y, W, H) corresponding to the position coordinates (x, y, w, h) of the predicted bounding box.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910276350.3A CN110472467A (en) | 2019-04-08 | 2019-04-08 | The detection method for transport hub critical object based on YOLO v3 |
PCT/CN2019/096014 WO2020206861A1 (en) | 2019-04-08 | 2019-07-15 | Yolo v3-based detection method for key object at transportation junction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910276350.3A CN110472467A (en) | 2019-04-08 | 2019-04-08 | The detection method for transport hub critical object based on YOLO v3 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110472467A true CN110472467A (en) | 2019-11-19 |
Family
ID=68507356
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910276350.3A Pending CN110472467A (en) | 2019-04-08 | 2019-04-08 | The detection method for transport hub critical object based on YOLO v3 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110472467A (en) |
WO (1) | WO2020206861A1 (en) |
Families Citing this family (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112215824A (en) * | 2020-10-16 | 2021-01-12 | 南通大学 | YOLO-v3-based cloth surface defect detection auxiliary device and method |
CN112395957A (en) * | 2020-10-28 | 2021-02-23 | 连云港杰瑞电子有限公司 | Online learning method for video target detection |
CN112257809B (en) * | 2020-11-02 | 2023-07-14 | 浙江大华技术股份有限公司 | Target detection network optimization method and device, storage medium and electronic equipment |
CN112287884B (en) * | 2020-11-19 | 2024-02-20 | 长江大学 | Examination abnormal behavior detection method and device and computer readable storage medium |
CN112633327B (en) * | 2020-12-02 | 2023-06-30 | 西安电子科技大学 | Staged metal surface defect detection method, system, medium, equipment and application |
CN112507929B (en) * | 2020-12-16 | 2022-05-13 | 武汉理工大学 | Vehicle body spot welding slag accurate detection method based on improved YOLOv3 network |
CN112561982A (en) * | 2020-12-22 | 2021-03-26 | 电子科技大学中山学院 | High-precision light spot center detection method based on VGG-16 |
CN112288043B (en) * | 2020-12-23 | 2021-08-03 | 飞础科智慧科技(上海)有限公司 | Kiln surface defect detection method, system and medium |
CN112633176B (en) * | 2020-12-24 | 2023-03-14 | 广西大学 | Rail transit obstacle detection method based on deep learning |
CN112734794B (en) * | 2021-01-14 | 2022-12-23 | 北京航空航天大学 | Moving target tracking and positioning method based on deep learning |
CN112750117B (en) * | 2021-01-15 | 2024-01-26 | 河南中抗医学检验有限公司 | Blood cell image detection and counting method based on convolutional neural network |
CN112699967B (en) * | 2021-01-18 | 2024-03-12 | 武汉大学 | Remote airport target detection method based on improved deep neural network |
CN112800934B (en) * | 2021-01-25 | 2023-08-08 | 西北大学 | Behavior recognition method and device for multi-class engineering vehicle |
CN112819780A (en) * | 2021-01-29 | 2021-05-18 | 菲特(天津)检测技术有限公司 | Method and system for detecting surface defects of silk ingots and silk ingot grading system |
CN113033604B (en) * | 2021-02-03 | 2022-11-15 | 淮阴工学院 | Vehicle detection method, system and storage medium based on SF-YOLOv4 network model |
CN112561912B (en) * | 2021-02-20 | 2021-06-01 | 四川大学 | Medical image lymph node detection method based on priori knowledge |
CN113076804B (en) * | 2021-03-09 | 2022-06-17 | 武汉理工大学 | Target detection method, device and system based on YOLOv4 improved algorithm |
CN113095159A (en) * | 2021-03-23 | 2021-07-09 | 陕西师范大学 | Urban road traffic condition analysis method based on CNN |
CN112926681B (en) * | 2021-03-29 | 2022-11-29 | 复旦大学 | Target detection method and device based on deep convolutional neural network |
CN113392852B (en) * | 2021-04-30 | 2024-02-13 | 浙江万里学院 | Vehicle detection method and system based on deep learning |
CN113537226A (en) * | 2021-05-18 | 2021-10-22 | 哈尔滨理工大学 | Smoke detection method based on deep learning |
CN113222982A (en) * | 2021-06-02 | 2021-08-06 | 上海应用技术大学 | Wafer surface defect detection method and system based on improved YOLO network |
CN113393438B (en) * | 2021-06-15 | 2022-09-16 | 哈尔滨理工大学 | Resin lens defect detection method based on convolutional neural network |
CN113469254B (en) * | 2021-07-02 | 2024-04-16 | 上海应用技术大学 | Target detection method and system based on target detection model |
CN113469057B (en) * | 2021-07-02 | 2023-04-28 | 中南大学 | Fire eye video self-adaptive detection method, device, equipment and medium |
CN113569737A (en) * | 2021-07-28 | 2021-10-29 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Notebook screen defect detection method and medium based on autonomous learning network model |
CN113688706B (en) * | 2021-08-16 | 2023-12-05 | 南京信息工程大学 | Vehicle detection method, device, equipment and storage medium |
CN113781458A (en) * | 2021-09-16 | 2021-12-10 | 厦门理工学院 | Artificial intelligence based identification method |
CN113792746B (en) * | 2021-09-18 | 2024-03-12 | 石家庄铁道大学 | Yolo V3-based ground penetrating radar image target detection method |
CN114022412A (en) * | 2021-10-12 | 2022-02-08 | 上海伯耶信息科技有限公司 | Cigarette accessory paper defect detection method based on deep learning visual inspection |
CN113947108A (en) * | 2021-10-15 | 2022-01-18 | 福州大学 | Player tracking detection method based on YOLO V5 |
CN113989708A (en) * | 2021-10-27 | 2022-01-28 | 福州大学 | Campus library epidemic prevention and control method based on YOLO v4 |
CN114022705B (en) * | 2021-10-29 | 2023-08-04 | 电子科技大学 | Self-adaptive target detection method based on scene complexity pre-classification |
CN114187242A (en) * | 2021-11-25 | 2022-03-15 | 北京航空航天大学 | Guidance optical fiber surface defect detection and positioning method based on deep learning |
CN114648685A (en) * | 2022-03-23 | 2022-06-21 | 成都臻识科技发展有限公司 | Method and system for converting anchor-free algorithm into anchor-based algorithm |
CN114818880B (en) * | 2022-04-07 | 2024-04-09 | 齐鲁工业大学 | YOLOv3-based method and system for automatically identifying key railway operation processes |
CN114898320B (en) * | 2022-05-30 | 2023-07-28 | 西南交通大学 | YOLO v 5-based train positioning method and system |
CN114723750B (en) * | 2022-06-07 | 2022-09-16 | 南昌大学 | Transmission line strain clamp defect detection method based on improved YOLOX algorithm |
CN116721403A (en) * | 2023-06-19 | 2023-09-08 | 山东高速集团有限公司 | Road traffic sign detection method |
CN117115856B (en) * | 2023-08-02 | 2024-04-05 | 珠海微度芯创科技有限责任公司 | Target detection method based on image fusion, human body security inspection equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108230278A (en) * | 2018-02-24 | 2018-06-29 | 中山大学 | An image raindrop removal method based on generative adversarial networks |
CN109697420A (en) * | 2018-12-17 | 2019-04-30 | 长安大学 | A moving target detection and tracking method for urban transportation |
CN109829400A (en) * | 2019-01-18 | 2019-05-31 | 青岛大学 | A fast vehicle detection method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10614326B2 (en) * | 2017-03-06 | 2020-04-07 | Honda Motor Co., Ltd. | System and method for vehicle control based on object and color detection |
CN109117794A (en) * | 2018-08-16 | 2019-01-01 | 广东工业大学 | A moving target behavior tracking method, apparatus, device and readable storage medium |
CN109272509B (en) * | 2018-09-06 | 2021-10-29 | 郑州云海信息技术有限公司 | Target detection method, device and equipment for continuous images and storage medium |
CN109325438B (en) * | 2018-09-18 | 2021-06-15 | 桂林电子科技大学 | Real-time identification method of live panoramic traffic sign |
2019
- 2019-04-08: CN application CN201910276350.3A filed; publication CN110472467A/en; status: active, Pending
- 2019-07-15: WO application PCT/CN2019/096014 filed; publication WO2020206861A1/en; status: active, Application Filing
Non-Patent Citations (4)
Title |
---|
JOSEPH REDMON et al.: "YOLOv3: An Incremental Improvement", arXiv.org * |
JOSEPH REDMON et al.: "You Only Look Once: Unified, Real-Time Object Detection", arXiv.org * |
大写的ZDQ: "This May Be the Most Detailed Explanation of Object Detection YOLO v1", https://blog.csdn.net/u010712012/article/details/85116365 * |
ZHANG Fukai et al.: "Fast Vehicle Detection Method Based on Improved YOLOv3", Computer Engineering and Applications * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110929670A (en) * | 2019-12-02 | 2020-03-27 | 合肥城市云数据中心股份有限公司 | Muck truck cleanliness video identification and analysis method based on yolo3 technology |
CN111024072A (en) * | 2019-12-27 | 2020-04-17 | 浙江大学 | Satellite map aided navigation positioning method based on deep learning |
CN111241959A (en) * | 2020-01-06 | 2020-06-05 | 重庆大学 | Method for detecting persons not wearing safety helmets in construction site video streams |
CN111582345A (en) * | 2020-04-29 | 2020-08-25 | 中国科学院重庆绿色智能技术研究院 | Target identification method for complex environment under small sample |
CN111738212A (en) * | 2020-07-20 | 2020-10-02 | 平安国际智慧城市科技股份有限公司 | Traffic signal lamp identification method, device, equipment and medium based on artificial intelligence |
CN111738212B (en) * | 2020-07-20 | 2020-11-20 | 平安国际智慧城市科技股份有限公司 | Traffic signal lamp identification method, device, equipment and medium based on artificial intelligence |
CN112257527A (en) * | 2020-10-10 | 2021-01-22 | 西南交通大学 | Mobile phone detection method based on multi-target fusion and space-time video sequence |
CN112257527B (en) * | 2020-10-10 | 2022-09-02 | 西南交通大学 | Mobile phone detection method based on multi-target fusion and space-time video sequence |
CN112329768A (en) * | 2020-10-23 | 2021-02-05 | 上善智城(苏州)信息科技有限公司 | Improved YOLO-based method for identifying fuel-discharging stop sign of gas station |
CN112507896A (en) * | 2020-12-14 | 2021-03-16 | 大连大学 | Method for detecting cherry fruits by adopting improved YOLO-V4 model |
CN112507896B (en) * | 2020-12-14 | 2023-11-07 | 大连大学 | Method for detecting cherry fruits by adopting improved YOLO-V4 model |
CN112784694A (en) * | 2020-12-31 | 2021-05-11 | 杭州电子科技大学 | EVP-YOLO-based indoor article detection method |
CN113077496A (en) * | 2021-04-16 | 2021-07-06 | 中国科学技术大学 | Real-time vehicle detection and tracking method and system based on lightweight YOLOv3 and medium |
CN113191227A (en) * | 2021-04-20 | 2021-07-30 | 上海东普信息科技有限公司 | Cabinet door state detection method, device, equipment and storage medium |
CN113326755A (en) * | 2021-05-21 | 2021-08-31 | 华南理工大学 | Method for controlling the illumination area of a lighting system by monitoring hand position |
Also Published As
Publication number | Publication date |
---|---|
WO2020206861A1 (en) | 2020-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110472467A (en) | The detection method for transport hub critical object based on YOLO v3 | |
WO2020173226A1 (en) | Spatial-temporal behavior detection method | |
CN112418117B (en) | Small target detection method based on unmanned aerial vehicle image | |
CN107145889B (en) | Target identification method based on double CNN network with RoI pooling | |
CN106780612A (en) | Object detection method and device in images | |
CN112836639A (en) | Pedestrian multi-target tracking video identification method based on improved YOLOv3 model | |
CN103116896A (en) | Visual saliency model based automatic detecting and tracking method | |
CN108182447A (en) | An adaptive particle filter target tracking method based on deep learning | |
CN106296743A (en) | An adaptive moving target tracking method and unmanned aerial vehicle tracking system | |
CN103440667A (en) | Automatic device for stably tracing moving targets under shielding states | |
CN109800756A (en) | A text detection and recognition method for dense text in Chinese historical documents | |
CN110334656A (en) | Multi-source remote sensing image water extraction method and device based on information source probability weighting | |
CN110458022A (en) | An autonomously learning object detection method based on domain adaptation | |
CN110532937A (en) | Method for accurately recognizing targets ahead of a train based on recognition and classification models | |
CN104680193A (en) | Online target classification method and system based on fast similarity network fusion algorithm | |
CN110245592A (en) | A method for improving pedestrian re-identification in surveillance scenes | |
CN107609509A (en) | An action recognition method based on motion salient region detection | |
CN104463909A (en) | Visual target tracking method based on credibility combination map model | |
CN104573701B (en) | An automatic detection method for corn tassels | |
CN115527133A (en) | High-resolution image background optimization method based on target density information | |
CN103996207A (en) | Object tracking method | |
CN113780145A (en) | Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium | |
CN113724293A (en) | Vision-based intelligent internet public transport scene target tracking method and system | |
CN113192108B (en) | Man-in-loop training method and related device for vision tracking model | |
CN113192106B (en) | Livestock tracking method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191119 |