CN111507196A - Vehicle type identification method based on machine vision and deep learning - Google Patents
- Publication number
- CN111507196A (application number CN202010204312.XA)
- Authority
- CN
- China
- Prior art keywords
- detection
- image
- cls
- result
- classifier
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a vehicle type identification method based on machine vision and deep learning. At present, the vehicle identification field mostly uses images collected by high-altitude cameras as data sets, and images collected by a mobile platform are rarely adopted; traditional image recognition technology cannot meet the requirements of mobile violation evidence collection. The method first collects image information of road automobiles through a vehicle-mounted mobile platform and performs primary automobile target detection with the yolov3 deep learning algorithm; it then judges, from the detection frame area and a prediction-value threshold, whether to send the frame into three classifiers for re-prediction. According to the detection results of the three classifiers and the target detection algorithm, wrongly detected frames are identified and deleted, and the system detection result is finally updated. The method is suitable for vehicle identification from a vehicle-mounted mobile platform in unconstrained operating environments and achieves a good effect in actual application scenarios.
Description
Technical Field
The invention belongs to the field of machine vision and intelligent transportation, and particularly relates to a vehicle type identification method based on machine vision and deep learning.
Background
At present, machine vision and deep learning technologies are widely applied to intelligent traffic systems, such as the fields of license plate recognition, traffic flow detection, vehicle violation detection, road vehicle type recognition and the like.
Vehicle identification applies machine vision technology: digital images or videos acquired by a camera serve as input, a deep learning target detection framework identifies the vehicle types in the images, and the result serves as one basis for judging vehicle violations.
However, in the field of intelligent transportation, some image acquisition is performed by a vehicle-mounted mobile platform, so vehicles in the images overlap and occlude each other severely. Vehicles of the same type also appear at widely varying scales in the image, so traditional target detection methods suffer a high error rate.
Disclosure of Invention
The invention aims to reduce the error rate of automobile type identification as much as possible. It provides a method in which detected results selected by judgment conditions are sent into several classifiers for re-identification and prediction, the identification results of the several classifiers are combined into an overall classification, and the classifier result and the target detection result are then comprehensively analysed to obtain the final automobile type identification result. The method can also detect vehicle types in other application settings, for example using automobile images acquired by a camera at a traffic light; it processes road automobile images collected by the mobile vehicle-mounted platform and outputs a final automobile type detection result.
The overall technical solution provided by the invention is as follows:
step (1) automobile image acquisition
Acquiring an automobile image g (x, y) in a roadside or intersection area with a violation phenomenon by using a mobile platform of a vehicle-mounted digital camera, and storing the acquired image at a mobile platform storage end;
step (2) image preprocessing
Preprocess the collected color images with mean filtering to remove noise signals from the images; the formula is f(x, y) = (1/M) Σ_{(i, j) ∈ s} g(i, j),
where f(x, y) represents the image information after mean filtering, g(x, y) represents the original information of the image, M is the number of pixels in the window, and s is the value range of x and y in the window;
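As an illustrative sketch (not part of the patent text; the 3 × 3 window size and edge-replication border handling are assumptions), the mean filtering of step (2) can be written as:

```python
import numpy as np

def mean_filter(g: np.ndarray, k: int = 3) -> np.ndarray:
    """Mean-filter a grayscale image g(x, y) with a k x k window.

    M = k * k is the number of pixels in the window s; border pixels are
    handled by edge replication (an assumption -- the patent does not
    specify border treatment).
    """
    pad = k // 2
    padded = np.pad(g.astype(np.float64), pad, mode="edge")
    out = np.empty(g.shape, dtype=np.float64)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            window = padded[x:x + k, y:y + k]   # region s around (x, y)
            out[x, y] = window.sum() / (k * k)  # f(x, y) = (1/M) * sum g
    return out
```

A larger `k` smooths noise more aggressively at the cost of edge detail.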
Step (3): perform target detection on the preprocessed digital image using the yolov3 network framework and obtain primary detection frames and predicted values:
(a) carrying out size normalization on the preprocessed images, and uniformly converting the acquired images into 416 × 416 size images by adopting an interpolation method; the processing formula is as follows:
where f1(x, y) is the pixel information of the converted image and f(u, v) is the pixel information of the original image; when scaling the image by interpolation, the position in the original image corresponding to each converted pixel is found as follows:
u=x*(srcwidth/dstwidth)
v=y*(srcheight/dstheight)
where srcwidth and srcheight represent the width and height of the original image before conversion, and dstwidth and dstheight represent the width and height of the converted image;
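A minimal sketch of the size normalization in step 3(a), using the coordinate mapping u = x·(srcwidth/dstwidth), v = y·(srcheight/dstheight); nearest-neighbour rounding is an assumption, since the patent only says "an interpolation method":

```python
import numpy as np

def resize_nearest(src: np.ndarray, dst_h: int = 416, dst_w: int = 416) -> np.ndarray:
    """Resize src to dst_h x dst_w using the coordinate mapping of step 3(a).

    For each destination pixel (x, y), the corresponding source position is
        u = x * (src_h / dst_h),  v = y * (src_w / dst_w)
    (rows play the role of height, columns of width). Nearest-neighbour
    rounding of (u, v) is an assumed choice of interpolation.
    """
    src_h, src_w = src.shape[:2]
    out = np.empty((dst_h, dst_w) + src.shape[2:], dtype=src.dtype)
    for x in range(dst_h):
        for y in range(dst_w):
            u = min(int(x * (src_h / dst_h)), src_h - 1)  # source row
            v = min(int(y * (src_w / dst_w)), src_w - 1)  # source column
            out[x, y] = src[u, v]
    return out
```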
(b) send the size-normalized image into the yolov3 convolution network for several convolution and pooling operations; the formulas are Y = f1 ⊗ a3×3 for convolution and Y1 = maxpool_{h×w}(Y) for pooling,
where ⊗ is the convolution operator, Y is the convolution output, a3×3 is the 3 × 3 convolution kernel, Y1 is the maximum pooling layer output, and h and w are the pooling window height and width information;
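The convolution and pooling operations of step 3(b) can be sketched as follows (single-channel, stride-1 "valid" convolution and non-overlapping max pooling are simplifying assumptions; yolov3's actual layers use padding, strides, and many channels):

```python
import numpy as np

def conv2d_valid(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution Y = x (*) a (cross-correlation form, as used
    in deep learning frameworks), single channel, stride 1."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(y: np.ndarray, h: int = 2, w: int = 2) -> np.ndarray:
    """Max pooling Y1 with an h x w window and stride equal to the window."""
    H, W = y.shape
    out = np.empty((H // h, W // w))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = y[i * h:(i + 1) * h, j * w:(j + 1) * w].max()
    return out
```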
(c) perform a logistic regression operation on the feature maps after convolution and pooling to obtain a primary prediction frame and a primary detection frame; the loss expression is as follows:
Loss = Loss_lxy + Loss_lwh + Loss_lcls + Loss_lconf
where Loss_lxy represents the position loss, Loss_lwh the scale loss, Loss_lcls the class loss, and Loss_lconf the confidence loss.
Step (4): judge whether the yolov3 detection frame and classification result need to be sent to the classifier for re-identification
After the primary yolov3 detection result is obtained, whether a detection frame needs to be input into the classifier for re-prediction is judged from the frame's area and confidence against their thresholds; the judgment formula is as follows:
where Y_i is the judgment result for the i-th detected target in the image: 1 means the detection result must enter the classifier for re-detection, and 0 means it is the final output result; yo_area is the prediction frame area and yo_pre is the yolov3 detection confidence; area_th is the prediction frame area threshold and pre_th is the confidence threshold;
If Y_i = 1, the i-th vehicle type detection frame in the image is sent into the classifier for re-identification; otherwise the detection result is output directly;
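Since the judgment formula itself is not reproduced in this text, the following sketch encodes one plausible reading of step (4): a frame is re-examined when it is large (informative) but yolov3's confidence is low. The comparison directions and the threshold values are assumptions:

```python
def needs_reclassification(yo_area: float, yo_pre: float,
                           area_th: float, pre_th: float) -> int:
    """Return Y_i for the i-th detection frame of step (4).

    Assumption: a frame is re-examined (Y_i = 1) when it is large enough
    to classify reliably (yo_area >= area_th) but the yolov3 confidence
    is low (yo_pre <= pre_th); otherwise the yolov3 result is final
    (Y_i = 0).
    """
    return 1 if (yo_area >= area_th and yo_pre <= pre_th) else 0
```

Example: `needs_reclassification(5000, 0.4, area_th=1000, pre_th=0.6)` returns 1 (large frame, low confidence), while a small or confidently detected frame returns 0.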
step (5) determining the judgment result
The detection frames needing re-identification are sent into the three classifiers simultaneously, and the three outputs are compared to obtain the final classifier output; the formula expression is as follows:
where cls_cls represents the final classification result output by the classifiers and cls_pre represents the classifier confidence; cls_cls1, cls_cls2 and cls_cls3 represent the classification results of the three classifiers, and cls_pre1, cls_pre2 and cls_pre3 represent their respective confidences;
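The comparison of the three classifier outputs in step (5) is not spelled out; a common choice, shown here as an assumption, is a majority vote with a fall-back to the most confident classifier:

```python
def ensemble_vote(results: list, confidences: list):
    """Combine three classifier outputs into (cls_cls, cls_pre).

    Assumption: majority vote on the class labels; if all three disagree,
    fall back to the single most confident classifier. The combined
    confidence is the mean confidence of the agreeing classifiers.
    """
    assert len(results) == len(confidences) == 3
    for label in set(results):
        agreeing = [c for r, c in zip(results, confidences) if r == label]
        if len(agreeing) >= 2:                  # at least two classifiers agree
            return label, sum(agreeing) / len(agreeing)
    best = max(range(3), key=lambda i: confidences[i])  # no majority
    return results[best], confidences[best]
```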
After the classifier output is obtained, it is judged jointly with the target detector result whether to remove the detection frame; the system detection result is then refreshed and the next picture is detected. The judgment formula is as follows:
where Y represents the final detection result, yo_cls the yolov3 detection class, cls_pre the classifier classification confidence, and cls_cls the classifier class; 0 means the detection frame is deleted;
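The joint judgment formula is likewise not reproduced; the sketch below encodes one plausible rule consistent with the variable definitions (the `cls_th` threshold and the `"none"` background label are illustrative assumptions, not from the patent):

```python
def final_decision(yo_cls: str, cls_cls: str, cls_pre: float,
                   cls_th: float = 0.5):
    """Joint yolov3/classifier decision of step (5).

    Assumption: if the classifier is confident (cls_pre >= cls_th), its
    label wins; if it confidently flags the crop as background ("none"),
    the frame is deleted (Y = 0); with low classifier confidence the
    yolov3 class is kept.
    """
    if cls_pre >= cls_th:
        return 0 if cls_cls == "none" else cls_cls  # 0 == delete the frame
    return yo_cls
```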
the yolov3 target detector and the classifier jointly classify the types of automobiles in the images, and the output of the whole system is optimized.
The system based on the invention can be divided into three parts: an image acquisition module, an image target detection module and an image classification module. The image acquisition module collects high-quality traffic images using the mobile platform and a 5-megapixel, 23.27 fps MV-CA050-10GM/GC industrial camera; the target detection module performs vehicle target detection on the images with yolov3 and identifies the vehicle type; the image classification module reclassifies the yolov3 detection results and thereby optimizes them.
The working process of the system provided by the invention comprises the following steps:
the mobile violation identification system patrols and walks on public roads at ordinary times, and when workers find that the vehicles violate the regulations, the camera shooting system is started to collect violation evidences. The system stores the pictures and sends the pictures to yolov3 target detection network for vehicle type identification. And the classifier re-identification is carried out on the region of which the recognition accuracy of the yolov3 system is not high. And integrating the recognition results of the two deep learning frames as the final recognition detection result of the region.
Compared with the prior art, the invention has the following advantages and effects:
(1) The invention provides a method combining yolov3 target detection with a multi-classifier joint decision: the vehicle type in each detection frame is identified again by the added classifiers. Compared with the existing approach of using yolov3 target detection alone, this gives a better detection effect, in particular because regions with a low recognition rate can be re-identified. Adding several classifiers after target detection to re-predict the detection frames improves the accuracy and precision of identification, works especially well on larger target regions in the image, and yields accurate information about the actual target detection result.
(2) Compared with traditional target detection methods, the method can correct the detection result by running several classifiers simultaneously on top of the existing target detection framework. It works especially well on detection frames that were identified only by the target detector with low confidence. Applied to industrial detection, the method can reduce identification and detection errors to a certain extent and improve the system's accuracy in identifying road vehicle types.
Drawings
FIG. 1 is a schematic diagram of an overall system of an example project of the present invention;
FIG. 2 is a flow chart of the present invention for utilizing multiple classifiers for re-prediction;
FIG. 3 is a flow chart of a final test result determination according to an embodiment of the present invention;
Detailed Description
The following detailed description, made with reference to the machine-vision-based traffic road vehicle type detection system and the accompanying drawings, describes the technical solution of the implementation example of the invention clearly and completely.
The embodiment of the invention provides a vehicle type detection method based on machine vision which, as shown in figure 1, can be divided into three steps: image acquisition and preprocessing, yolov3 primary target detection, and classifier re-identification. First, images of the traffic road scene are shot using the mobile platform and an industrial camera, and preprocessing such as mean filtering is applied to the collected images. The preprocessed image is sent to the yolov3 target detector to obtain primary detection frames and confidences; whether to send each frame into several classifiers for secondary prediction is then decided by judging the frame area and confidence against their thresholds, yielding a predicted classification and confidence. Finally, the yolov3 detection result and the overall classifier result are combined to jointly output the target detection result.
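The three-stage pipeline described above can be sketched end to end as follows; the call signatures of `yolo` and the `classifiers`, the majority-vote rule, and the `"none"` background label are assumptions for illustration, not the patent's API:

```python
def detect_vehicles(image, yolo, classifiers, area_th, pre_th, cls_th=0.5):
    """End-to-end sketch of the pipeline in figure 1.

    Assumed interfaces: yolo(image) yields (box, yo_area, yo_pre, yo_cls)
    tuples; each classifier, given the crop for `box`, returns a
    (label, confidence) pair.
    """
    final = []
    for box, yo_area, yo_pre, yo_cls in yolo(image):
        if yo_area >= area_th and yo_pre <= pre_th:    # step (4): re-examine
            votes = [clf(box) for clf in classifiers]  # step (5): 3 classifiers
            labels = [v[0] for v in votes]
            winner = max(set(labels), key=labels.count)  # majority vote
            conf = max(c for l, c in votes if l == winner)
            if conf >= cls_th and winner != "none":
                final.append((box, winner))            # classifier label wins
            elif conf < cls_th:
                final.append((box, yo_cls))            # keep yolov3 class
            # else: confidently background -> frame deleted
        else:
            final.append((box, yo_cls))                # yolov3 result is final
    return final
```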
The method for improving the vehicle identification rate provided in the embodiment specifically comprises the following steps:
step (1): automobile image acquisition
An automobile image g(x, y) is collected at roadsides or intersections in areas where violations occur, using the mobile platform and a 5-megapixel, 23.27 fps MV-CA050-10GM/GC industrial camera, and the collected images are saved on the mobile platform terminal.
Step (2) image preprocessing
Preprocess the collected color images with mean filtering to remove noise signals from the images; the formula is f(x, y) = (1/M) Σ_{(i, j) ∈ s} g(i, j),
where f(x, y) represents the image information after mean filtering, g(x, y) represents the original information of the image, M is the number of pixels in the window, and s is the value range of x and y in the window.
Step (3): perform target detection on the preprocessed digital image using the yolov3 network framework and obtain primary detection frames and predicted values:
(a) and carrying out size normalization on the preprocessed images, and uniformly converting the acquired images into 416 × 416 size images by adopting an interpolation method. The processing formula is as follows:
where f1(x, y) is the pixel information of the converted image and f(u, v) is the pixel information of the original image; when scaling the image by interpolation, the position in the original image corresponding to each converted pixel is found as follows:
u=x*(srcwidth/dstwidth)
v=y*(srcheight/dstheight)
where (srcwidth, srcheight) represents the size of the original image before conversion, and (dstwidth, dstheight) represents the size of the converted image.
(b) The size-normalized image is fed into the yolov3 convolution network for several convolution and pooling operations; the formulas are Y = f1 ⊗ a3×3 for convolution and Y1 = maxpool_{h×w}(Y) for pooling,
where ⊗ is the convolution operator, Y is the convolution output, a3×3 is the 3 × 3 convolution kernel, Y1 is the maximum pooling layer output, and h and w are the pooling window height and width information.
(c) Perform a logistic regression operation on the feature maps after convolution and pooling to obtain a primary prediction frame and a primary detection frame; the loss expression is as follows:
Loss = Loss_lxy + Loss_lwh + Loss_lcls + Loss_lconf
where Loss_lxy represents the position loss, Loss_lwh the scale loss, Loss_lcls the class loss, and Loss_lconf the confidence loss.
and (4): judging whether to enter the classifier
After the primary yolov3 detection result is obtained, it is necessary to determine, from the frame's area and confidence against their thresholds, whether the detection frame needs to be input into the classifier for re-prediction, as shown in fig. 2; the judgment formula is as follows:
where Y_i is the judgment result for the i-th detected target in the image: 1 means the detection result enters the classifier for re-detection, and 0 means it is the final output result; yo_area is the prediction frame area, yo_pre is the yolov3 prediction confidence, area_th is the prediction frame area threshold, and pre_th is the confidence threshold.
If Y_i = 1, the i-th vehicle type detection frame in the image is input into the classifier for re-identification; otherwise the detection result is output directly.
And (5): identification of the judgment result
The detection frames needing re-identification are sent into the three classifiers simultaneously, and the three outputs are compared to obtain the final classifier output; the formula expression is as follows:
where cls_cls represents the final classification result output by the classifiers and cls_pre represents the classifier confidence; cls_cls1, cls_cls2 and cls_cls3 represent the classification results of the three classifiers, and cls_pre1, cls_pre2 and cls_pre3 represent their respective confidences.
Whether the detection frame is removed is judged according to the result of the classifier and the result of yolov3 prediction, as shown in fig. 3, and the system detection result is refreshed and the next picture detection is performed, wherein the judgment formula is as follows:
where Y represents the final detection result, yo_pre the yolov3 detection confidence, yo_area the yolov3 detection frame area, yo_cls the yolov3 detection class, cls_pre the classifier classification confidence, and cls_cls the classifier class; 0 means the detection frame is deleted.
The yolov3 target detector and the classifier jointly classify the types of automobiles in the images, and the output of the whole system is optimized.
Claims (1)
1. A vehicle type identification method based on machine vision and deep learning is characterized by comprising the following steps:
step (1) automobile image acquisition
Acquiring an automobile image g (x, y) in a roadside or intersection area with a violation phenomenon by using a mobile platform of a vehicle-mounted digital camera, and storing the acquired image at a mobile platform storage end;
step (2) image preprocessing
preprocess the collected color images with mean filtering to remove noise signals from the images; the formula is f(x, y) = (1/M) Σ_{(i, j) ∈ s} g(i, j),
where f(x, y) represents the image information after mean filtering, g(x, y) represents the original information of the image, M is the number of pixels in the window, and s is the value range of x and y in the window;
step (3): perform target detection on the preprocessed digital image using the yolov3 network framework and obtain primary detection frames and predicted values:
(a) carrying out size normalization on the preprocessed images, and uniformly converting the acquired images into 416 × 416 size images by adopting an interpolation method; the processing formula is as follows:
where f1(x, y) is the pixel information of the converted image and f(u, v) is the pixel information of the original image; when scaling the image by interpolation, the position in the original image corresponding to each converted pixel is found as follows:
u=x*(srcwidth/dstwidth)
v=y*(srcheight/dstheight)
where srcwidth and srcheight represent the width and height of the original image before conversion, and dstwidth and dstheight represent the width and height of the converted image;
(b) send the size-normalized image into the yolov3 convolution network for several convolution and pooling operations; the formulas are Y = f1 ⊗ a3×3 for convolution and Y1 = maxpool_{h×w}(Y) for pooling,
where ⊗ is the convolution operator, Y is the convolution output, a3×3 is the 3 × 3 convolution kernel, Y1 is the maximum pooling layer output, and h and w are the pooling window height and width information;
(c) perform a logistic regression operation on the feature maps after convolution and pooling to obtain a primary prediction frame and a primary detection frame; the loss expression is as follows:
Loss = Loss_lxy + Loss_lwh + Loss_lcls + Loss_lconf
where Loss_lxy represents the position loss, Loss_lwh the scale loss, Loss_lcls the class loss, and Loss_lconf the confidence loss;
step (4): judge whether the yolov3 detection frame and classification result need to be sent to the classifier for re-identification
After the primary yolov3 detection result is obtained, whether a detection frame needs to be input into the classifier for re-prediction is judged from the frame's area and confidence against their thresholds; the judgment formula is as follows:
where Y_i is the judgment result for the i-th detected target in the image: 1 means the detection result must enter the classifier for re-detection, and 0 means it is the final output result; yo_area is the prediction frame area and yo_pre is the yolov3 detection confidence; area_th is the prediction frame area threshold and pre_th is the confidence threshold;
if Y_i = 1, the i-th vehicle type detection frame in the image is sent into the classifier for re-identification; otherwise the detection result is output directly;
step (5) determining the judgment result
the detection frames needing re-identification are sent into the three classifiers simultaneously, and the three outputs are compared to obtain the final classifier output; the formula expression is as follows:
where cls_cls represents the final classification result output by the classifiers and cls_pre represents the classifier confidence; cls_cls1, cls_cls2 and cls_cls3 represent the classification results of the three classifiers, and cls_pre1, cls_pre2 and cls_pre3 represent their respective confidences;
after the classifier output is obtained, it is judged jointly with the target detector result whether to remove the detection frame; the system detection result is then refreshed and the next picture is detected. The judgment formula is as follows:
where Y represents the final detection result, yo_cls the yolov3 detection class, cls_pre the classifier classification confidence, and cls_cls the classifier class; 0 means the detection frame is deleted;
the yolov3 target detector and the classifier jointly classify the types of automobiles in the images, and the output of the whole system is optimized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010204312.XA CN111507196A (en) | 2020-03-21 | 2020-03-21 | Vehicle type identification method based on machine vision and deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010204312.XA CN111507196A (en) | 2020-03-21 | 2020-03-21 | Vehicle type identification method based on machine vision and deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111507196A true CN111507196A (en) | 2020-08-07 |
Family
ID=71874150
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010204312.XA Pending CN111507196A (en) | 2020-03-21 | 2020-03-21 | Vehicle type identification method based on machine vision and deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111507196A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112686209A (en) * | 2021-01-25 | 2021-04-20 | 深圳市艾为智能有限公司 | Vehicle rear blind area monitoring method based on wheel identification |
CN112818814A (en) * | 2021-01-27 | 2021-05-18 | 北京市商汤科技开发有限公司 | Intrusion detection method and device, electronic equipment and computer readable storage medium |
CN113256568A (en) * | 2021-05-09 | 2021-08-13 | 长沙长泰智能装备有限公司 | Machine vision plate counting general system and method based on deep learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063594A (en) * | 2018-07-13 | 2018-12-21 | 吉林大学 | Remote sensing images fast target detection method based on YOLOv2 |
CN109635666A (en) * | 2018-11-16 | 2019-04-16 | 南京航空航天大学 | A kind of image object rapid detection method based on deep learning |
CN110009023A (en) * | 2019-03-26 | 2019-07-12 | 杭州电子科技大学上虞科学与工程研究院有限公司 | Wagon flow statistical method in wisdom traffic |
US20190377944A1 (en) * | 2018-06-08 | 2019-12-12 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and device for image processing, computer readable storage medium, and electronic device |
- 2020-03-21: application CN202010204312.XA filed in China; published as CN111507196A, status pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190377944A1 (en) * | 2018-06-08 | 2019-12-12 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and device for image processing, computer readable storage medium, and electronic device |
CN109063594A (en) * | 2018-07-13 | 2018-12-21 | 吉林大学 | Remote sensing images fast target detection method based on YOLOv2 |
CN109635666A (en) * | 2018-11-16 | 2019-04-16 | 南京航空航天大学 | A kind of image object rapid detection method based on deep learning |
CN110009023A (en) * | 2019-03-26 | 2019-07-12 | 杭州电子科技大学上虞科学与工程研究院有限公司 | Wagon flow statistical method in wisdom traffic |
Non-Patent Citations (2)
Title |
---|
吕岳 (Lü Yue): "Voting rules for multi-classifier combination", Journal of Shanghai Jiao Tong University, vol. 34, no. 5, pages 680-681
李奇 (Li Qi): "Applied research on one-stage object detection algorithms based on deep learning", pages 138-382
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112686209A (en) * | 2021-01-25 | 2021-04-20 | 深圳市艾为智能有限公司 | Vehicle rear blind area monitoring method based on wheel identification |
CN112818814A (en) * | 2021-01-27 | 2021-05-18 | 北京市商汤科技开发有限公司 | Intrusion detection method and device, electronic equipment and computer readable storage medium |
CN113256568A (en) * | 2021-05-09 | 2021-08-13 | 长沙长泰智能装备有限公司 | Machine vision plate counting general system and method based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110969160B (en) | License plate image correction and recognition method and system based on deep learning | |
CN109101924B (en) | Machine learning-based road traffic sign identification method | |
Abdullah et al. | YOLO-based three-stage network for Bangla license plate recognition in Dhaka metropolitan city | |
CN111368687A (en) | Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation | |
CN104778444B (en) | The appearance features analysis method of vehicle image under road scene | |
CN105809184B (en) | Method for real-time vehicle identification and tracking and parking space occupation judgment suitable for gas station | |
CN111507196A (en) | Vehicle type identification method based on machine vision and deep learning | |
CN106652468A (en) | Device and method for detection of violation of front vehicle and early warning of violation of vehicle on road | |
CN111340151B (en) | Weather phenomenon recognition system and method for assisting automatic driving of vehicle | |
CN113034378B (en) | Method for distinguishing electric automobile from fuel automobile | |
CN114299002A (en) | Intelligent detection system and method for abnormal road surface throwing behavior | |
CN112651293B (en) | Video detection method for road illegal spreading event | |
CN109948643A (en) | A kind of type of vehicle classification method based on deep layer network integration model | |
CN112699267B (en) | Vehicle type recognition method | |
CN111292432A (en) | Vehicle charging type distinguishing method and device based on vehicle type recognition and wheel axle detection | |
Dubuisson et al. | Object contour extraction using color and motion | |
KR100942409B1 (en) | Method for detecting a moving vehicle at a high speed | |
CN114359196A (en) | Fog detection method and system | |
CN116977995A (en) | Vehicle-mounted front license plate recognition method and system | |
CN116152758A (en) | Intelligent real-time accident detection and vehicle tracking method | |
CN113723258B (en) | Dangerous goods vehicle image recognition method and related equipment thereof | |
Yao et al. | Fuzzy c-means image segmentation approach for axle-based vehicle classification | |
CN114882469A (en) | Traffic sign detection method and system based on DL-SSD model | |
CN111401128A (en) | Method for improving vehicle recognition rate | |
CN113850112A (en) | Road condition identification method and system based on twin neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200807 |