CN110909666A - Night vehicle detection method based on improved YOLOv3 convolutional neural network

Night vehicle detection method based on improved YOLOv3 convolutional neural network

Info

Publication number
CN110909666A
CN110909666A
Authority
CN
China
Prior art keywords
image
convolutional neural
neural network
network
improved
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911143330.5A
Other languages
Chinese (zh)
Other versions
CN110909666B (en)
Inventor
乔瑞萍
张连超
党祺玮
翟沛源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN201911143330.5A
Publication of CN110909666A
Application granted
Publication of CN110909666B
Active legal status
Anticipated expiration legal status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Abstract

The invention discloses a night vehicle detection method based on an improved YOLOv3 convolutional neural network, and belongs to the field of driving assistance. The YOLOv3 convolutional neural network is made more accurate for small-target detection by increasing the input image size, increasing the grid division density, and performing multi-scale training, so that tail lamps, which occupy only a small area, can be accurately identified. Meanwhile, to address the complex light interference of the night driving environment, an improved SE-Block module, a channel attention mechanism, is added into the darknet53 feature extraction network; it applies weighting to information of different importance, strengthening important information and suppressing unimportant information. Since no open-source night vehicle data set exists, a new night vehicle data set is constructed, filling the gap in this respect. Through these improvements to the YOLOv3 convolutional neural network, the method is better suited to night vehicle identification and can identify night vehicles more accurately and quickly in environments with small targets and complex light interference.

Description

Night vehicle detection method based on improved YOLOv3 convolutional neural network
[ technical field ]
The invention belongs to the field of driving assistance, relates to a night vehicle detection method, and particularly to a night vehicle detection method based on an improved YOLOv3 convolutional neural network.
[ background of the invention ]
With the continuous development of deep learning in the field of target detection, real-time target detection using convolutional neural networks has become a reality. Convolutional neural networks deliver good speed and accuracy for target detection in a specific environment, and compared with traditional target recognition and machine learning methods they hold irreplaceable advantages in both respects. However, although a convolutional neural network resolves the nuances between different targets well, a single network structure classifies targets poorly across different environments; that is, a network model built for daytime vehicle recognition is not ideal for nighttime vehicle recognition. Therefore, the convolutional neural network is improved here according to the characteristics of the night driving environment.
In deep learning, the convolutional neural networks commonly used for image target detection include Fast-RCNN, the SSD series, and the YOLO series. The YOLOv3 convolutional neural network balances target recognition accuracy against recognition speed well and is therefore widely used in target detection. Still, identifying nighttime vehicles with the YOLOv3 convolutional neural network raises several problems. First, existing open-source data sets are mostly daytime vehicle data sets, and a network model trained on them cannot effectively identify vehicles at night. Second, the overall features of a vehicle are not obvious at night: insufficient light causes the vehicle's details and edge features to be lost, so identification relies on the tail lamps, whose features are obvious; but the tail lamp area is small, and the YOLOv3 convolutional neural network handles small-target detection poorly. Third, light backgrounds such as advertising lights and street lamps in the night environment interfere with nighttime vehicle identification.
[ summary of the invention ]
The invention aims to overcome the above defects of the prior art and provide a night vehicle detection method based on an improved YOLOv3 convolutional neural network. Aiming at the two problems of small-target detection and a complex light background, a network structure suitable for night vehicle detection is designed: first, the YOLOv3 convolutional neural network improves its small-target detection accuracy by increasing the input image size, increasing the grid division density, and performing multi-scale training; second, by adding an improved SE-Block module, the YOLOv3 convolutional neural network gains an attention mechanism that weights important and unimportant information differently, which improves the target detection speed and accuracy of the YOLOv3 convolutional neural network under a complex light background.
To achieve this purpose, the invention adopts the following technical scheme:
a night vehicle detection method based on an improved YOLOv3 convolutional neural network comprises the following steps:
Step 1: collect an image, perform equal-proportion transformation on the input image, and pad it with black pixels into a square image;
Step 2: randomly scale the padded square image by 1-3 units and perform multi-scale training;
Step 3: feed the randomly scaled square image into the darknet53 feature extraction network for feature extraction;
Step 4: perform target positioning and identification on the feature map obtained by the feature extraction in step 3 through a full convolution network;
Step 5: screen out prediction boxes whose intersection-over-union (IoU) exceeds 0.5 using non-maximum suppression, filtering out redundant prediction boxes of the same night vehicle.
The vehicle detection method of the invention is further improved in that:
in the step 1, the input image is subjected to equal-scale transformation, and then a square image is filled by using black pixels, specifically:
step 1-1, input image is subjected to equal-scale transformation
Let the length and width of the original image be h1、w1The long side l of the original imagemax=max(h1,w1) After the equal proportion transformation, the length and the width of the image are respectivelyh2、w2
Length h of image after equal proportion transformation2Comprises the following steps:
Figure BDA0002281528710000031
image width w after equal proportion transformation2Comprises the following steps:
Figure BDA0002281528710000032
step 1-2, filling the image after equal proportion transformation
Establishing a coordinate system by taking the central point of the image after equal proportion transformation as a coordinate origin and taking the length of one pixel as a unit length; f (x, y) represents the pixel value of the (x, y) point pixel in the coordinate system;
Figure BDA0002281528710000033
filling to obtain a square image with 832 pixel sides as YOLOv3 convolution of the input image of the neural network.
In step 2, the image undergoes multi-scale transformation: the side length of the square image padded in step 1 is randomly scaled by 1-3 unit lengths using nearest-neighbor interpolation, each unit length being 64 pixels, finally yielding a square image with a side length of 832 ± n × 64, where 1 ≤ n ≤ 3 and n denotes the number of unit lengths by which the side length of the picture is randomly scaled.
In step 3, the darknet53 feature extraction network adds an improved SE-Block module into the residual mapping of the residual network module; the improved SE-Block module operates as follows:
Step 3-1, a feature map of size c × h × w is obtained after convolution, where c denotes the number of feature channels, h the height of the feature map, and w the width of the feature map;
Step 3-2, channel compression is performed on each feature channel through the global pooling layer; $F_n$ denotes the real number obtained after the nth feature channel passes through the global pooling layer, and $u_n(i, j)$ denotes the feature value at coordinate point $(i, j)$ in the nth feature channel; the global pooling operation is:
$$F_n = \frac{1}{h \times w} \sum_{i=1}^{h} \sum_{j=1}^{w} u_n(i, j)$$
Step 3-3, the c real numbers $F_1, F_2, \ldots, F_c$ generated by channel compression are channel-activated through two fully-connected layers;
the fully-connected layer $c_1$ has a network size of $1 \times \frac{c}{r}$, with $r$ the channel-reduction ratio; $\delta$ denotes the ReLU excitation layer, $W_1$ the weight matrix of the first fully-connected layer, and $S_1$ the weight obtained after channel activation through the first fully-connected layer:
$$S_1 = \delta\left(W_1 \cdot [F_1, F_2, \ldots, F_c]^{T}\right)$$
the fully-connected layer $c_2$ has a network size of $1 \times c$; $\delta$ denotes the ReLU excitation layer, $W_2$ the weight matrix of the second fully-connected layer, and $S_2$ the weight obtained after channel activation through the second fully-connected layer:
$$S_2 = \delta\left(W_2 \cdot S_1\right)$$
Step 3-4, the weight $S_2$ obtained after channel activation is normalized into the result $\sigma$ through a Sigmoid function:
$$\sigma = \mathrm{Sigmoid}(S_2) = \frac{1}{1 + e^{-S_2}}$$
Step 3-5, 0.5 is added to the normalized result $\sigma$ to obtain the final weight $\tilde{S}$:
$$\tilde{S} = \sigma + 0.5$$
Step 3-6, let $U$ be the feature map obtained from the residual mapping of the residual network and $\tilde{S}$ the channel weight obtained by the improved SE-Block module; the feature map obtained after adding the improved SE-Block module to the residual mapping is:
$$\tilde{U} = \tilde{S} \otimes U$$
where $\otimes$ denotes channel-wise multiplication.
The target positioning and identification in step 4 is specifically:
Step 4-1, after five times of downsampling, an input image with a size of 832 × 832 becomes a first prediction layer feature map with a size of 104 × 104;
Step 4-2, the 104 × 104 feature map is passed through a full convolution layer with 18 channels, and each feature point generates three prediction boxes, each containing six data: the length w of the prediction box, the width h of the prediction box, the coordinates (x, y) of its center point, its confidence, and its class probability.
Compared with the prior art, the invention has the following beneficial effects:
the invention increases the size of an input image, increases the grid division density and performs multi-scale training on the basis of the original YOLOv3 convolutional neural network. The YOLOv3 convolutional neural network is more accurate for small target detection; secondly, an improved SE-Block module is provided and added into a feature extraction network of YOLOv3, so that the network can independently learn important information and background information. And (4) enhancing important information and suppressing background information. Therefore, the target detection speed and the target detection accuracy of the YOLOv3 under the interference of complex lamplight are improved.
[ description of the drawings ]
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of image filling according to the present invention;
FIG. 3 is a schematic diagram of the structure of an improved SE-Block module according to the present invention;
FIG. 4 is a diagram of a vehicle at night;
FIG. 5 is a night vehicle tail lamp recognition diagram.
[ detailed description ]
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments, and are not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Various structural schematics according to the disclosed embodiments of the invention are shown in the drawings. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of various regions, layers and their relative sizes and positional relationships shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, according to actual needs.
In the context of the present disclosure, when a layer/element is referred to as being "on" another layer/element, it can be directly on the other layer/element or intervening layers/elements may be present. In addition, if a layer/element is "on" another layer/element in one orientation, then that layer/element may be "under" the other layer/element when the orientation is reversed.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1, the night vehicle detection method based on the improved YOLOv3 convolutional neural network of the invention comprises the following steps:
in the training stage, four steps of image preprocessing, image multi-scale transformation, feature extraction and target positioning identification are required to be carried out to train and obtain a network model; and in the testing stage, image preprocessing, image multi-scale transformation, feature extraction, target positioning identification and redundant prediction frame filtering are carried out on the finally obtained network model.
Step 1, image preprocessing:
As shown in FIG. 2, the input image is subjected to equal-proportion transformation and then padded with black pixels into a square image, specifically:
Step 1-1, equal-proportion transformation of the input image
Let the length and width of the original image be $h_1$ and $w_1$, and the long side of the original image be $l_{max} = \max(h_1, w_1)$; after the equal-proportion transformation the length and width of the image are $h_2$ and $w_2$ respectively.
The image length $h_2$ after the equal-proportion transformation is:
$$h_2 = \frac{832}{l_{max}} \times h_1$$
The image width $w_2$ after the equal-proportion transformation is:
$$w_2 = \frac{832}{l_{max}} \times w_1$$
Step 1-2, padding the image after the equal-proportion transformation
A coordinate system is established with the center point of the transformed image as the coordinate origin and the length of one pixel as the unit length; $f(x, y)$ denotes the pixel value at point $(x, y)$ in this coordinate system, and $g(x, y)$ the pixel value of the transformed image at that point:
$$f(x, y) = \begin{cases} g(x, y), & |x| \le \frac{w_2}{2} \text{ and } |y| \le \frac{h_2}{2} \\ 0, & \text{otherwise} \end{cases}$$
The padding yields a square image with a side length of 832 pixels as the input image of the YOLOv3 convolutional neural network.
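The patent itself contains no source code; purely for illustration, a minimal Python sketch of this preprocessing step might look as follows. The function name, the OpenCV image input, and the centered placement of the resized image are assumptions; the patent only fixes the 832-pixel square output and the black padding.

```python
import cv2
import numpy as np

def letterbox_832(image, target=832):
    """Equal-proportion resize so the long side becomes `target`,
    then pad with black pixels into a `target` x `target` square."""
    h1, w1 = image.shape[:2]
    l_max = max(h1, w1)                      # long side of the original image
    h2 = int(h1 * target / l_max)            # h2 = (832 / l_max) * h1
    w2 = int(w1 * target / l_max)            # w2 = (832 / l_max) * w1
    resized = cv2.resize(image, (w2, h2))    # cv2.resize expects (width, height)
    canvas = np.zeros((target, target, 3), dtype=image.dtype)  # black square
    top, left = (target - h2) // 2, (target - w2) // 2         # center the image
    canvas[top:top + h2, left:left + w2] = resized
    return canvas
```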
Step 2, image multi-scale transformation
The square image obtained in step 1 is multi-scale transformed, specifically:
The square image with a side length of 832 pixels is randomly scaled by 1-3 unit lengths using nearest-neighbor interpolation, each unit length being 64 pixels, finally yielding a square image with a side length of 832 ± n × 64, where 1 ≤ n ≤ 3 and n denotes the number of unit lengths by which the side length of the picture is randomly scaled.
Step 3, feature extraction
Feature extraction is performed on the square image obtained in step 2 through the improved darknet53 feature extraction network, as shown in FIG. 3, specifically as follows:
Step 3-1, after the picture passes through the last residual module, a feature map of size c × h × w is obtained, where c denotes the number of feature channels, h the height of the feature map, and w the width of the feature map.
Step 3-2, channel compression is performed on each feature channel through the global pooling layer; $F_n$ denotes the real number obtained after the nth feature channel passes through the global pooling layer, and $u_n(i, j)$ denotes the feature value at coordinate point $(i, j)$ in the nth feature channel; the global pooling operation is:
$$F_n = \frac{1}{h \times w} \sum_{i=1}^{h} \sum_{j=1}^{w} u_n(i, j)$$
Step 3-3, the c real numbers $F_1, F_2, \ldots, F_c$ generated by channel compression are channel-activated through two fully-connected layers.
The fully-connected layer $c_1$ has a network size of $1 \times \frac{c}{r}$, with $r$ the channel-reduction ratio; $\delta$ denotes the ReLU excitation layer, $W_1$ the weight matrix of the first fully-connected layer, and $S_1$ the weight obtained after channel activation through the first fully-connected layer:
$$S_1 = \delta\left(W_1 \cdot [F_1, F_2, \ldots, F_c]^{T}\right)$$
The fully-connected layer $c_2$ has a network size of $1 \times c$; $\delta$ denotes the ReLU excitation layer, $W_2$ the weight matrix of the second fully-connected layer, and $S_2$ the weight obtained after channel activation through the second fully-connected layer:
$$S_2 = \delta\left(W_2 \cdot S_1\right)$$
Step 3-4, the weight $S_2$ obtained after channel activation is normalized into the result $\sigma$ through a Sigmoid function:
$$\sigma = \mathrm{Sigmoid}(S_2) = \frac{1}{1 + e^{-S_2}}$$
Step 3-5, 0.5 is added to the normalized result $\sigma$ to obtain the final weight $\tilde{S}$:
$$\tilde{S} = \sigma + 0.5$$
Step 3-6, let $U$ be the feature map obtained from the residual mapping of the residual network and $\tilde{S}$ the channel weight obtained by the improved SE-Block module; the feature map obtained after adding the improved SE-Block module to the residual mapping is:
$$\tilde{U} = \tilde{S} \otimes U$$
where $\otimes$ denotes channel-wise multiplication.
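In PyTorch terms, steps 3-1 to 3-6 could be sketched as the module below. The reduction ratio r = 16 is an assumption carried over from the original SE-Net paper, since the patent does not state the size of the first fully-connected layer; the class name is illustrative.

```python
import torch
import torch.nn as nn

class ImprovedSEBlock(nn.Module):
    """Sketch of the modified SE-Block: squeeze by global average pooling,
    excite through two ReLU fully-connected layers, normalize with Sigmoid,
    then add 0.5 so each channel weight lies in (0.5, 1.5)."""
    def __init__(self, channels, r=16):      # r: assumed reduction ratio
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # step 3-2: channel compression
        self.fc = nn.Sequential(             # step 3-3: channel activation
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.ReLU(inplace=True),
        )
        self.sigmoid = nn.Sigmoid()          # step 3-4: normalization

    def forward(self, u):                    # u: residual mapping, shape (b, c, h, w)
        b, c, _, _ = u.shape
        s = self.pool(u).view(b, c)          # F_1 ... F_c
        s = self.sigmoid(self.fc(s)) + 0.5   # step 3-5: final weight in (0.5, 1.5)
        return u * s.view(b, c, 1, 1)        # step 3-6: channel-wise weighting
```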
Step 4, target positioning and identification
Step 4-1, after five times of downsampling, an input image with a size of 832 × 832 becomes a first prediction layer feature map with a size of 104 × 104.
Step 4-2, the 104 × 104 feature map is passed through a full convolution layer with 18 channels, and each feature point generates three prediction boxes, each containing six data: the length w of the prediction box, the width h of the prediction box, the coordinates (x, y) of its center point, its confidence, and its class probability.
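For illustration, the 18-channel head output can be reshaped into three six-value prediction boxes per feature point as sketched below; the memory layout of the six values is an assumption, since the patent only names the quantities.

```python
import torch

def split_head(head_out: torch.Tensor) -> torch.Tensor:
    """Reshape a (batch, 18, gh, gw) full-convolution output into
    (batch, 3, gh, gw, 6): three boxes per cell, each carrying
    (x, y, w, h, confidence, class probability)."""
    b, c, gh, gw = head_out.shape
    assert c == 18, "3 prediction boxes x 6 values per box"
    return head_out.view(b, 3, 6, gh, gw).permute(0, 1, 3, 4, 2).contiguous()
```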
Step 5, filtering redundant prediction boxes
Step 5-1, select the prediction box with the highest confidence.
Step 5-2, calculate the intersection-over-union (IoU) between each remaining prediction box and the highest-confidence prediction box; if the IoU of a prediction box with the highest-confidence box is greater than the threshold, delete that prediction box.
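Steps 5-1 and 5-2 amount to greedy non-maximum suppression; a self-contained sketch follows, using the 0.5 IoU threshold from step 5 of the method. The (x1, y1, x2, y2) box format is an assumption.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def nms(boxes, scores, thresh=0.5):
    """Step 5-1: take the highest-confidence box; step 5-2: drop every
    remaining box whose IoU with it exceeds `thresh`; repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep
```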
Through the above steps, vehicles at night can be identified and located. FIG. 4 shows a picture captured by a driving recorder at night. First, FIG. 4 is equal-proportion transformed and padded according to the scheme of FIG. 2 to obtain a square picture with a side length of 832 pixels; then the square picture is multi-scale transformed as in step 2; the transformed picture is fed into the improved darknet53 feature extraction network for feature extraction; finally, the tail lamp targets are located and identified through steps 4 and 5. FIG. 5 shows the positioning effect of night vehicle tail lamp recognition, where the tail lamps of the same vehicle are framed by prediction boxes bearing the "light" label.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (5)

1. A night vehicle detection method based on an improved YOLOv3 convolutional neural network, characterized by comprising the following steps:
Step 1: collect an image, perform equal-proportion transformation on the input image, and pad it with black pixels into a square image;
Step 2: randomly scale the padded square image by 1-3 units and perform multi-scale training;
Step 3: feed the randomly scaled square image into the darknet53 feature extraction network for feature extraction;
Step 4: perform target positioning and identification on the feature map obtained by the feature extraction in step 3 through a full convolution network;
Step 5: screen out prediction boxes whose intersection-over-union (IoU) exceeds 0.5 using non-maximum suppression, filtering out redundant prediction boxes of the same night vehicle.
2. The night vehicle detection method based on the improved YOLOv3 convolutional neural network as claimed in claim 1, wherein in step 1 the input image is subjected to equal-proportion transformation and then padded with black pixels into a square image, specifically:
Step 1-1, equal-proportion transformation of the input image
Let the length and width of the original image be $h_1$ and $w_1$, and the long side of the original image be $l_{max} = \max(h_1, w_1)$; after the equal-proportion transformation the length and width of the image are $h_2$ and $w_2$ respectively.
The image length $h_2$ after the equal-proportion transformation is:
$$h_2 = \frac{832}{l_{max}} \times h_1$$
The image width $w_2$ after the equal-proportion transformation is:
$$w_2 = \frac{832}{l_{max}} \times w_1$$
Step 1-2, padding the image after the equal-proportion transformation
A coordinate system is established with the center point of the transformed image as the coordinate origin and the length of one pixel as the unit length; $f(x, y)$ denotes the pixel value at point $(x, y)$ in this coordinate system, and $g(x, y)$ the pixel value of the transformed image at that point:
$$f(x, y) = \begin{cases} g(x, y), & |x| \le \frac{w_2}{2} \text{ and } |y| \le \frac{h_2}{2} \\ 0, & \text{otherwise} \end{cases}$$
The padding yields a square image with a side length of 832 pixels as the input image of the YOLOv3 convolutional neural network.
3. The night vehicle detection method based on the improved YOLOv3 convolutional neural network as claimed in claim 1, wherein in step 2 the image undergoes multi-scale transformation: the side length of the square image padded in step 1 is randomly scaled by 1-3 unit lengths using nearest-neighbor interpolation, each unit length being 64 pixels, finally yielding a square image with a side length of 832 ± n × 64, where 1 ≤ n ≤ 3 and n denotes the number of unit lengths by which the side length of the picture is randomly scaled.
4. The night vehicle detection method based on the improved YOLOv3 convolutional neural network as claimed in claim 1, wherein the darknet53 feature extraction network in step 3 adds an improved SE-Block module into the residual mapping of the residual network module; the improved SE-Block module operates as follows:
Step 3-1, a feature map of size c × h × w is obtained after convolution, where c denotes the number of feature channels, h the height of the feature map, and w the width of the feature map;
Step 3-2, channel compression is performed on each feature channel through the global pooling layer; $F_n$ denotes the real number obtained after the nth feature channel passes through the global pooling layer, and $u_n(i, j)$ denotes the feature value at coordinate point $(i, j)$ in the nth feature channel; the global pooling operation is:
$$F_n = \frac{1}{h \times w} \sum_{i=1}^{h} \sum_{j=1}^{w} u_n(i, j)$$
Step 3-3, the c real numbers $F_1, F_2, \ldots, F_c$ generated by channel compression are channel-activated through two fully-connected layers;
the fully-connected layer $c_1$ has a network size of $1 \times \frac{c}{r}$, with $r$ the channel-reduction ratio; $\delta$ denotes the ReLU excitation layer, $W_1$ the weight matrix of the first fully-connected layer, and $S_1$ the weight obtained after channel activation through the first fully-connected layer:
$$S_1 = \delta\left(W_1 \cdot [F_1, F_2, \ldots, F_c]^{T}\right)$$
the fully-connected layer $c_2$ has a network size of $1 \times c$; $\delta$ denotes the ReLU excitation layer, $W_2$ the weight matrix of the second fully-connected layer, and $S_2$ the weight obtained after channel activation through the second fully-connected layer:
$$S_2 = \delta\left(W_2 \cdot S_1\right)$$
Step 3-4, the weight $S_2$ obtained after channel activation is normalized into the result $\sigma$ through a Sigmoid function:
$$\sigma = \mathrm{Sigmoid}(S_2) = \frac{1}{1 + e^{-S_2}}$$
Step 3-5, 0.5 is added to the normalized result $\sigma$ to obtain the final weight $\tilde{S}$:
$$\tilde{S} = \sigma + 0.5$$
Step 3-6, let $U$ be the feature map obtained from the residual mapping of the residual network and $\tilde{S}$ the channel weight obtained by the improved SE-Block module; the feature map obtained after adding the improved SE-Block module to the residual mapping is:
$$\tilde{U} = \tilde{S} \otimes U$$
where $\otimes$ denotes channel-wise multiplication.
5. The night vehicle detection method based on the improved YOLOv3 convolutional neural network as claimed in claim 1, wherein the target positioning and identification in step 4 is specifically:
Step 4-1, after five times of downsampling, an input image with a size of 832 × 832 becomes a first prediction layer feature map with a size of 104 × 104;
Step 4-2, the 104 × 104 feature map is passed through a full convolution layer with 18 channels, and each feature point generates three prediction boxes, each containing six data: the length w of the prediction box, the width h of the prediction box, the coordinates (x, y) of its center point, its confidence, and its class probability.
CN201911143330.5A 2019-11-20 2019-11-20 Night vehicle detection method based on improved YOLOv3 convolutional neural network Active CN110909666B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911143330.5A CN110909666B (en) 2019-11-20 2019-11-20 Night vehicle detection method based on improved YOLOv3 convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911143330.5A CN110909666B (en) 2019-11-20 2019-11-20 Night vehicle detection method based on improved YOLOv3 convolutional neural network

Publications (2)

Publication Number Publication Date
CN110909666A true CN110909666A (en) 2020-03-24
CN110909666B CN110909666B (en) 2022-10-25

Family

ID=69817968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911143330.5A Active CN110909666B (en) 2019-11-20 2019-11-20 Night vehicle detection method based on improved YOLOv3 convolutional neural network

Country Status (1)

Country Link
CN (1) CN110909666B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180211121A1 (en) * 2017-01-25 2018-07-26 Ford Global Technologies, Llc Detecting Vehicles In Low Light Conditions
CN107316010A (en) * 2017-06-13 2017-11-03 武汉理工大学 A kind of method for recognizing preceding vehicle tail lights and judging its state
CN109214399A (en) * 2018-10-12 2019-01-15 清华大学深圳研究生院 A kind of improvement YOLOV3 Target Recognition Algorithms being embedded in SENet structure

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jie Hu et al.: "Squeeze-and-Excitation Networks", arXiv [cs.CV] *
徐诚极 et al.: "Attention-YOLO: YOLO Detection Algorithm Introducing the Attention Mechanism", Computer Engineering and Applications *
鞠默然 et al.: "Improved YOLO V3 Algorithm and Its Application in Small Target Detection", Acta Optica Sinica *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI723823B (en) * 2020-03-30 2021-04-01 聚晶半導體股份有限公司 Object detection device and object detection method based on neural network
CN111695448A (en) * 2020-05-27 2020-09-22 东南大学 Roadside vehicle identification method based on visual sensor
CN111695448B (en) * 2020-05-27 2022-06-17 东南大学 Roadside vehicle identification method based on visual sensor
CN111815573A (en) * 2020-06-17 2020-10-23 科大智能物联技术有限公司 Coupling outer wall detection method and system based on deep learning
CN112132031A (en) * 2020-09-23 2020-12-25 平安国际智慧城市科技股份有限公司 Vehicle money identification method and device, electronic equipment and storage medium
CN112132031B (en) * 2020-09-23 2024-04-16 平安国际智慧城市科技股份有限公司 Vehicle style identification method and device, electronic equipment and storage medium
CN112257786A (en) * 2020-10-23 2021-01-22 南京大量数控科技有限公司 Feature detection method based on combination of convolutional neural network and attention mechanism
CN112464806A (en) * 2020-11-27 2021-03-09 山东交通学院 Low-illumination vehicle detection and identification method and system based on artificial intelligence
CN112418345A (en) * 2020-12-07 2021-02-26 苏州小阳软件科技有限公司 Method and device for quickly identifying fine-grained small target
CN112418345B (en) * 2020-12-07 2024-02-23 深圳小阳软件有限公司 Method and device for quickly identifying small targets with fine granularity
CN112507929B (en) * 2020-12-16 2022-05-13 武汉理工大学 Vehicle body spot welding slag accurate detection method based on improved YOLOv3 network
CN112507929A (en) * 2020-12-16 2021-03-16 武汉理工大学 Vehicle body spot welding slag accurate detection method based on improved YOLOv3 network
CN112580536A (en) * 2020-12-23 2021-03-30 深圳市捷顺科技实业股份有限公司 High-order video vehicle and license plate detection method and device
CN112418358A (en) * 2021-01-14 2021-02-26 苏州博宇鑫交通科技有限公司 Vehicle multi-attribute classification method for strengthening deep fusion network
CN112884090A (en) * 2021-04-14 2021-06-01 安徽理工大学 Fire detection and identification method based on improved YOLOv3
CN113343837A (en) * 2021-06-03 2021-09-03 华南理工大学 Intelligent driving method, system, device and medium based on vehicle lamp language recognition
CN113343837B (en) * 2021-06-03 2023-08-22 华南理工大学 Intelligent driving method, system, device and medium based on vehicle lamp language recognition
CN113298021A (en) * 2021-06-11 2021-08-24 宿州学院 Mining area transport vehicle head and tail identification method and system based on convolutional neural network
CN114565597A (en) * 2022-03-04 2022-05-31 昆明理工大学 Nighttime road pedestrian detection method based on YOLOv3-tiny-DB and transfer learning
CN116337087A (en) * 2023-05-30 2023-06-27 广州健新科技有限责任公司 AIS and camera-based ship positioning method and system

Also Published As

Publication number Publication date
CN110909666B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN110909666B (en) Night vehicle detection method based on improved YOLOv3 convolutional neural network
CN110188705B (en) Remote traffic sign detection and identification method suitable for vehicle-mounted system
CN107563372B (en) License plate positioning method based on deep learning SSD frame
WO2022111219A1 (en) Domain adaptation device operation and maintenance system and method
CN111222396B (en) All-weather multispectral pedestrian detection method
CN112069868A (en) Unmanned aerial vehicle real-time vehicle detection method based on convolutional neural network
CN112183203A (en) Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN111178451A (en) License plate detection method based on YOLOv3 network
CN112949633B (en) Improved YOLOv 3-based infrared target detection method
CN111695448A (en) Roadside vehicle identification method based on visual sensor
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN112149535B (en) Lane line detection method and device combining SegNet and U-Net
CN110599497A (en) Drivable region segmentation method based on deep neural network
CN113177560A (en) Universal lightweight deep learning vehicle detection method
CN113255837A (en) Improved CenterNet network-based target detection method in industrial environment
CN113723377A (en) Traffic sign detection method based on LD-SSD network
WO2023212997A1 (en) Knowledge distillation based neural network training method, device, and storage medium
CN114913498A (en) Parallel multi-scale feature aggregation lane line detection method based on key point estimation
CN113205107A (en) Vehicle type recognition method based on improved high-efficiency network
CN111540203B (en) Method for adjusting green light passing time based on fast-RCNN
CN114140672A (en) Target detection network system and method applied to multi-sensor data fusion in rainy and snowy weather scene
CN111738114A (en) Vehicle target detection method based on anchor-free accurate sampling remote sensing image
CN113297915A (en) Insulator recognition target detection method based on unmanned aerial vehicle inspection
CN117115770A (en) Automatic driving method based on convolutional neural network and attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant