Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a target detection method for a signal lamp according to an embodiment of the present invention, which is applicable to performing target detection on signal lamps at intersections. The method can be executed by the signal lamp target detection device provided by the embodiment of the invention; the device can be implemented in software and/or hardware and can be integrated into an electronic device.
Specifically, as shown in fig. 1, the method for detecting a target of a signal lamp provided in the embodiment of the present invention may include the following steps:
S110, acquiring a target image to be detected.
The target image to be detected can be understood as an image of a road scene containing a signal lamp, captured by an image acquisition device at a traffic intersection, a road section, or a similar location. The image acquisition device may be arranged at traffic intersections, road sections, and the like, or mounted on a vehicle. Specifically, the device may be a camera, a video camera, a scanner, or another device with a photographing function (a mobile phone, a tablet computer, etc.).
A signal lamp, also called a traffic signal lamp, is a signal lamp that directs traffic and generally consists of a red light, a green light, and a yellow light. The red light prohibits passage, the green light permits passage, and the yellow light serves as a warning. Traffic signal lamps at least include: motor vehicle signal lamps, non-motor vehicle signal lamps, pedestrian crossing signal lamps, direction indicator lamps (arrow signal lamps), lane signal lamps, flashing warning signal lamps, and road-railway crossing signal lamps.
Before target detection is performed on a signal lamp, the target image to be detected, acquired by the image acquisition device, needs to be obtained. Illustratively, if the image acquisition device is arranged at a traffic intersection, a road section, or the like, the electronic device can establish a communication connection with the image acquisition device in a wireless manner to obtain the target image to be detected; if the image acquisition device is arranged at the vehicle end, the target image to be detected acquired by the vehicle-mounted image acquisition device can be obtained directly through the vehicle control bus. Of course, the wireless connection is only an example; a person skilled in the art may adjust the manner of establishing a communication connection with the image acquisition device according to actual needs, and this should not be construed as limiting the present application.
S120, determining the category and/or position of the signal lamp in the target image based on a signal lamp detection model, where the signal lamp detection model is obtained by training a target detection network.
In this embodiment, the target detection of the signal lamp is implemented based on the YOLO (You Only Look Once) v3 algorithm. YOLOv3 is the third version of the YOLO series of target detection algorithms. Its main improvements over the previous versions are: the network structure is adjusted, multi-scale features are used for object detection, and softmax is replaced with logistic classifiers for object classification. The logistic classifier is modeled on a Bernoulli distribution and performs binary classification, while the softmax classifier is modeled on a multinomial distribution and classifies into a number of mutually exclusive classes. YOLOv3 predicts object classes with independent logistic outputs instead of softmax, which supports multi-labeled objects (e.g., one person carrying both the labels "woman" and "human"). As a result, the accuracy of YOLOv3 is significantly improved compared with previous YOLO algorithms, especially for small targets.
YOLOv3 divides the image into an S × S grid, and the grid cell in which the target center falls is responsible for predicting that target. To detect C classes of targets, each grid cell needs to predict B bounding boxes and C conditional class probabilities, and outputs confidence information representing both whether each bounding box contains a target and how accurately the box localizes it.
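The per-scale output size implied by this grid scheme can be sketched as follows. This is a minimal illustration (in Python, which the embodiment does not specify), using the conventional YOLOv3 layout of 4 box coordinates plus 1 objectness confidence plus C class scores for each of the B boxes per cell:

```python
def yolo_output_shape(s, b, c):
    """Per-scale YOLO output tensor shape: each of the s*s grid cells
    predicts b boxes, each carrying 4 box coordinates, 1 objectness
    confidence, and c class probabilities."""
    return (s, s, b * (4 + 1 + c))

# For the COCO setting (c = 80 classes, b = 3 boxes per cell), the
# 13 x 13 scale yields a 13 x 13 x 255 output tensor.
print(yolo_output_shape(13, 3, 80))
```

The same arithmetic applies to the 26 × 26 and 52 × 52 scales, each of which also carries 3 boxes per cell.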
For basic image feature extraction, YOLOv3 adopts a network structure called Darknet-53 (containing 53 convolutional layers), which uses the residual-network approach of setting shortcut links between layers. Building on the fine-grained feature detection of YOLOv2, YOLOv3 further adopts feature maps of 3 different scales (y1, y2, and y3) for object detection. Deep convolution is performed repeatedly through three kinds of operations, conv_layer (convolution layer: 5 convolutions + 1 normalization + 1 activation), conv_block (convolution block: 1 convolution + 1 normalization + 1 activation), and conv (1 convolution), yielding the three feature maps y1, y2, and y3 of different scales. As the number and scale of the output feature maps change, the sizes of the prior boxes also need to be adjusted accordingly.
At present, mini-batch stochastic gradient descent is mostly adopted for optimizing deep learning models. Besides the gradient, two factors, the batch size and the learning rate, directly determine the weight updates of the model; from the optimization viewpoint, these are the most important parameters influencing model convergence. The learning rate directly affects the convergence of the model, and the batch size affects its generalization performance.
Setting a large batch size can reduce training time and improve stability, but it can also reduce the generalization capability of the model. In this embodiment, the batch size of the signal lamp detection model is set to 32 as a compromise among training time, stability, and generalization capability.
In order for the gradient descent method to perform well, the learning rate needs to be set within a proper range. The learning rate determines how fast the parameters move toward the optimal values. If the learning rate is too large, the optimum is likely to be overshot; conversely, if the learning rate is too small, optimization efficiency may be too low and the algorithm may fail to converge for a long time. Therefore, the learning rate is critical to the performance of the algorithm.
During the training process of the signal lamp detection model, the fluctuation range of the loss function of the signal lamp detection model can be determined. In order to enable the gradient descent method to have better performance, the learning rate of the signal lamp detection model can be reduced under the condition that the fluctuation amplitude of the loss function is greater than the first amplitude threshold value; under the condition that the fluctuation amplitude of the loss function is smaller than the second amplitude threshold value, the learning rate of the signal lamp detection model can be improved; wherein the first amplitude threshold is greater than the second amplitude threshold.
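The adjustment rule described above can be sketched as follows. The window size, the concrete thresholds, and the growth factor below are illustrative assumptions; the text itself specifies only the two paired amplitude thresholds (and, in Example two, a reduction to 1/5 to 1/10 of the current learning rate):

```python
def adjust_learning_rate(lr, loss_history, hi_thresh, lo_thresh,
                         decay=0.2, growth=1.1, window=10):
    """Adapt the learning rate from the recent fluctuation of the loss.

    hi_thresh is the first amplitude threshold, lo_thresh the second
    (hi_thresh > lo_thresh). decay = 0.2 corresponds to a reduction to
    1/5; growth is a hypothetical gentle increase factor.
    """
    recent = loss_history[-window:]
    fluctuation = max(recent) - min(recent)
    if fluctuation > hi_thresh:
        return lr * decay    # loss oscillates strongly: reduce the rate
    if fluctuation < lo_thresh:
        return lr * growth   # loss has flattened: cautiously raise it
    return lr                # fluctuation in the dead band: keep the rate
```

A usage example: with a loss history of [1.0, 0.2, 1.5] the fluctuation is 1.3, so with hi_thresh = 1.0 the learning rate is cut to one fifth.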
YOLOv3 uses the K-means algorithm to cluster the ground-truth boxes of all samples in the training set, obtaining representative widths and heights (prior box sizes); 3 prior boxes are set for each down-sampling scale, so 9 prior box sizes are clustered in total. Which prior boxes are most suitable, however, must be determined experimentally: different numbers of prior boxes can be applied to the model, and the set that best balances model complexity against a high recall rate is selected, finally yielding the 9 optimal prior boxes. YOLOv3 outputs feature maps at 3 scales, 13 × 13, 26 × 26, and 52 × 52, corresponding to the 9 prior boxes, with 3 prior boxes assigned to each scale.
Illustratively, on the COCO dataset these 9 prior boxes are: (10 × 13), (16 × 30), (33 × 23), (30 × 61), (62 × 45), (59 × 119), (116 × 90), (156 × 198), (373 × 326). In the assignment, the larger prior boxes (116 × 90), (156 × 198), and (373 × 326) are applied on the smallest 13 × 13 feature map (with the largest receptive field), suitable for detecting larger objects. The medium prior boxes (30 × 61), (62 × 45), and (59 × 119) are applied on the medium 26 × 26 feature map (medium receptive field), suitable for detecting medium-sized objects. The smaller prior boxes (10 × 13), (16 × 30), and (33 × 23) are applied on the larger 52 × 52 feature map (smaller receptive field), suitable for detecting smaller objects.
In this embodiment, when training on signal lamp data, appropriate network parameters are set according to the characteristics of the data: the parameters of the signal lamps in the sample images are cluster-analyzed using the K-means algorithm, and the parameters of the prior boxes in the signal lamp detection model are determined according to the clustering result. The parameter is a size and/or a shape, and the size may include an aspect ratio.
Using the 9 prior boxes obtained in advance by K-means clustering, the signal lamp is detected and identified on each of the three feature maps y1, y2, and y3 of different scales, with the parameters of 3 different prior boxes predicted on each feature map.
To improve detection accuracy, in an optional implementation, cluster analysis of the signal lamp data can be run with different cluster counts k, and the value of k can be selected from the curve of the Average Intersection over Union (Avg IOU) between the real boxes and the predicted boxes as a function of k. As the cluster count k increases, the average IOU levels off; a larger k means a smaller difference between the real boxes and the predicted boxes, faster training convergence, and higher detection precision.
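A minimal sketch of such anchor clustering, using 1 - IoU as the distance measure as is conventional for YOLO anchor selection; this is a simplified illustration in Python under that assumption, not the exact procedure of the embodiment:

```python
import random

def iou_wh(box, anchor):
    """IoU between two (width, height) pairs, both anchored at the origin."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster ground-truth (w, h) pairs into k anchors with 1 - IoU as
    the distance; centroids are the mean width/height of each cluster."""
    random.seed(seed)
    anchors = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            clusters[best].append(b)
        anchors = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else anchors[i]  # keep an empty cluster's old centroid
            for i, c in enumerate(clusters)
        ]
    return sorted(anchors, key=lambda a: a[0] * a[1])

def avg_iou(boxes, anchors):
    """Mean best IoU over all boxes: the 'Avg IOU' used to choose k."""
    return sum(max(iou_wh(b, a) for a in anchors) for b in boxes) / len(boxes)
```

Plotting avg_iou for a range of k values and picking the knee of the curve reproduces the selection procedure described above.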
Further, in this embodiment, the maximum size of the signal lamp in the sample images may be determined and used as the upper limit of the bounding box size of the signal lamp detection model. Illustratively, if a large number of objects per image are to be trained, a parameter max (upper size limit) of 200 or more is added in the last layer of the cfg (network configuration) file.
According to the technical scheme of this embodiment, a target image to be detected is obtained, and the category and/or position of the signal lamp in the target image is determined based on a signal lamp detection model, where the signal lamp detection model is obtained by training a target detection network. The technical scheme of this embodiment can solve the problem of low detection accuracy and efficiency in the prior art, mitigate the influence of difficult scenes such as night, rain, haze, and occlusion on signal lamp target detection, improve the efficiency and accuracy of signal lamp target detection, and provide a new approach for target detection of signal lamps.
Example two
Fig. 2 is a flowchart of a target detection method for a signal lamp according to a second embodiment of the present invention, which is further optimized on the basis of the above embodiment and gives a specific description of how target detection is performed on the signal lamp.
Specifically, as shown in fig. 2, the method includes:
S201, preparing a data set: classifying and labeling the images, and preprocessing the images.
S202, selecting a framework and creating the model, and reading in the network training parameters using train_detector.
S203, downloading the deep learning neural network: downloading YOLOv3.weights (the YOLOv3 weight file), installing darknet, and configuring the Makefile.
S204, preparing the training data set: augmenting the data, cropping to 416 × 416, and dividing the data set into test.txt (test set), train.txt (training set), val.txt (validation set), and trainval.txt (training-validation set).
S205, generating the train and val files, and generating the train, test, and val paths.
S206, downloading the weights pre-trained on ImageNet, modifying cfg/voc.data, modifying the network parameters, and initializing the weights.
S207, creating a folder named backup under the darknet folder, and modifying the data/voc.
S208, modifying the hyperparameters: modifying cfg/yolov3-voc.cfg, parsing the data.cfg file, and extracting the training picture paths.
The hyperparameters to be modified include the learning rate and the batch size. Specifically, the learning rate may be set to 0.01. Further, the learning rate can be adjusted according to the fluctuation amplitude of the loss function during training: when the fluctuation amplitude of the loss function is greater than a first amplitude threshold, the learning rate of the signal lamp detection model is reduced to 1/5 to 1/10 of its value; when the fluctuation amplitude of the loss function is smaller than a second amplitude threshold, the learning rate of the signal lamp detection model is increased; the first amplitude threshold is greater than the second amplitude threshold. In this embodiment, the batch size may be reduced from 64 to 32, which can increase the forward propagation speed and improve training efficiency. It can be understood that when 64 pictures are loaded into memory, each load performs 4 forward passes of 16 pictures; when 32 pictures are loaded, the accumulated forward propagation loss is reduced and the video memory usage is lowered. An upper limit on the image size may also be obtained based on the clustering results.
S209, loading the network structure initialization weight value.
S210, reading the training pictures.
After the training pictures are read, preprocessing such as enhancement and cropping needs to be performed on them for subsequent training.
S211, training.
During training, 32 pictures are extracted at a time according to the batch size.
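The batch extraction in this step can be sketched minimally (Python assumed; the darknet subdivisions mechanic, in which one load is processed as several smaller forward passes, is deliberately omitted for brevity):

```python
def batches(dataset, batch_size=32):
    """Yield consecutive mini-batches of the given size; the last batch
    may be smaller when the dataset size is not a multiple of it. With
    batch size 32, each weight update sees 32 pictures, matching the
    setting chosen in this embodiment."""
    for start in range(0, len(dataset), batch_size):
        yield dataset[start:start + batch_size]
```

For example, a 70-picture dataset is consumed as two full batches of 32 followed by one partial batch of 6.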
S212, forward propagation and backpropagation.
During forward propagation and backpropagation, the loss values are recorded and the weights are updated.
S213, extracting features, and applying the nine prior boxes of different sizes.
After features are extracted by the Darknet-53 backbone network, detection is performed on feature maps of three different scales (13 × 13, 26 × 26, 52 × 52), across which the nine prior boxes of different sizes are distributed. The feature maps also pass through the residual network, which reduces the learning difficulty.
S214, upsampling and multi-scale fusion.
S215, processing the output of the network, and acquiring a required vector according to a set threshold value.
The output bounding box of the network is typically represented by a vector of 5 or more elements. The first 4 elements represent the object's center_x (center point x coordinate), center_y (center point y coordinate), width, and height, and the 5th element represents the confidence that the bounding box encloses an object. The remaining elements are the scores associated with each class. The box is assigned to the class for which it scores highest, and this highest score is also referred to as the confidence of the bounding box. If the confidence of the box is less than the given threshold, the bounding box is deleted and not considered for further processing.
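The decoding of one such output vector can be sketched as follows; a minimal Python illustration of the rule just described (real post-processing would also map the normalized coordinates back to pixel space, which is omitted here):

```python
def parse_detection(vec, conf_threshold, class_names):
    """Decode one YOLO output vector: 4 box values, an objectness
    score, then one score per class. Following the rule above, the
    box's confidence is its highest class score; detections below
    conf_threshold are discarded (None is returned)."""
    center_x, center_y, width, height, objectness = vec[:5]
    class_scores = vec[5:]
    best = max(range(len(class_scores)), key=lambda i: class_scores[i])
    confidence = class_scores[best]
    if confidence < conf_threshold:
        return None  # delete this bounding box, no further processing
    return {
        "box": (center_x, center_y, width, height),
        "objectness": objectness,
        "label": class_names[best],
        "confidence": confidence,
    }
```

For example, the vector [0.5, 0.5, 0.2, 0.3, 0.9, 0.1, 0.8] with classes ["red", "green"] decodes to a "green" detection with confidence 0.8.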
S216, removing duplicates using the Non-Maximum Suppression (NMS) algorithm, screening the prediction boxes, and assigning class labels and confidence scores.
Applying non-maximum suppression to the boxes whose confidence is equal to or greater than the confidence threshold reduces the number of overlapping boxes. Non-maximum suppression is controlled by an NMS threshold: if this value is set too low, e.g., 0.1, overlapping objects of the same or different classes may not be detected; if it is set too high, e.g., 1, multiple boxes may be obtained for the same object.
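A generic sketch of the greedy NMS algorithm named in this step (boxes given as corner coordinates; this is the standard algorithm, not an implementation detail from the embodiment):

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, nms_threshold):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box whose overlap with it exceeds the NMS
    threshold, and repeat. Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= nms_threshold]
    return keep
```

This makes the threshold trade-off above concrete: with a low threshold, a genuinely distinct but overlapping box is suppressed along with the duplicate; with a threshold near 1, almost nothing is suppressed.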
S217, drawing the prediction boxes.
The bounding boxes that pass non-maximum suppression are drawn on the input frame, each labeled with its class label and confidence score.
According to the technical scheme of this embodiment, a specific description of performing target detection on the signal lamp is given: a target image to be detected is obtained, and the category and/or position of the signal lamp in the target image is determined based on a signal lamp detection model, where the signal lamp detection model is obtained by training a target detection network. The technical scheme of this embodiment can solve the problem of low detection accuracy and efficiency in the prior art, mitigate the influence of difficult scenes such as night, rain, haze, and occlusion on signal lamp target detection, improve the efficiency and accuracy of signal lamp target detection, and provide a new approach for target detection of signal lamps.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a target detection apparatus for a signal lamp according to a third embodiment of the present invention. The apparatus is suitable for executing the target detection method for a signal lamp provided by the embodiments of the present invention, and can improve the efficiency and accuracy of signal lamp target detection. As shown in fig. 3, the apparatus includes an image acquisition module 310 and a category position determination module 320.
The image acquiring module 310 is configured to acquire a target image to be detected; a category position determination module 320, configured to determine a category and/or a position of a signal lamp in the target image based on the signal lamp detection model; the signal lamp detection model is obtained based on target detection network training.
According to the technical scheme of this embodiment, a target image to be detected is obtained, and the category and/or position of the signal lamp in the target image is determined based on a signal lamp detection model, where the signal lamp detection model is obtained by training a target detection network. The technical scheme of this embodiment can solve the problem of low detection accuracy and efficiency in the prior art, mitigate the influence of difficult scenes such as night, rain, haze, and occlusion on signal lamp target detection, improve the efficiency and accuracy of signal lamp target detection, and provide a new approach for target detection of signal lamps.
Preferably, the apparatus further comprises: a fluctuation amplitude determination module, a learning rate reduction module, and a learning rate improvement module. The fluctuation amplitude determination module is used for determining the fluctuation amplitude of the loss function of the signal lamp detection model during its training; the learning rate reduction module is used for reducing the learning rate of the signal lamp detection model when the fluctuation amplitude of the loss function is greater than a first amplitude threshold; the learning rate improvement module is used for increasing the learning rate of the signal lamp detection model when the fluctuation amplitude of the loss function is smaller than a second amplitude threshold; the first amplitude threshold is greater than the second amplitude threshold.
Accordingly, in the training process of the signal lamp detection model, the batch size of the signal lamp detection model is 32.
Preferably, the apparatus further comprises: the parameter clustering module is used for clustering parameters of the signal lamps in the sample image and determining parameters of a prior frame in the signal lamp detection model according to a clustering result; wherein the parameter is a size and/or a shape.
Preferably, the apparatus further comprises: and the maximum size determining module is used for determining the maximum size of the signal lamp in the sample image and taking the maximum size as the upper limit of the size of the boundary box of the signal lamp detection model.
The target detection device for a signal lamp provided by the embodiment of the invention can execute the target detection method for a signal lamp provided by any embodiment of the invention, has functional modules corresponding to the method, and achieves the corresponding beneficial effects.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 4 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 4, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, to implement the target detection method of the signal lamp provided by the embodiment of the present invention.
EXAMPLE five
The fifth embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the target detection method for a signal lamp provided in any embodiment of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.