CN112183382A - Unmanned traffic light detection and classification method and device - Google Patents

Info

Publication number
CN112183382A
Authority
CN
China
Prior art keywords
traffic light
coordinate system
coordinates
classification
traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011056163.3A
Other languages
Chinese (zh)
Inventor
陈海波
武玉琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenlan Artificial Intelligence Shenzhen Co Ltd
Original Assignee
Shenlan Artificial Intelligence Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenlan Artificial Intelligence Shenzhen Co Ltd filed Critical Shenlan Artificial Intelligence Shenzhen Co Ltd
Priority to CN202011056163.3A priority Critical patent/CN112183382A/en
Publication of CN112183382A publication Critical patent/CN112183382A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an unmanned traffic light detection and classification method and device, wherein the method comprises the following steps: acquiring a traffic light image, and performing data enhancement processing on the traffic light image; training a classification network with the traffic light image after data enhancement processing to obtain the predicted traffic light state; acquiring the map coordinate system coordinates of the real traffic light in a high-precision map, and converting the map coordinate system coordinates to obtain the pixel coordinate system coordinates of the traffic light; calculating the area intersection ratio of the real traffic light and the predicted traffic light; when the area intersection ratio is larger than a preset threshold value, determining that the predicted state of the traffic light is the correct traffic light; and performing classification network prediction according to the correct traffic light. The high-precision map is thus introduced on the basis of performing target detection and classification on traffic lights with a convolutional neural network, which effectively improves the detection and classification precision of traffic lights and avoids path planning errors in automatic driving caused by erroneous traffic light target detection and classification.

Description

Unmanned traffic light detection and classification method and device
Technical Field
The invention relates to the technical field of target detection, in particular to a method for detecting and classifying unmanned traffic lights, a device for detecting and classifying unmanned traffic lights, computer equipment, a non-transitory computer-readable storage medium and a computer program product.
Background
Current traffic light detection relies only on a convolutional neural network for target detection, so the traffic light may be misidentified; for example, false alarms may occur in the target detection, which leads to wrong planning in automatic driving.
Disclosure of Invention
The invention aims to solve the above technical problem and provides an unmanned traffic light detection and classification method which introduces a high-precision map on the basis of using a convolutional neural network to perform target detection and classification on traffic lights, effectively improves the detection and classification precision of traffic lights, and avoids path planning errors in automatic driving caused by erroneous detection and classification of traffic light targets.
The technical scheme adopted by the invention is as follows:
a method for detecting and classifying unmanned traffic lights is characterized by comprising the following steps: acquiring a traffic light image, and performing data enhancement processing on the traffic light image; training a classification network by using the traffic light image subjected to data enhancement processing to obtain a predicted state of the traffic light; acquiring map coordinate system coordinates of a real traffic light in a high-precision map, and converting the map coordinate system coordinates to obtain pixel coordinate system coordinates of the traffic light; calculating an area intersection ratio of the real traffic light and the predicted traffic light; when the area intersection ratio is larger than a preset threshold value, determining that the predicted state of the traffic light is a correct traffic light; and performing classification network prediction according to the correct traffic light.
According to an embodiment of the present invention, converting the map coordinate system coordinates to obtain the pixel coordinate system coordinates of the traffic light includes: multiplying the coordinates of the real traffic lights in the high-precision map under the map coordinate system by a map coordinate system-to-vehicle body coordinate system transformation matrix to obtain the coordinates of the real traffic lights under a vehicle body coordinate system; multiplying the coordinates of the real traffic light under the vehicle body coordinate system by a vehicle body coordinate system-to-camera coordinate system transformation matrix to obtain the coordinates of the real traffic light under the camera coordinate system; and multiplying the coordinates of the real traffic light under the camera coordinate system by the internal reference matrix of the camera to obtain the coordinates of the real traffic light under the pixel coordinate system.
According to one embodiment of the present invention, acquiring a traffic light image comprises: collecting images containing traffic lights, and respectively carrying out target detection labeling and classification labeling to form a training data set; inputting the image containing the traffic light into a target detection network, and performing data enhancement processing; training the target detection network by using the training data after data enhancement processing, and predicting the coordinates of the traffic lights; and acquiring the traffic light image according to the coordinates of the traffic light.
According to one embodiment of the invention, the data enhancement processing is carried out on the traffic light image, and comprises the following steps: and performing data enhancement processing on the traffic light image by adopting one or more of median filtering, image sharpening, rotation, mirror image and brightness adjustment.
According to an embodiment of the present invention, the above unmanned traffic light detection and classification method further includes: and when the area intersection ratio is smaller than or equal to a preset threshold value, determining that the predicted state of the traffic light is a false alarm.
According to one embodiment of the invention, the target detection network is CenterNet and the classification network is ResNet18.
The invention also provides a device for detecting and classifying the unmanned traffic lights, which comprises the following components: the data processing module is used for acquiring a traffic light image and performing data enhancement processing on the traffic light image; the training module is used for training the classification network by using the traffic light image subjected to data enhancement processing to obtain the predicted state of the traffic light; the acquisition module is used for acquiring the map coordinate system coordinates of the real traffic lights in the high-precision map; the conversion module is used for converting the coordinate of the map coordinate system to obtain the coordinate of the pixel coordinate system of the traffic light; a calculation module for calculating an area intersection ratio of the real traffic light and the predicted traffic light; the determining module is used for determining that the predicted state of the traffic light is the correct traffic light when the area intersection ratio is larger than a preset threshold value; and the prediction module is used for carrying out classification network prediction according to the correct traffic light.
The invention also provides computer equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein when the processor executes the program, the unmanned traffic light detection and classification method is realized.
The invention also proposes a non-transitory computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the above-mentioned unmanned traffic light detection classification method.
The invention also proposes a computer program product in which the instructions, when executed by a processor, perform the above-mentioned unmanned traffic light detection classification method.
The invention has the beneficial effects that:
the invention introduces the high-precision map on the basis of performing target detection and classification on traffic lights with a convolutional neural network, which effectively improves the detection and classification precision of traffic lights and avoids path planning errors in automatic driving caused by erroneous traffic light target detection and classification.
Drawings
FIG. 1 is a flow chart of a method for unmanned traffic light detection and classification in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart of a method for unmanned traffic light detection classification in accordance with one embodiment of the present invention;
fig. 3 is a block diagram of an apparatus for detecting and classifying an unmanned traffic light according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a method for detecting and classifying an unmanned traffic light according to an embodiment of the present invention.
As shown in fig. 1, the unmanned traffic light detection and classification method of the present invention may include the steps of:
and S1, acquiring the traffic light image and performing data enhancement processing on the traffic light image.
According to one embodiment of the present invention, acquiring a traffic light image comprises: collecting images containing traffic lights, and respectively carrying out target detection labeling and classification labeling to form a training data set; inputting the images containing traffic lights into a target detection network, and performing data enhancement processing; training the target detection network by using the training data after the data enhancement processing, and predicting the coordinates of the traffic lights; and acquiring a traffic light image according to the coordinates of the traffic light. The images containing traffic lights are labeled manually. In one embodiment of the invention, the target detection network may be CenterNet.
Specifically, images including traffic lights are collected, and target detection labeling and classification labeling are performed to form a training data set; the images containing traffic lights are input into the target detection network, and data enhancement processing is performed, for example, by a random combination of one or more of median filtering, image sharpening, rotation, mirroring, brightness adjustment and RGB color channel transformation. The target detection network is trained with the training data after data enhancement processing so that the coordinates of the traffic lights can be predicted, and the traffic lights are then extracted from the original image according to these coordinates, so that the traffic light images are obtained.
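As an illustration of this extraction step, the sketch below crops traffic light patches out of the original frame using the predicted detection boxes. It is a minimal example that assumes the detector returns axis-aligned boxes as (x1, y1, x2, y2) pixel coordinates and that OpenCV is used for image handling; neither the box format nor the library is prescribed by the method itself.

```python
import cv2  # OpenCV is an assumed choice for image I/O


def crop_traffic_lights(image, boxes):
    """Crop traffic light patches from the original frame.

    `image` is an HxWx3 array; `boxes` is a list of (x1, y1, x2, y2) pixel
    coordinates predicted by the target detection network (format assumed).
    """
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        # Clamp each box to the image bounds before slicing.
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        if x2 > x1 and y2 > y1:
            crops.append(image[y1:y2, x1:x2].copy())
    return crops


# Usage (file name and box values are placeholders):
# frame = cv2.imread("frame.png")
# patches = crop_traffic_lights(frame, [(312, 105, 340, 168)])
```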
Further, according to an embodiment of the present invention, the data enhancement processing of the traffic light image includes: and performing data enhancement processing on the traffic light image by adopting one or more of median filtering, image sharpening, rotation, mirror image and brightness adjustment.
That is, classification prediction must be performed on the traffic light image, i.e., predicting whether the traffic light is red, yellow or green. Therefore, when the traffic light image is subjected to data enhancement processing, only operations that do not change the original colors of the image are used: according to the quality of the traffic light image, one or a combination of median filtering, image sharpening, rotation, mirroring and brightness adjustment is applied, so as to obtain an image that meets the requirements of classification network training.
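The following sketch shows one possible implementation of such colour-preserving augmentation with OpenCV; the probabilities, sharpening kernel, rotation range and brightness range are illustrative assumptions rather than values taken from the invention.

```python
import random

import cv2
import numpy as np


def augment_traffic_light(img):
    """Randomly apply the colour-preserving operations listed above:
    median filtering, sharpening, rotation, mirroring, brightness adjustment."""
    out = img.copy()
    if random.random() < 0.5:
        out = cv2.medianBlur(out, 3)                       # median filtering
    if random.random() < 0.5:
        kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)
        out = cv2.filter2D(out, -1, kernel)                # image sharpening
    if random.random() < 0.5:
        h, w = out.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-10, 10), 1.0)
        out = cv2.warpAffine(out, m, (w, h))               # small rotation
    if random.random() < 0.5:
        out = cv2.flip(out, 1)                             # horizontal mirror
    if random.random() < 0.5:
        out = cv2.convertScaleAbs(out, alpha=1.0,
                                  beta=random.uniform(-30, 30))  # brightness
    return out
```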
And S2, training the classification network by using the traffic light image after the data enhancement processing to obtain the predicted state of the traffic light. In one embodiment of the invention, the classification network may be ResNet18.
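A minimal training sketch for this step is given below, assuming a PyTorch/torchvision ResNet18 with its final layer replaced for a three-class (red, yellow, green) output; the optimizer, learning rate and data loader are placeholders, not values fixed by the invention.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed label set: red, yellow, green

# ResNet18 backbone with the final fully connected layer resized to 3 classes.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)


def train_one_epoch(loader):
    """`loader` is assumed to yield batches of augmented crops and labels."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```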
And S3, acquiring the map coordinate system coordinates of the real traffic lights in the high-precision map, and converting the map coordinate system coordinates to obtain the pixel coordinate system coordinates of the traffic lights.
According to one embodiment of the invention, converting the map coordinate system coordinates to obtain the pixel coordinate system coordinates of the traffic light comprises the following steps: multiplying the coordinates of the real traffic light in the high-precision map under the map coordinate system by a map coordinate system-to-vehicle body coordinate system transformation matrix to obtain the coordinates of the real traffic light under the vehicle body coordinate system; multiplying the coordinates of the real traffic light under the vehicle body coordinate system by a vehicle body coordinate system-to-camera coordinate system transformation matrix to obtain the coordinates of the real traffic light under the camera coordinate system; and multiplying the coordinates of the real traffic light under the camera coordinate system by the internal reference matrix of the camera to obtain the coordinates of the real traffic light under the pixel coordinate system.
The vehicle body coordinate system-to-camera coordinate system transformation matrix is determined according to the actual conditions. The internal reference (intrinsic) matrix transforms 3D camera coordinates into 2D homogeneous image coordinates; the specific parameter matrices can be obtained as in the prior art and are not described herein again.
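The chain of multiplications described above can be written compactly as follows; the matrix names and the use of 4x4 homogeneous transforms are illustrative conventions, not notation taken from the invention.

```python
import numpy as np


def map_to_pixel(p_map, T_map_to_body, T_body_to_cam, K):
    """Project a traffic light's map-frame position into pixel coordinates.

    p_map:          (x, y, z) position in the high-precision map frame
    T_map_to_body:  4x4 map-to-vehicle-body homogeneous transformation matrix
    T_body_to_cam:  4x4 vehicle-body-to-camera homogeneous transformation matrix
    K:              3x3 camera intrinsic (internal reference) matrix
    """
    p = np.array([*p_map, 1.0])        # homogeneous map-frame coordinates
    p_body = T_map_to_body @ p         # map frame -> vehicle body frame
    p_cam = T_body_to_cam @ p_body     # vehicle body frame -> camera frame
    uvw = K @ p_cam[:3]                # pinhole projection
    return uvw[:2] / uvw[2]            # divide by depth -> (u, v) in pixels
```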
And S4, calculating the area intersection ratio of the real traffic light and the predicted traffic light. The intersection ratio (intersection over union, IoU) is obtained by dividing the area where the two regions overlap by the area of their union. The real traffic light region and the predicted traffic light region are the regions defined by their respective coordinates.
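For two axis-aligned boxes the intersection ratio can be computed as below; the (x1, y1, x2, y2) box format is an assumption for illustration.

```python
def box_iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# Steps S4/S5 then keep a detection only when its overlap with the projected
# real traffic light exceeds the preset threshold, e.g.:
# is_correct = box_iou(projected_real_box, predicted_box) > 0.7
```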
And S5, when the area intersection ratio is larger than a preset threshold value, determining that the predicted state of the traffic light is the correct traffic light. The preset threshold may be calibrated according to actual conditions; for example, the preset threshold may be 0.7.
According to one embodiment of the present invention, the predicted status of the traffic light is determined to be a false alarm when the area intersection ratio is less than or equal to a preset threshold.
That is, when the area intersection ratio is greater than the preset threshold (e.g., 0.7), the degree of overlap between the predicted traffic light and the real traffic light is considered high, and the predicted traffic light is considered to be the correct traffic light; when the area intersection ratio is less than or equal to the preset threshold (e.g., 0.7), the prediction is considered a false alarm.
And S6, performing classification network prediction according to the correct traffic light.
That is, the traffic light image is extracted from the original image through the correct coordinates of the traffic light and sent to the classification network to judge the shape and color of the traffic light.
As a specific example of the present invention, as shown in fig. 2, the unmanned traffic light detection and classification method of the present invention may include the steps of:
and S101, performing data enhancement processing on the image containing the traffic light, and inputting the image to a target detection network for training.
S102, predicting the coordinates of the traffic lights and acquiring images of the traffic lights.
And S103, performing data enhancement processing on the image of the traffic light.
And S104, training the classification network by using the traffic light image after the enhancement processing to obtain the predicted traffic light state.
And S105, acquiring the map coordinate system coordinates of the real traffic lights in the high-precision map.
S106, converting the coordinates of the real traffic light in the map coordinate system into the coordinates of the real traffic light in the vehicle body coordinate system.
S107, converting the coordinates of the real traffic light under the vehicle body coordinate system into the coordinates of the real traffic light under the camera coordinate system.
And S108, converting the coordinates of the real traffic light in the camera coordinate system into the coordinates of the real traffic light in the pixel coordinate system.
And S109, calculating the area intersection ratio of the real traffic light and the predicted traffic light.
S110, judging whether the area intersection ratio is larger than 0.7. If yes, executing step S111; if not, step S112 is performed.
And S111, extracting the traffic light from the original image through correct coordinates of the traffic light, and sending the traffic light to a classification network to judge the shape and the color of the traffic light.
And S112, predicting the state of the traffic light to be a false alarm.
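The flow of steps S101 to S112 can be tied together as in the sketch below, where detector, classifier and iou stand in for the components sketched earlier; all names and the box handling are assumptions for illustration.

```python
def detect_and_classify(frame, map_light_boxes, detector, classifier, iou, threshold=0.7):
    """End-to-end sketch of steps S101-S112: detect traffic lights, keep only the
    detections whose IoU with a projected high-precision-map light exceeds the
    threshold, and classify those; the rest are discarded as false alarms."""
    states = []
    for det_box in detector(frame):                         # predicted pixel boxes
        if any(iou(map_box, det_box) > threshold for map_box in map_light_boxes):
            x1, y1, x2, y2 = (int(v) for v in det_box)
            states.append(classifier(frame[y1:y2, x1:x2]))  # red / yellow / green
        # else: the detection is treated as a false alarm (S112)
    return states
```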
In conclusion, the unmanned traffic light detection and classification method introduces the high-precision map on the basis of using the convolutional neural network to detect and classify the targets of the traffic lights, effectively improves the detection and classification precision of the traffic lights, and avoids the path planning error caused by the traffic light target detection and classification error in automatic driving.
Fig. 3 is a block diagram of an apparatus for detecting and classifying an unmanned traffic light according to an embodiment of the present invention.
As shown in fig. 3, the unmanned traffic light detection and classification apparatus according to an embodiment of the present invention may include: the system comprises a data processing module 10, a training module 20, an acquisition module 30, a conversion module 40, a calculation module 50, a determination module 60 and a prediction module 70.
The data processing module 10 is used for acquiring the traffic light image and performing data enhancement processing on the traffic light image. The training module 20 is configured to train the classification network using the traffic light image after the data enhancement processing to obtain a predicted traffic light state. The acquisition module 30 is used to acquire the map coordinate system coordinates of the real traffic lights in the high-precision map. The conversion module 40 is configured to convert the coordinates of the map coordinate system to obtain the coordinates of the pixel coordinate system of the traffic light. The calculation module 50 is used to calculate the area intersection ratio of the real traffic light and the predicted traffic light. The determination module 60 is configured to determine that the predicted status of the traffic light is the correct traffic light when the area intersection ratio is greater than a preset threshold. The prediction module 70 is used to make classification network predictions based on the correct traffic light.
According to an embodiment of the present invention, the conversion module 40 converts the coordinates of the map coordinate system to obtain the coordinates of the pixel coordinate system of the traffic light by: multiplying the coordinates of the real traffic light in the high-precision map under the map coordinate system by a map coordinate system-to-vehicle body coordinate system transformation matrix to obtain the coordinates of the real traffic light under the vehicle body coordinate system; multiplying the coordinates of the real traffic light under the vehicle body coordinate system by a vehicle body coordinate system-to-camera coordinate system transformation matrix to obtain the coordinates of the real traffic light under the camera coordinate system; and multiplying the coordinates of the real traffic light under the camera coordinate system by the internal reference matrix of the camera to obtain the coordinates of the real traffic light under the pixel coordinate system.
According to an embodiment of the present invention, the data processing module 10 acquires a traffic light image, and is specifically configured to collect an image including a traffic light, and perform target detection labeling and classification labeling respectively to form a training data set; inputting images containing traffic lights into a target detection network, and performing data enhancement processing; training the target detection network by using the training data after the data enhancement processing, and predicting the coordinates of the traffic lights; and acquiring a traffic light image according to the coordinates of the traffic light.
According to an embodiment of the present invention, the data processing module 10 performs data enhancement processing on the traffic light image, specifically, performs data enhancement processing on the traffic light image by using one or more of median filtering, image sharpening, rotation, mirroring, and brightness adjustment.
According to an embodiment of the present invention, the determining module 60 is further configured to determine that the predicted status of the traffic light is a false alarm when the area intersection ratio is less than or equal to the preset threshold.
According to one embodiment of the invention, the target detection network is CenterNet and the classification network is ResNet18.
It should be noted that details not disclosed in the unmanned traffic light detection and classification apparatus according to the embodiment of the present invention refer to details disclosed in the unmanned traffic light detection and classification method according to the embodiment of the present invention, and are not repeated herein.
In summary, the unmanned traffic light detection and classification device introduces the high-precision map on the basis of using the convolutional neural network to carry out target detection and classification on the traffic lights, effectively improves the detection and classification precision of the traffic lights, and avoids path planning errors caused by the traffic light target detection and classification errors in automatic driving.
The invention further provides a computer device corresponding to the embodiment.
The computer device of the embodiment of the invention comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, and when the processor executes the computer program, the unmanned traffic light detection and classification method according to the embodiment of the invention can be realized.
According to the computer device of the embodiment of the invention, when the processor executes the computer program stored on the memory, the traffic light image is first acquired and subjected to data enhancement processing; the classification network is trained with the traffic light image after data enhancement processing to obtain the predicted traffic light state; the map coordinate system coordinates of the real traffic light in the high-precision map are acquired and converted to obtain the pixel coordinate system coordinates of the traffic light; the area intersection ratio of the real traffic light and the predicted traffic light is calculated; when the area intersection ratio is larger than a preset threshold value, the predicted state of the traffic light is determined to be the correct traffic light; and classification network prediction is performed according to the correct traffic light. Thus, the high-precision map is introduced on the basis of performing target detection and classification on the traffic light with a convolutional neural network, which effectively improves the detection and classification precision of the traffic light and avoids path planning errors in automatic driving caused by erroneous traffic light target detection and classification.
The invention also provides a non-transitory computer readable storage medium corresponding to the above embodiment.
A non-transitory computer-readable storage medium of an embodiment of the present invention has stored thereon a computer program that, when executed by a processor, can implement the unmanned traffic light detection and classification method according to the above-described embodiment of the present invention.
According to the non-transitory computer-readable storage medium of an embodiment of the present invention, when the processor executes the computer program stored thereon, the traffic light image is first acquired and subjected to data enhancement processing; the classification network is trained with the traffic light image after data enhancement processing to obtain the predicted traffic light state; the map coordinate system coordinates of the real traffic light in the high-precision map are acquired and converted to obtain the pixel coordinate system coordinates of the traffic light; the area intersection ratio of the real traffic light and the predicted traffic light is calculated; when the area intersection ratio is larger than a preset threshold value, the predicted state of the traffic light is determined to be the correct traffic light; and classification network prediction is performed according to the correct traffic light. Thus, the high-precision map is introduced on the basis of performing target detection and classification on the traffic light with a convolutional neural network, which effectively improves the detection and classification precision of the traffic light and avoids path planning errors in automatic driving caused by erroneous traffic light target detection and classification.
The present invention also provides a computer program product corresponding to the above embodiments.
The instructions in the computer program product of the embodiment of the present invention, when executed by the processor, may perform the unmanned traffic light detection and classification method according to the above-described embodiment of the present invention.
According to the computer program product of the embodiment of the invention, when the processor executes the instructions, the traffic light image is first acquired and subjected to data enhancement processing; the classification network is trained with the traffic light image after data enhancement processing to obtain the predicted traffic light state; the map coordinate system coordinates of the real traffic light in the high-precision map are acquired and converted to obtain the pixel coordinate system coordinates of the traffic light; the area intersection ratio of the real traffic light and the predicted traffic light is calculated; when the area intersection ratio is larger than a preset threshold value, the predicted state of the traffic light is determined to be the correct traffic light; and classification network prediction is performed according to the correct traffic light. Thus, the high-precision map is introduced on the basis of performing target detection and classification on the traffic light with a convolutional neural network, which effectively improves the detection and classification precision of the traffic light and avoids path planning errors in automatic driving caused by erroneous traffic light target detection and classification.
In the description of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. The meaning of "plurality" is two or more unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A method for detecting and classifying unmanned traffic lights is characterized by comprising the following steps:
acquiring a traffic light image, and performing data enhancement processing on the traffic light image;
training a classification network by using the traffic light image subjected to data enhancement processing to obtain a predicted state of the traffic light;
acquiring map coordinate system coordinates of a real traffic light in a high-precision map, and converting the map coordinate system coordinates to obtain pixel coordinate system coordinates of the traffic light;
calculating an area intersection ratio of the real traffic light and the predicted traffic light;
when the area intersection ratio is larger than a preset threshold value, determining that the predicted state of the traffic light is a correct traffic light;
and performing classification network prediction according to the correct traffic light.
2. The unmanned traffic light detection and classification method of claim 1, wherein converting the map coordinate system coordinates to obtain pixel coordinate system coordinates of the traffic light comprises:
multiplying the coordinates of the real traffic lights in the high-precision map under the map coordinate system by a map coordinate system-to-vehicle body coordinate system transformation matrix to obtain the coordinates of the real traffic lights under a vehicle body coordinate system;
multiplying the coordinates of the real traffic light under the vehicle body coordinate system by a vehicle body coordinate system-to-camera coordinate system transformation matrix to obtain the coordinates of the real traffic light under the camera coordinate system;
and multiplying the coordinates of the real traffic light under the camera coordinate system by the internal reference matrix of the camera to obtain the coordinates of the real traffic light under the pixel coordinate system.
3. The unmanned traffic light detection and classification method of claim 1, wherein obtaining a traffic light image comprises:
collecting images containing traffic lights, and respectively carrying out target detection labeling and classification labeling to form a training data set;
inputting the image containing the traffic light into a target detection network, and performing data enhancement processing;
training the target detection network by using the training data after data enhancement processing, and predicting the coordinates of the traffic lights;
and acquiring the traffic light image according to the coordinates of the traffic light.
4. The unmanned traffic light detection and classification method according to claim 1, wherein the data enhancement processing of the traffic light image comprises:
and performing data enhancement processing on the traffic light image by adopting one or more of median filtering, image sharpening, rotation, mirror image and brightness adjustment.
5. The unmanned traffic light detection and classification method according to claim 1, further comprising:
and when the area intersection ratio is smaller than or equal to the preset threshold value, determining that the predicted state of the traffic light is a false alarm.
6. The unmanned traffic light detection and classification method of claim 3, wherein the target detection network is CenterNet and the classification network is ResNet18.
7. An unmanned traffic light detection and classification device, comprising:
the data processing module is used for acquiring a traffic light image and performing data enhancement processing on the traffic light image;
the training module is used for training the classification network by using the traffic light image subjected to data enhancement processing to obtain the predicted state of the traffic light;
the acquisition module is used for acquiring the map coordinate system coordinates of the real traffic lights in the high-precision map;
the conversion module is used for converting the coordinate of the map coordinate system to obtain the coordinate of the pixel coordinate system of the traffic light;
a calculation module for calculating an area intersection ratio of the real traffic light and the predicted traffic light;
the determining module is used for determining that the predicted state of the traffic light is the correct traffic light when the area intersection ratio is larger than a preset threshold value;
and the prediction module is used for carrying out classification network prediction according to the correct traffic light.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the unmanned traffic light detection classification method according to any of claims 1-6.
9. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the unmanned traffic light detection classification method according to any of claims 1-6.
10. A computer program product, characterized in that instructions in the computer program product, when executed by a processor, perform the unmanned traffic light detection classification method according to any of claims 1-6.
CN202011056163.3A 2020-09-30 2020-09-30 Unmanned traffic light detection and classification method and device Pending CN112183382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011056163.3A CN112183382A (en) 2020-09-30 2020-09-30 Unmanned traffic light detection and classification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011056163.3A CN112183382A (en) 2020-09-30 2020-09-30 Unmanned traffic light detection and classification method and device

Publications (1)

Publication Number Publication Date
CN112183382A true CN112183382A (en) 2021-01-05

Family

ID=73947037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011056163.3A Pending CN112183382A (en) 2020-09-30 2020-09-30 Unmanned traffic light detection and classification method and device

Country Status (1)

Country Link
CN (1) CN112183382A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906617A (en) * 2021-03-08 2021-06-04 济南大学 Driver abnormal behavior identification method and system based on hand detection
CN113177522A (en) * 2021-05-24 2021-07-27 的卢技术有限公司 Traffic light detection and identification method used in automatic driving scene

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305475A (en) * 2017-03-06 2018-07-20 腾讯科技(深圳)有限公司 A kind of traffic lights recognition methods and device
CN109583415A (en) * 2018-12-11 2019-04-05 兰州大学 A kind of traffic lights detection and recognition methods merged based on laser radar with video camera
CN109949594A (en) * 2019-04-29 2019-06-28 北京智行者科技有限公司 Real-time traffic light recognition method
CN110543814A (en) * 2019-07-22 2019-12-06 华为技术有限公司 Traffic light identification method and device
CN110688992A (en) * 2019-12-09 2020-01-14 中智行科技有限公司 Traffic signal identification method and device, vehicle navigation equipment and unmanned vehicle
CN110794405A (en) * 2019-10-18 2020-02-14 北京全路通信信号研究设计院集团有限公司 Target detection method and system based on camera and radar fusion
CN111444810A (en) * 2020-03-23 2020-07-24 东软睿驰汽车技术(沈阳)有限公司 Traffic light information identification method, device, equipment and storage medium
CN111582189A (en) * 2020-05-11 2020-08-25 腾讯科技(深圳)有限公司 Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle
CN111695546A (en) * 2020-06-28 2020-09-22 北京京东乾石科技有限公司 Traffic signal lamp identification method and device for unmanned vehicle

Similar Documents

Publication Publication Date Title
CN109508580B (en) Traffic signal lamp identification method and device
CN109948684B (en) Quality inspection method, device and equipment for laser radar point cloud data labeling quality
CN110135307B (en) Traffic sign detection method and device based on attention mechanism
CN111862228B (en) Occlusion detection method, system, computer device and readable storage medium
CN113096130B (en) Method and device for detecting object defects
CN112183382A (en) Unmanned traffic light detection and classification method and device
CN108830131B (en) Deep learning-based traffic target detection and ranging method
CN111047615A (en) Image-based line detection method and device and electronic equipment
CN112508950B (en) Anomaly detection method and device
Somawirata et al. Road detection based on the color space and cluster connecting
CN112132131A (en) Measuring cylinder liquid level identification method and device
JP2020160840A (en) Road surface defect detecting apparatus, road surface defect detecting method, road surface defect detecting program
CN114565895A (en) Security monitoring system and method based on intelligent society
CN112070750A (en) Leather product defect detection method and device
CN111008956B (en) Beam bottom crack detection method, system, device and medium based on image processing
CN114910891A (en) Multi-laser radar external parameter calibration method based on non-overlapping fields of view
CN114966631A (en) Fault diagnosis and processing method and device for vehicle-mounted laser radar, medium and vehicle
CN113554645A (en) Industrial anomaly detection method and device based on WGAN
CN117292277A (en) Insulator fault detection method based on binocular unmanned aerial vehicle system and deep learning
CN111832418A (en) Vehicle control method, device, vehicle and storage medium
CN114648736B (en) Robust engineering vehicle identification method and system based on target detection
CN115909285A (en) Radar and video signal fused vehicle tracking method
CN113505860B (en) Screening method and device for blind area detection training set, server and storage medium
CN115512098A (en) Electronic bridge inspection system and inspection method
CN114998889A (en) Intelligent identification method and system for immersive three-dimensional image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination