CN112861748B - Traffic light detection system and method in automatic driving


Info

Publication number
CN112861748B
CN112861748B (application CN202110199699.9A)
Authority
CN
China
Prior art keywords
camera
traffic light
coordinate system
map
information
Prior art date
Legal status
Active
Application number
CN202110199699.9A
Other languages
Chinese (zh)
Other versions
CN112861748A (en)
Inventor
苏畅
周路翔
王洪尧
张旸
陈诚
Current Assignee
AutoCore Intelligence Technology Nanjing Co Ltd
Original Assignee
AutoCore Intelligence Technology Nanjing Co Ltd
Priority date
Filing date
Publication date
Application filed by AutoCore Intelligence Technology Nanjing Co Ltd
Priority to CN202110199699.9A
Publication of CN112861748A
Application granted
Publication of CN112861748B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a traffic light detection system and method in automatic driving, comprising: obtaining the coordinates of the pre-detected targets in the map coordinate system; calibrating the relation of the camera coordinate system of the camera relative to the vehicle body coordinate system; acquiring vehicle body attitude information in real time and calculating the current attitude information of the camera; determining the position of the traffic light; and sending the cropped picture containing only the traffic light for color classification to obtain the color of the traffic light. The invention combines positioning information with map information, identifies the color and the remaining seconds of the traffic light by training a target detection model and a classification model for traffic lights, and accelerates the models with an inference engine so as to achieve real-time processing on embedded devices, thereby better solving the poor robustness of traditional traffic light detection methods.

Description

Traffic light detection system and method in automatic driving
Technical Field
The invention relates to a traffic light detection system and method in automatic driving, and belongs to the technical field of intelligent driver assistance.
Background
Automatically detecting the position and state of the traffic light ahead during driving is an important technology in advanced driver assistance and unmanned driving. The detection of traffic lights is often difficult due to complex traffic scenes, drastically changing illumination, and the limited resolution of the camera.
At present, the traditional traffic light detection method processes the image through operations such as threshold segmentation and morphological transformation to obtain the object regions of interest in the image; these regions are then screened layer by layer using specific prior knowledge such as region connectivity, aspect ratio, shape and relative position to finally obtain the region where the traffic light is located, after which the color of the traffic light is judged by setting a color threshold or using a special color space. Such methods rely on many hand-tuned parameters and thresholds, so they generalize poorly across scenes and lighting conditions and are not robust. Therefore, seeking a brand-new traffic light detection mode is of great significance.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a brand-new traffic light detection mode and solve the problem that the traditional traffic light detection method in automatic driving is poor in robustness.
In order to solve the above technical problem, the technical scheme provided by the invention is as follows: a traffic light detection system in automatic driving, comprising: a map loading module, which describes the 3d position information of pre-detected targets in space by establishing a map coordinate system; a sensing module, which comprises a camera and a laser radar, is connected with the map loading module, converts the coordinates of a pre-detected target in the map coordinate system into coordinates in the camera coordinate system through the external parameters of the camera and the laser radar, and converts camera coordinates into image coordinates through the internal parameters of the camera; a positioning module, which is connected with the map loading module and the sensing module, acquires the vehicle body attitude information in real time, calibrates the relation of the camera coordinate system relative to the vehicle body coordinate system, and calculates the current attitude information of the camera according to the calibrated relation; an ROI area detection module, which is connected with the map loading module, the sensing module and the positioning module, and receives the internal and external parameters of the camera, the pose information of the laser radar in the map coordinate system, the image information shot by the camera and the vector map information in the map loading module; after acquiring the 3d position information of a traffic light in the map coordinate system, it converts that 3d position information into 2d coordinate information in the image coordinate system according to the received camera parameters and lidar pose, calculates the distance from the camera to the traffic light according to the current pose of the camera, screens out the traffic lights meeting the requirement through a preset first threshold, calculates the difference between the yaw angle of the traffic light and the yaw angle of the camera, screens the traffic lights again through a preset second threshold, and extracts the position of the traffic light in the image; a cutting module, which is connected with the ROI area detection module and the map loading module and cuts out a picture containing only the traffic light; a sending module, which is connected with the cutting module and sends the cut picture containing only the traffic light to the signal lamp color distinguishing module; a signal lamp color distinguishing module, which receives the traffic light pictures and inputs them into a pre-trained mobilenetv2 classification model for color classification; and a planning module, which is connected with the signal lamp color distinguishing module, receives the encoded id and category information of the traffic light, and plans the vehicle running track.
As a preferable aspect of the traffic light detection system in automatic driving according to the present invention: the system further comprises an object detection module, connected with the ROI area detection module, for accurately detecting the position of the traffic light in the image by using a pre-trained yolov3 object detection model.
In order to solve the above technical problem, the invention also provides the following technical scheme: a traffic light detection method in automatic driving, comprising the following steps: the map loading module loads map information and acquires the coordinates of the pre-detected target in the map coordinate system by using the 3d position information of the pre-detected target marked in the vector map; the relation of the camera coordinate system of the camera in the sensing module relative to the vehicle body coordinate system is calibrated; the vehicle body posture information is acquired in real time through the positioning module, and the current posture information of the camera is calculated according to the calibrated relation between the camera coordinate system and the vehicle body coordinate system; the ROI area detection module receives the internal and external parameters of the camera, the pose information of the laser radar in the map coordinate system, the image information shot by the camera and the vector map information, obtains the 3d position information of the traffic light in the map coordinate system, converts it into 2d coordinate information in the image coordinate system according to the received camera parameters and lidar pose, calculates the distance from the camera to the traffic light according to the current pose of the camera, judges and removes the traffic lights at non-current intersections, calculates the difference between the yaw angle of the traffic light and the yaw angle of the camera, judges and removes the lights of the lateral lane and the opposite lane, and then determines the position of the traffic light; the sending module sends the picture cut by the cutting module, which contains only the traffic light, to the signal lamp color distinguishing module for color classification through a pre-trained mobilenetv2 classification model, obtaining the color of the traffic light; and the id and category information of the traffic light are encoded and sent to the planning module to plan the vehicle running track.
As a preferable aspect of the traffic light detection method in automatic driving according to the present invention, wherein: the vehicle body posture information acquired in real time comprises the current position and orientation information of the vehicle under a map coordinate system.
As a preferable aspect of the traffic light detection method in automatic driving according to the present invention: the method for judging and removing the traffic lights at non-current intersections comprises the following steps: presetting a first threshold; comparing the acquired distance from the camera to the traffic light with the first threshold; when the distance from the camera to the traffic light is smaller than the first threshold, the traffic light is kept, otherwise it is removed.
As a preferable aspect of the traffic light detection method in automatic driving according to the present invention: the first threshold is 100 meters.
As a preferable aspect of the traffic light detection method in automatic driving according to the present invention: judging and removing the lights of the lateral lane and the opposite lane comprises the following steps: presetting a second threshold; comparing the calculated difference between the yaw angle of the traffic light and the yaw angle of the camera with the second threshold; when the difference is smaller than the second threshold, the traffic light is kept, otherwise it is excluded.
As a preferable aspect of the traffic light detection method in automatic driving according to the present invention: when the ROI area detection module extracts the position of a traffic light, the ROI range is expanded, and detection is then carried out with the pre-trained yolov3 target detection model.
The invention has the following technical effects: the traffic light detection method provided by the invention combines positioning information with map information, identifies the color and the remaining seconds of the traffic light by training a target detection model and a classification model for traffic lights, and accelerates the models with an inference engine so as to achieve real-time processing on embedded devices, thereby better solving the poor robustness of traditional traffic light detection methods.
Drawings
The invention will be further explained with reference to the drawings.
FIG. 1 is a flow chart of a detection method provided by the present invention;
FIG. 2 is a block diagram of a detection system provided by the present invention;
FIG. 3 is an overall block diagram of the detection system provided by the present invention;
FIG. 4 and FIG. 5 are, respectively, an original image of a first detection scene and the corresponding detection result obtained with the invention;
FIG. 6 and FIG. 7 are, respectively, an original image of a second detection scene and the corresponding detection result;
FIG. 8 and FIG. 9 are, respectively, an original image of a third detection scene and the corresponding detection result.
Detailed Description
Example 1
The traditional traffic light detection method is poor in robustness and requires many parameters to be tuned.
Therefore, referring to fig. 1 and fig. 3 to 9, the present invention provides a traffic light detection method in automatic driving, including the following steps:
s1, the map loading module 100 loads map information and obtains the coordinate of the pre-detected target in a map coordinate system by using the 3d position information of the pre-detected target in the vector map mark space;
the 3d information of the pre-detection target is marked artificially in the map (the pre-detection target refers to a plurality of targets marked in the map, such as pedestrians, signs, buildings, horse routes, traffic lights, and the like).
S2, calibrating the relation between the camera coordinate system of the camera in the sensing module 200 and the vehicle body coordinate system;
the camera is fixed after being fixed relative to the position of the vehicle body, so that the relation between a camera coordinate system and the vehicle body coordinate system can be calibrated in advance, the vehicle body posture information can be acquired in real time through the positioning module 300, and the current posture information of the camera can be calculated according to the calibrated relation between the camera coordinate system and the vehicle body coordinate system.
S3, acquiring the posture information of the vehicle body in real time through the positioning module 300, and calculating the current posture information of the camera according to the calibrated relation between the camera coordinate system and the vehicle body coordinate system;
it should be noted that the body posture information autonomously acquired by the positioning module 300 in real time includes the current position and orientation information of the vehicle in the map coordinate system.
The conversion relation from the camera coordinate system to the map coordinate system is calculated here: the positioning module 300 provides the transformation matrix from the laser radar coordinate system to the map coordinate system, the transformation matrix of the camera relative to the laser radar is calibrated in advance, and the transformation matrix from the camera coordinate system to the map coordinate system is obtained by multiplying the two matrices.
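For illustration, a minimal sketch of this matrix composition in C++ with the Eigen library follows; the function and variable names are assumptions for illustration only and are not part of the patent.

    #include <Eigen/Dense>

    // Sketch: compose the camera-to-map transform from the two matrices
    // described above. T_map_lidar is provided by the positioning module;
    // T_lidar_camera is the pre-calibrated camera-to-lidar extrinsic.
    // Both are 4x4 homogeneous transformation matrices.
    Eigen::Matrix4d cameraToMap(const Eigen::Matrix4d& T_map_lidar,
                                const Eigen::Matrix4d& T_lidar_camera) {
        return T_map_lidar * T_lidar_camera;  // camera -> lidar -> map
    }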
S4, the ROI area detection module 400 receives the internal and external parameters of the camera, the pose information of the laser radar in the map coordinate system, the image information shot by the camera and the vector map information; it obtains the 3d position information of the traffic light in the map coordinate system, converts it into 2d coordinate information in the image coordinate system according to the received camera parameters and lidar pose, calculates the distance from the camera to the traffic light according to the current pose of the camera, judges and removes the traffic lights at non-current intersections, calculates the difference between the yaw angle of the traffic light and the yaw angle of the camera, judges and removes the lights of the lateral lane and the opposite lane, and then determines the position of the traffic light;
it should be noted that: the map contains 3d information of the object, while the image contains only 2d information of the object. The invention is converted from a 3d position (a traffic light position in a map) to a 2d position (a traffic light position in an image), and the conversion needs internal reference of a camera, external reference of the camera and a laser radar, and the specific conversion relationship is as follows:
firstly, coordinates of a camera coordinate system are equal to coordinates of an external parameter matrix map coordinate system;
secondly, an image coordinate system is equal to an internal reference matrix camera coordinate system;
the camera has internal and external parameters, and the external parameters express a conversion relation under different coordinate systems, namely a relation between a camera coordinate system and a laser radar coordinate system, which is a conversion matrix from a map coordinate system to the camera coordinate system and is obtained by multiplying the conversion matrix between the camera and the laser radar and the pose of the laser radar in the map coordinate system; the reference expresses how to convert a 3d point in the camera coordinate system into a 2d point in the image coordinate system.
Further, the step of judging and removing the traffic lights of the non-current intersection comprises the following steps:
presetting a first threshold;
and comparing the acquired distance between the camera and the traffic light with the first threshold value, and when the distance between the camera and the traffic light is smaller than the first threshold value, reserving the traffic light, otherwise, removing the traffic light.
Wherein, obtaining the distance from the camera to the traffic light comprises: calculating, by the following formula, the Euclidean distance between the coordinates of the light in the map and the coordinates of the camera in the map coordinate system at the current moment; when the calculated distance is smaller than the first threshold, the current traffic light is considered to be a traffic light visible to the camera and is kept, so that traffic lights not at the current intersection are excluded.
The formula is as follows:

d = √((x1 − x2)² + (y1 − y2)²)

wherein (x1, y1) is the position information of the traffic light described in the map file, and (x2, y2) is the coordinate position of the camera in the map at the current moment.
Preferably, the first threshold is 100 meters.
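A minimal sketch of this first screening step follows; the struct and function names are illustrative assumptions.

    #include <cmath>

    struct MapPoint { double x, y; };

    // Sketch: keep a traffic light only if its Euclidean distance to the
    // camera, both expressed in map coordinates, is below the first
    // threshold (100 meters in the preferred embodiment).
    bool isVisibleByDistance(const MapPoint& light, const MapPoint& camera,
                             double firstThreshold = 100.0) {
        double d = std::hypot(light.x - camera.x, light.y - camera.y);
        return d < firstThreshold;
    }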
Further, the determination and elimination of the lamps of the lateral lane and the opposite lane includes the steps of:
presetting a second threshold;
comparing the calculated difference between the yaw angle of the traffic light and the yaw angle of the camera, when the difference between the yaw angle of the traffic light and the yaw angle of the camera is smaller than a second threshold value, keeping the traffic light, otherwise, excluding the traffic light.
Wherein comparing the difference between the yaw angle of the traffic light and the yaw angle of the camera comprises:
calculating the yaw angle of the traffic light in the map coordinate system according to the position of the light in the map, using the formula t1-yaw = atan2(y, x), where (y, x) is the position of the light described in the map file and t1-yaw represents the yaw angle of the traffic light; here atan2 denotes the standard two-argument arctangent: atan2(y, x) = arctan(y/x) for x > 0; arctan(y/x) + π for x < 0, y ≥ 0; arctan(y/x) − π for x < 0, y < 0; π/2 for x = 0, y > 0; and −π/2 for x = 0, y < 0;
solving the yaw angle of the camera under a map coordinate system, which specifically comprises the following steps:
and I, obtaining a vector z which is (0,0,1) in a map coordinate system: z-map-M x z, where M is a rotation matrix from the camera coordinate system to the map coordinate system;
and II, calculating the yaw angle of the camera pose in a map coordinate system: camera-yaw-atan 2(z-map.y, z-map.x), where (y, x) is the position information of the camera at this moment in the map coordinate system;
III. Calculating the angle difference between t1-yaw and camera-yaw by taking the dot product of the vector v1 = (std::cos(t1-yaw), std::sin(t1-yaw)) and the vector v2 = (std::cos(camera-yaw), std::sin(camera-yaw));
it should be noted that: the dot product of the vectors is the degree of similarity between the vectors, the angle between the traffic light and the camera is calculated as: and d, acos (v1.dot (v2)), and the traffic light is considered as the traffic light visible to the camera at the current position only when the diff-angle is smaller than a second threshold value.
Preferably, the second threshold is set to 90 degrees.
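A minimal sketch of steps I to III follows; z_map_x and z_map_y are the components of z-map = M × (0,0,1) from step I, and the names are illustrative assumptions.

    #include <algorithm>
    #include <cmath>

    // Sketch: second screening step. t1_yaw is the light's yaw from the map
    // (t1-yaw = atan2(y, x)); the default second threshold is 90 degrees.
    bool isVisibleByYaw(double t1_yaw, double z_map_x, double z_map_y,
                        double secondThreshold = 1.5707963267948966) {
        double camera_yaw = std::atan2(z_map_y, z_map_x);
        // Dot product of the two unit heading vectors = cosine of the angle.
        double dot = std::cos(t1_yaw) * std::cos(camera_yaw)
                   + std::sin(t1_yaw) * std::sin(camera_yaw);
        double diff_angle = std::acos(std::clamp(dot, -1.0, 1.0));
        return diff_angle < secondThreshold;  // keep the light if true
    }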
S5, the sending module 600 sends the picture which only contains the traffic lights and is cut by the cutting module 500 to the signal light color distinguishing module 700 for color classification through a pre-trained mobilenetv2 classification model, and the colors of the traffic lights are obtained;
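A minimal sketch of this crop-and-classify step follows, assuming the mobilenetv2 classifier has been exported to ONNX and loaded with OpenCV's dnn module; the 224x224 input size, the 1/255 scaling and the class ordering are illustrative assumptions.

    #include <opencv2/core.hpp>
    #include <opencv2/dnn.hpp>

    // Sketch: crop the region containing only the traffic light and run the
    // color classifier; returns the index of the highest-scoring class.
    int classifyLightColor(const cv::Mat& frame, const cv::Rect& roi,
                           cv::dnn::Net& net) {
        // Clamp the ROI to the image bounds and crop.
        cv::Mat crop = frame(roi & cv::Rect(0, 0, frame.cols, frame.rows));
        cv::Mat blob = cv::dnn::blobFromImage(crop, 1.0 / 255.0,
                                              cv::Size(224, 224));
        net.setInput(blob);
        cv::Mat scores = net.forward();
        cv::Point classId;
        cv::minMaxLoc(scores, nullptr, nullptr, nullptr, &classId);
        return classId.x;  // e.g. an index into {red, yellow, green, ...}
    }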
and S6, coding the id and the category information of the traffic lights and sending the coded traffic lights to the planning module 800 for planning the running track of the vehicle.
Furthermore, in view of positioning accuracy, the given ROI may deviate and fail to frame the traffic light precisely; therefore, the method further includes expanding the ROI range when the ROI area detection module 400 extracts the position of the traffic light, and then detecting within the expanded region using the pre-trained yolov3 target detection model (a sketch of this ROI expansion is given after the steps below). The specific steps are as follows:
I. Data preparation: a test vehicle equipped with a camera is driven on a test road to collect traffic light pictures. After a certain number of pictures are collected, data enhancement is applied to them, including changing contrast, changing brightness, Gaussian blur, Gaussian noise, flipping and the like, to enlarge the data set, and the positions of the traffic lights are marked with a labeling tool to form label files.
II. Running the yolov3 training program, whose inputs are the training pictures and the corresponding label files from step I. After a certain number of iterations, training is complete when the loss no longer decreases.
III. Loading the model, receiving the camera picture, and running inference; the inference result is the position information of the traffic light in the image.
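The ROI expansion mentioned above can be sketched as follows; the expansion factor of 2 is an illustrative assumption, as the patent does not specify it.

    #include <opencv2/core.hpp>

    // Sketch: enlarge the projected ROI around its center to tolerate
    // positioning error, then clamp it to the image bounds; the yolov3
    // detector is run on the expanded region.
    cv::Rect expandRoi(const cv::Rect& roi, const cv::Size& imageSize,
                       double factor = 2.0) {
        int w = static_cast<int>(roi.width * factor);
        int h = static_cast<int>(roi.height * factor);
        int x = roi.x + (roi.width - w) / 2;   // keep the same center
        int y = roi.y + (roi.height - h) / 2;
        return cv::Rect(x, y, w, h) &
               cv::Rect(0, 0, imageSize.width, imageSize.height);
    }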
What needs to be additionally stated is that the yolov3 training program used by the method improves on the base yolov3 target detection model. Specifically, with darknet53 as the feature extractor, two yolo layers are deleted so that training concentrates on one yolo layer: after the yolo layers at layer 82 and layer 94 are deleted, training concentrates on the yolo layer at layer 106. The upsampling rate of the upsampling layer at layer 97 is increased to 4x upsampling, and layer 98 is changed so that the output of layer 97 is connected with the output of layer 11. An spp layer is added to pool the feature maps at different scales and then concatenate them, which completes the fusion of local features and global features at the feature-map level.
Referring to fig. 4 to 9, three specific comparisons show that detection using the present invention is localized more precisely than with the conventional detection technique.
The method combines high-precision map information with a deep-learning-based target detection algorithm and can locate the traffic light of interest in the image more accurately; compared with traditional image processing algorithms, classifying the traffic light colors by deep learning achieves better accuracy and robustness, and thus suits more scenes and lighting conditions.
Example 2
Referring to fig. 2 and 3, in order to solve the problem of poor robustness of the conventional detection, the present invention further provides a traffic light detection system in automatic driving, including:
the map loading module 100 is used for describing 3d position information of a pre-detection target in a space by establishing a map coordinate system;
the sensing module 200 comprises a camera and a laser radar, is connected with the map loading module 100, correspondingly converts the coordinates of the pre-detection target under the map coordinate system into the coordinates of the camera coordinate system under the camera through the external parameters of the camera and the laser radar, and correspondingly converts the coordinates of the camera coordinate system into the coordinates of the image coordinate system through the internal parameters of the camera;
the positioning module 300 is connected with the map loading module 100 and the sensing module 200, and is used for acquiring the posture information of the vehicle body and calibrating the relation of the camera coordinate system relative to the vehicle body coordinate system in real time and calculating the current posture information of the camera according to the calibrated relation of the camera coordinate system relative to the vehicle body coordinate system;
the ROI area detection module 400 is connected to the map loading module 100, the sensing module 200, and the positioning module 300, and configured to receive inside and outside reference information of a camera, pose information of a laser radar in a map coordinate system, image information photographed by the camera, and vector map information in the map loading module 100, convert 3d position information of a traffic light in the map coordinate system into 2d coordinate information in the image coordinate system according to the received inside and outside reference information of the camera and pose information of the laser radar in the map coordinate system after obtaining the 3d position information of the traffic light in the map coordinate system, calculate a distance from the camera to the traffic light according to pose information expressed as 2d coordinates after the camera is currently converted, screen out a traffic light meeting a requirement by a preset first threshold, screen out a traffic light meeting the requirement by calculating a difference between a yaw angle of the traffic light and a yaw angle of the camera, and screen the traffic light again by a preset second threshold, extracting the position of the traffic light in the image;
the cutting module 500 is connected with the ROI area detection module 400 and the map loading module 100, and cuts out a picture only containing traffic lights;
the sending module 600 is connected to the clipping module 500, and is configured to send the clipped picture that only includes traffic lights to the signal light color distinguishing module 700;
the signal light color distinguishing module 700 is used for receiving the traffic light pictures and inputting the traffic light pictures into a pre-trained mobilenetv2 classification model to classify colors;
and the planning module 800 is connected with the signal lamp color judging module 700, receives the encoded id and category information of the traffic light, and is used for planning the vehicle running track.
Further, an object detection module 900 is included, connected to the ROI area detection module 400, for accurately detecting the position of the traffic light in the image by using the pre-trained yolov3 object detection model.
The method combines high-precision map information with a deep-learning-based target detection algorithm and can locate the traffic light of interest in the image more accurately; compared with traditional image processing algorithms, classifying the traffic light colors by deep learning achieves better accuracy and robustness, and thus suits more scenes and lighting conditions.
The present invention is not limited to the specific technical solutions described in the above embodiments, and the present invention may have other embodiments in addition to the above embodiments. It will be understood by those skilled in the art that various changes, substitutions of equivalents, and alterations can be made without departing from the spirit and scope of the invention.

Claims (8)

1. A traffic light detection system in autonomous driving, comprising:
the map loading module (100) is used for describing 3d position information of the pre-detection target in the space by establishing a map coordinate system;
the sensing module (200) comprises a camera and a laser radar, is connected with the map loading module (100), correspondingly converts the coordinates of the pre-detection target under a map coordinate system into the coordinates of a camera coordinate system under the camera through the external parameters of the camera and the laser radar, and correspondingly converts the coordinates of the camera coordinate system into the coordinates of an image coordinate system through the internal parameters of the camera;
the positioning module (300) is connected with the map loading module (100) and the sensing module (200) and is used for acquiring the posture information of the vehicle body in real time and calibrating the relation between the camera coordinate system and the vehicle body coordinate system and calculating the current posture information of the camera according to the calibrated relation between the camera coordinate system and the vehicle body coordinate system;
the ROI area detection module (400) is connected with the map loading module (100), the sensing module (200) and the positioning module (300), and is used for receiving the internal and external reference information of the camera, the pose information of the laser radar in the map coordinate system, the image information shot by the camera and the vector map information in the map loading module (100); after acquiring the 3d position information of the traffic light in the map coordinate system, converting the 3d position information of the traffic light into 2d coordinate information in the image coordinate system according to the received internal and external reference information of the camera and the pose information of the laser radar in the map coordinate system, calculating the distance from the camera to the traffic light according to the current pose of the camera, screening out the traffic lights meeting the requirements through a preset first threshold value, calculating the difference value between the yaw angle of the traffic light and the yaw angle of the camera, screening the traffic lights again through a preset second threshold value, and extracting the positions of the traffic lights in the image;
the cutting module (500) is connected with the ROI area detection module (400) and the map loading module (100) and cuts out pictures only containing traffic lights;
the sending module (600) is connected with the cutting module (500) and is used for sending the cut picture only containing traffic lights to the signal light color distinguishing module (700);
the signal lamp color distinguishing module (700) is used for receiving the pictures of the traffic lights and inputting the pictures into a pre-trained mobilenetv2 classification model for color classification;
the planning module (800) is connected with the signal lamp color judging module (700), receives the encoded id and category information of the traffic light, and is used for planning the vehicle running track;
the specific method for calculating the difference between the yaw angle of the traffic light and the yaw angle of the camera is as follows:
calculating the yaw angle of the traffic light according to the formula t1-yaw = atan2(y, x), wherein (y, x) is the position of the light described in the map file, and t1-yaw represents the yaw angle of the traffic light;
acquiring the yaw angle of the camera under a map coordinate system, which specifically comprises the following steps:
obtaining the vector corresponding to z = (0,0,1) in the map coordinate system: z-map = M × z, where M is the rotation matrix from the camera coordinate system to the map coordinate system;
acquiring the yaw angle of the camera pose in the map coordinate system: camera-yaw = atan2(z-map.y, z-map.x), where z-map.y and z-map.x are the y and x components of z-map in the map coordinate system;
obtaining the angular difference between t1-yaw and camera-yaw: the dot product of the vector v1 = (std::cos(t1-yaw), std::sin(t1-yaw)) and the vector v2 = (std::cos(camera-yaw), std::sin(camera-yaw)) is obtained.
2. The traffic light detection system in automatic driving according to claim 1, characterized in that: the system further comprises an object detection module (900), connected with the ROI area detection module (400), for accurately detecting the position of the traffic light in the image by utilizing the pre-trained yolov3 object detection model.
3. A traffic light detection method in automatic driving based on the traffic light detection system of claim 1, characterized by comprising the steps of:
the map loading module (100) loads map information, and acquires the coordinates of a pre-detection target in a map coordinate system by using the 3d position information of the pre-detection target in a vector map marking space;
calibrating the relation of a camera coordinate system of a camera in the sensing module (200) relative to a vehicle body coordinate system;
acquiring vehicle body posture information in real time through a positioning module (300), and calculating the current posture information of a camera according to the calibrated relation between a camera coordinate system and a vehicle body coordinate system;
the ROI area detection module (400) receives internal and external reference information of a camera, pose information of a laser radar in a map coordinate system, image information shot by the camera and vector map information, obtains 3d position information of a traffic light in the map coordinate system, converts the 3d position information of the traffic light into 2d coordinate information in the image coordinate system according to the received internal and external reference information of the camera and the pose information of the laser radar in the map coordinate system, calculates the distance from the camera to the traffic light according to the posture information expressed as the 2d coordinate after the current conversion of the camera, judges and removes traffic lights at non-current intersections, calculates the difference between the yaw angle of the traffic light and the yaw angle of the camera, judges and removes lights of a transverse lane and an opposite lane, and then determines the position of the traffic light;
the sending module (600) sends the picture which is cut by the cutting module (500) and contains only the traffic light to the signal lamp color distinguishing module (700) for color classification through a pre-trained mobilenetv2 classification model, and the color of the traffic light is obtained;
and the encoded id and category information of the traffic light are sent to the planning module (800) to plan the vehicle running track.
4. The traffic light detection method in automatic driving according to claim 3, characterized in that: the vehicle body posture information acquired in real time comprises the current position and orientation information of the vehicle under a map coordinate system.
5. The method for detecting a traffic light in automatic driving according to claim 3 or 4, wherein the step of judging and removing a traffic light at a non-current intersection comprises the steps of:
presetting a first threshold;
and comparing the acquired distance between the camera and the traffic light with the first threshold value, and when the distance between the camera and the traffic light is smaller than the first threshold value, reserving the traffic light, otherwise, removing the traffic light.
6. The method for detecting a traffic light in automatic driving according to claim 5, characterized in that: the first threshold is 100 meters.
7. The method for detecting a traffic light in automatic driving according to claim 3 or 4, wherein judging and removing the lights of the lateral lane and the opposite lane comprises the steps of:
presetting a second threshold;
comparing the calculated difference between the yaw angle of the traffic light and the yaw angle of the camera, when the difference between the yaw angle of the traffic light and the yaw angle of the camera is smaller than the second threshold value, keeping the traffic light, otherwise, excluding the traffic light.
8. The traffic light detection method in automatic driving according to claim 7, characterized in that: when the ROI area detection module (400) extracts the position of a traffic light, the ROI range is expanded, and detection is then carried out through the pre-trained yolov3 target detection model.
CN202110199699.9A 2021-02-22 2021-02-22 Traffic light detection system and method in automatic driving Active CN112861748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110199699.9A CN112861748B (en) 2021-02-22 2021-02-22 Traffic light detection system and method in automatic driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110199699.9A CN112861748B (en) 2021-02-22 2021-02-22 Traffic light detection system and method in automatic driving

Publications (2)

Publication Number Publication Date
CN112861748A (en) 2021-05-28
CN112861748B (en) 2022-07-12

Family

ID=75989892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110199699.9A Active CN112861748B (en) 2021-02-22 2021-02-22 Traffic light detection system and method in automatic driving

Country Status (1)

Country Link
CN (1) CN112861748B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113452904B (en) * 2021-06-18 2022-11-04 北京三快在线科技有限公司 Traffic signal lamp detection method and device
CN113763731B (en) * 2021-09-28 2022-12-06 苏州挚途科技有限公司 Method and system for reconstructing traffic light information of road intersection by high-precision map
CN114743176A (en) * 2022-04-12 2022-07-12 中国第一汽车股份有限公司 Detection method and detection device for special traffic lights
CN114694123B (en) * 2022-05-30 2022-09-27 阿里巴巴达摩院(杭州)科技有限公司 Traffic signal lamp sensing method, device, equipment and storage medium
CN116363624A (en) * 2023-02-07 2023-06-30 辉羲智能科技(上海)有限公司 Intelligent traffic light identification device with controllable vehicle-mounted angle
CN117496486B (en) * 2023-12-27 2024-03-26 安徽蔚来智驾科技有限公司 Traffic light shape recognition method, readable storage medium and intelligent device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489324B (en) * 2013-09-22 2015-09-09 北京联合大学 A kind of based on unpiloted real-time dynamic traffic light detection identification method
CN105489035B (en) * 2015-12-29 2018-03-30 大连楼兰科技股份有限公司 Apply the method that traffic lights are detected in active driving technology

Also Published As

Publication number Publication date
CN112861748A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112861748B (en) Traffic light detection system and method in automatic driving
CN105930819B (en) Real-time city traffic lamp identifying system based on monocular vision and GPS integrated navigation system
CN107235044B (en) A kind of restoring method realized based on more sensing datas to road traffic scene and driver driving behavior
CN109945858B (en) Multi-sensing fusion positioning method for low-speed parking driving scene
CN111626217A (en) Target detection and tracking method based on two-dimensional picture and three-dimensional point cloud fusion
CN105260699B (en) A kind of processing method and processing device of lane line data
CN101929867B (en) Clear path detection using road model
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN114413881B (en) Construction method, device and storage medium of high-precision vector map
CN112396650A (en) Target ranging system and method based on fusion of image and laser radar
Hu et al. A multi-modal system for road detection and segmentation
EP2574958A1 (en) Road-terrain detection method and system for driver assistance systems
CN109583415A (en) A kind of traffic lights detection and recognition methods merged based on laser radar with video camera
CN114359181B (en) Intelligent traffic target fusion detection method and system based on image and point cloud
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
Ma et al. Crlf: Automatic calibration and refinement based on line feature for lidar and camera in road scenes
CN106446785A (en) Passable road detection method based on binocular vision
US20220083792A1 (en) Method and device for providing data for creating a digital map
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
Joy et al. Real time road lane detection using computer vision techniques in python
Liu et al. Real-time traffic light recognition based on smartphone platforms
CN115409965A (en) Mining area map automatic generation method for unstructured roads
CN114648549A (en) Traffic scene target detection and positioning method fusing vision and laser radar
CN110909656A (en) Pedestrian detection method and system with integration of radar and camera
CN118038396A (en) Three-dimensional perception method based on millimeter wave radar and camera aerial view fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 210012 room 401-404, building 5, chuqiaocheng, No. 57, Andemen street, Yuhuatai District, Nanjing, Jiangsu Province

Patentee after: AUTOCORE INTELLIGENT TECHNOLOGY (NANJING) Co.,Ltd.

Address before: 211800 building 12-289, 29 buyue Road, Qiaolin street, Pukou District, Nanjing City, Jiangsu Province

Patentee before: AUTOCORE INTELLIGENT TECHNOLOGY (NANJING) Co.,Ltd.

CP03 Change of name, title or address

Address after: 12th Floor, Building 5, Jieyuan Financial City, No. 55 Andemen Street, Yuhuatai District, Nanjing City, Jiangsu Province, China 210012

Patentee after: AUTOCORE INTELLIGENT TECHNOLOGY (NANJING) Co.,Ltd.

Country or region after: China

Address before: 210012 room 401-404, building 5, chuqiaocheng, No. 57, Andemen street, Yuhuatai District, Nanjing, Jiangsu Province

Patentee before: AUTOCORE INTELLIGENT TECHNOLOGY (NANJING) Co.,Ltd.

Country or region before: China