CN109949594B - Real-time traffic light identification method - Google Patents


Info

Publication number
CN109949594B
Authority
CN
China
Prior art keywords
traffic light
information
image
road
vehicle
Prior art date
Legal status
Active
Application number
CN201910354808.2A
Other languages
Chinese (zh)
Other versions
CN109949594A (en
Inventor
李慧慧
熊祺
张放
李晓飞
王肖
张德兆
霍舒豪
Current Assignee
Beijing Idriverplus Technologies Co Ltd
Original Assignee
Beijing Idriverplus Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Idriverplus Technologies Co Ltd filed Critical Beijing Idriverplus Technologies Co Ltd
Priority to CN201910354808.2A priority Critical patent/CN109949594B/en
Publication of CN109949594A publication Critical patent/CN109949594A/en
Application granted granted Critical
Publication of CN109949594B publication Critical patent/CN109949594B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a real-time traffic light identification method comprising the following steps: acquiring traffic light information and fusing it with an original map to generate a labeled map; associating the Road id of the road segment and/or the Lane id of the lane in front of a stop line with the traffic light information to generate a first and/or second association information table; determining the Road id of the road segment and/or the Lane id of the lane from the vehicle's current position and the labeled map; when the vehicle's distance to the stop line is within a preset range, judging from the Road id and the first association information table, and/or the Lane id and the second association information table, whether traffic light information exists for the road segment in front of the stop line; if so, acquiring the absolute position of the lamp panel frame from the labeled map and converting it into pixel position information in a first image; and detecting and classifying the first ROI with a deep learning algorithm to obtain the state of the traffic light corresponding to the traffic light's id. The accuracy and speed of visual recognition are thereby improved.

Description

Real-time traffic light identification method
Technical Field
The invention relates to the technical field of data processing, in particular to a real-time traffic light identification method.
Background
The intelligent driving technology is a hot topic in recent years, and brings subversive changes in the fields of relieving traffic jam, improving road safety, reducing air pollution and the like. Traffic light identification is an essential important component of an intelligent driving system, and correct identification of traffic light signals plays a key role in outdoor safe navigation of the intelligent driving system. Therefore, how to quickly and accurately identify the position and color of the traffic light and how to reasonably decide on/off of the traffic light by the intelligent driving system becomes a key direction of attention of researchers.
At present, the more widely applied traffic light identification methods fall mainly into two categories: methods based on a communication protocol, and methods based on a map and positioning.
The communication-protocol-based method is commonly known as V2X, which establishes communication between a traffic light and an autonomous vehicle. A transmitter mounted on the traffic light broadcasts the light's state information; the receiver on the vehicle picks up this signal in real time over a pre-arranged communication interface; and the autonomous vehicle then starts or stops according to the received color signal and the actual road conditions. The V2X communication range extends up to a thousand metres, and traffic light information can be transmitted to the vehicle accurately and without interference. However, it requires a large amount of supporting hardware, so the cost is too high for wide deployment in real-world scenarios.
The map-and-positioning-based traffic light identification method improves the reliability of state recognition by means of an accurate 3D map and self-localization, and mainly comprises three parts: high-precision map annotation, region-of-interest (ROI) generation, and visual traffic light detection and classification. First, the absolute position of the traffic light is marked in the map; that absolute position is then converted into an image region, so that only the ROI relevant to the traffic light is extracted from the image captured by the vehicle-mounted camera; finally, visual detection and classification are performed within the ROI. The difficulty of this approach lies in real-time performance and robustness, and existing algorithms cannot fully satisfy the platform's requirements for both. Moreover, existing algorithms define neither a reasonable map protocol, nor a real-time map query scheme, nor a start-stop decision; and a single camera is limited in either field of view or range, so it cannot cover the required detection field and distance during driving, and out-of-range projections frequently occur.
Disclosure of Invention
The embodiment of the invention aims to provide a real-time traffic light identification method so as to solve the problems of low robustness and out-of-range projection in traffic light identification in the prior art.
In order to solve the above problem, in a first aspect, the present invention provides a real-time traffic light identification method, including:
acquiring traffic light information; the traffic light information comprises an identifier id of the traffic light and absolute position information of a lamp panel frame of the traffic light;
fusing the traffic light information and the original map to generate a labeled map; the labeling map comprises a plurality of roads; each of the roads includes a plurality of section information; each of the Road section information includes Road id of a Road section and a plurality of lane information; each of the Lane information includes Lane id of a Lane;
associating Road id of a corresponding Road section in front of a stop line with the traffic light information to generate a first associated information table, and/or associating Lane id of a corresponding Lane in front of the stop line with the traffic light information to generate a second associated information table;
acquiring the current position and a first image of a vehicle during running; the first image comprises an image of a traffic light acquired by a first image acquisition device;
determining the Road id of a road section and/or the Lane id of a lane corresponding to the current position according to the current position of the vehicle and the labeled map;
when the distance between the current position and the stop line is within a preset detection range, judging whether traffic light information exists in the Road section in front of the stop line or not according to the Road id of the Road section corresponding to the current position and the first associated information table, and/or according to the Lane id of the Lane of the Road section corresponding to the current position and the second associated information table;
when the traffic light information exists, acquiring absolute position information of the lamp panel frame in the labeled map;
converting the absolute position information into pixel position information of the lamp panel frame in the first image;
determining a first ROI (region of interest) according to the pixel position information and the image of the traffic light acquired by the first image acquisition device;
and detecting and classifying the first ROI according to a deep learning algorithm to obtain the state of the traffic light corresponding to the id of the traffic light.
In a possible implementation manner, the position information of the lamp panel frame of the traffic light includes longitude and latitude of four corner points of the lamp panel frame of the traffic light and height of each corner point from the ground.
In one possible implementation, the first image acquisition device is a tele camera.
In one possible implementation, the method further includes:
acquiring a second image when acquiring the current position of the vehicle and the first image; the second image includes an image of a traffic light captured with a second image capture device.
In one possible implementation, the method further includes:
when the pixel position information of the lamp panel frame in the first image is out of range, converting the absolute position information of the lamp panel frame into pixel position information in the second image;
determining a second ROI according to the pixel position information of the lamp panel frame in the second image and the image of the traffic light acquired by the second image acquisition device;
and detecting and classifying the second ROI according to a deep learning algorithm to obtain the state of the traffic light corresponding to the id of the traffic light.
In one possible implementation, the second image acquisition device is a short-focus camera.
In one possible implementation, after obtaining the state of the traffic light, the method further includes:
acquiring current speed information and acceleration information of a vehicle;
calculating the distance between the current position of the vehicle and the stop line according to the current position of the vehicle and the labeled map;
and calculating the state of the vehicle at the intersection according to the speed information, the acceleration information, the distance and the state of the traffic light.
In a second aspect, the invention provides an apparatus comprising a memory for storing a program and a processor for performing the method of any of the first aspects.
In a third aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of the first aspect.
In a fourth aspect, the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the first aspects.
By applying the real-time traffic light identification method provided by the invention, only two high-definition camera sensors are needed, so the cost is low; the sensors are simple to install, and the combination of long-focus and short-focus cameras fully covers the detection range and distance of the traffic light; the map query scheme can acquire traffic light signals at any position; combining high-definition map annotation with the visual detection range improves both the accuracy and the speed of visual identification, fully meeting the real-time requirements that unmanned logistics vehicles, unmanned sweeping vehicles and unmanned passenger vehicles on the market place on traffic light identification; and the reasonable start-stop decision both reduces wear on the unmanned vehicle and provides a comfortable experience for passengers.
Drawings
Fig. 1 is a schematic flow chart of a real-time traffic light identification method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of absolute position information of a lamp panel frame according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of road section information according to a first embodiment of the present invention;
FIG. 4A is a schematic diagram of a first ROI corresponding to a Road id according to an embodiment of the present invention;
fig. 4B is a schematic diagram of a first ROI corresponding to Lane id according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a lamp-changing shutdown strategy according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a schematic flow chart of a real-time traffic light identification method according to an embodiment of the present invention. The method is used in a vehicle, in particular in an autonomous vehicle. As shown in fig. 1, the method comprises the steps of:
step 101, acquiring traffic light information; the traffic light information includes an identification id of the traffic light and absolute position information of a lamp panel frame of the traffic light.
Specifically, the absolute position information of the lamp panel frame of the traffic light includes longitude and latitude of four corner points of the lamp panel frame of the traffic light, and a height of each corner point from the ground.
In the map collection stage, a map collection device may be used to collect the map. For example and without limitation, the map collection device may be an autonomous vehicle, an intelligent robot, or another autonomous vehicle with a map collection function, such as an autonomous logistics vehicle or an autonomous sweeping vehicle that collects the map while performing its normal work.
By way of example, the position information of the lamp panel frame of the traffic light may be acquired by using a Global Positioning System (GPS) or Global Navigation Satellite System (GNSS) receiver on the map collection device. The collected absolute position information of the lamp panel frame is shown in fig. 2.
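For illustration, one traffic light record as described above can be sketched as a small data structure. The field names and coordinate values below are assumptions for the sketch, not part of the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

# One corner of the lamp panel frame: (longitude, latitude) in degrees
# plus height above the ground in metres. Values are illustrative.
Corner = Tuple[float, float, float]

@dataclass
class TrafficLightInfo:
    light_id: int            # identifier id of the traffic light
    corners: List[Corner]    # the four corner points of the lamp panel frame

    def __post_init__(self):
        if len(self.corners) != 4:
            raise ValueError("a lamp panel frame has exactly four corners")

# Example record for one light (coordinates made up for illustration).
light = TrafficLightInfo(
    light_id=1,
    corners=[(116.3975, 39.9087, 5.2), (116.3976, 39.9087, 5.2),
             (116.3976, 39.9087, 4.8), (116.3975, 39.9087, 4.8)],
)
```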
Step 102, carrying out fusion processing on the traffic light information and an original map to generate a labeled map; the labeling map comprises a plurality of roads; each road includes a plurality of link information; each Road section information comprises Road id of a Road section and a plurality of lane information; each Lane information includes Lane id of the Lane.
Specifically, the traffic light information may be written into the original map according to a protocol: for example, each entry may be named by the id of the traffic light, and the longitude, latitude and height of each corner point of the lamp panel frame, together with the frame's center, may be labeled in the original map in the form of coordinates x, y and z.
Step 103, associating the Road id of the road section corresponding to the stop line with the traffic light information to generate a first association information table, and/or associating the Lane id corresponding to the stop line with the traffic light information to generate a second association information table.
Specifically, on one long Road, a plurality of pieces of Road section (Road) information may be included, each piece of Road section information including Road id of the Road section and a plurality of pieces of Lane (Lane) information, each piece of Lane information including Lane id of the Lane. In the plurality of pieces of Road section information, the Road section corresponding to the stop line (stopline) is associated with the traffic light information and the Road id of the Road section, so that one associated information table, which is called a first associated information table, can be obtained.
One Road id may be associated with one or more traffic lights, depending mainly on the nature of the intersection. Assuming the intersection is a crossroads with three lights facing the vehicle as it passes through, the Road id of the vehicle's segment is associated with all three lights. For example and without limitation, the first association information table may take the form of table 1.
(Table 1, rendered as an image in the original, maps a Road id to the ids of the traffic lights associated with it.)
TABLE 1
Assuming the vehicle is at the intersection and there are three lanes, each Lane id is associated with traffic light information; for example and without limitation, the second association information table may take the form of table 2:
(Table 2, rendered as an image in the original, maps each Lane id to the id of the traffic light associated with it.)
TABLE 2
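As a sketch, the two association information tables can be held as plain dictionaries keyed by Road id and Lane id; the ids and table contents below are hypothetical stand-ins for Tables 1 and 2:

```python
# Road id -> ids of the traffic lights associated with the stop line
# of that segment (first association information table).
first_table = {89: [1, 2, 3]}

# Lane id -> id of the traffic light governing that lane
# (second association information table).
second_table = {891: 1, 892: 2, 893: 3}

def lights_for_road(road_id):
    """Traffic light ids for a segment, or None if none are associated."""
    return first_table.get(road_id)

def light_for_lane(lane_id):
    """Traffic light id for a lane, or None if none is associated."""
    return second_table.get(lane_id)
```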
The association information tables are thus obtained by the map collection device. When the map collection device is not the autonomous vehicle itself, the labeled map and the association information tables can be obtained from a server, or received from other map collection devices.
Step 104, acquiring the current position and a first image of the vehicle during running; the first image includes an image of a traffic light captured with a first image acquisition device.
In one embodiment, when the vehicle is running, the current position can be acquired through a GPS, and a plurality of frames of images are acquired through a first image acquisition device of the vehicle.
Subsequently, the vehicle can use its own processing unit to perform noise reduction, filtering and other processing on each frame to obtain a processed image. Given the nature of the traffic intersection, when one or more traffic lights appear in a processed frame, that frame is called the first image.
In another embodiment, while the vehicle is driving, a second image acquisition device of the vehicle can be used to capture frames, which undergo the same noise reduction, filtering and other processing; when one or more traffic lights appear in a processed frame, that frame is called the second image.
Subsequently, the selection of the traffic scene can be performed by the first image and the second image.
The first image acquisition device is a long-focus camera, and the second image acquisition device is a short-focus camera.
Step 105, determining the Road id of the road section and/or the Lane id of the lane corresponding to the current position according to the current position of the vehicle and the labeled map.
Wherein the current position of the vehicle comprises the longitude and latitude of the vehicle in the world coordinate system, and the position may be embodied in the form of coordinates such as (longitude, latitude).
Specifically, in the labeled map, a road is composed of a plurality of road segments, each segment being one section of the road; while the vehicle is driving, the acquired images contain the image information of the traffic light.
According to the position of the vehicle, a query is made in the labeled map to determine the road segment corresponding to the vehicle's position, and then the Road id of that segment, or the Lane id within that segment, or both.
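The query from vehicle position to Road id can be sketched as an interval lookup. Here the labeled map is reduced to arc-length intervals along the route, an assumed simplification of the real map structure, with illustrative segment ids and lengths:

```python
# (road_id, start_s, end_s): each segment occupies an interval of arc
# length s (metres) along the route. Values are illustrative.
segments = [(88, 0.0, 350.0), (89, 350.0, 420.0), (111, 420.0, 600.0)]

def road_id_at(s):
    """Return the Road id of the segment containing arc-length position s."""
    for road_id, s0, s1 in segments:
        if s0 <= s < s1:
            return road_id
    return None
```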
And step 106, when the distance between the current position and the stop line is within a preset detection range, judging whether the Road section in front of the stop line has traffic light information or not according to the Road id of the Road section corresponding to the current position and the first associated information table, and/or according to the Lane id of the Lane of the Road section corresponding to the current position and the second associated information table.
Specifically, during the driving of the vehicle, the traffic light signal is acquired within a specified range, for example, within 100 meters.
In one embodiment, when the length of the final road segment before the stop line is less than 100 metres, the vehicle may only be able to obtain the traffic light signal within, say, 50 metres. A cross-segment (Road) look-ahead search may therefore be adopted: according to the Road id of the segment corresponding to the vehicle's current position, look up whether the Road id of the segment ahead is in the first association information table; if it is not, the vehicle continues forward, and if it is, step 107 is executed.
Alternatively, the Lane id of the segment corresponding to the vehicle's current position may be used: look up whether the Lane id of the segment ahead is in the second association information table; if it is not, the vehicle continues forward, and if it is, step 107 is executed.
In another embodiment, when the vehicle receives the traffic light signal only just before the stop line, inertia may carry it a short distance past the stop line before it stops; the vehicle is then at the intersection, but no traffic light information is annotated for the intersection's road segment. To solve this problem, the search may also look one segment backwards: according to the Road id of the segment corresponding to the vehicle's current position, look up whether the Road id of the previous segment is in the first association information table, and if it is, execute step 107.
Likewise, according to the Lane id of the segment corresponding to the vehicle's current position, look up whether the Lane id of the previous segment is in the second association information table, and if it is, execute step 107.
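The look-ahead and look-behind queries of the two embodiments above can be sketched together; the segment order and table contents here are hypothetical:

```python
ordered_roads = [88, 89, 111]    # consecutive Road ids along the route
first_table = {89: [1, 2, 3]}    # Road id -> associated traffic light ids

def query_lights(road_id):
    """Check the current segment, then one segment ahead (for short final
    segments) and one behind (the vehicle may have coasted past the
    stop line); return the associated light ids, or None."""
    i = ordered_roads.index(road_id)
    for j in (i, i + 1, i - 1):
        if 0 <= j < len(ordered_roads) and ordered_roads[j] in first_table:
            return first_table[ordered_roads[j]]
    return None
```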
Referring to fig. 3, if the Road id of the Road segment corresponding to the current position of the vehicle is 88, the Road id of the next Road segment of the current Road segment is queried, as can be seen from the figure: the next Road segment of Road id 88 is Road id 89. It is determined whether traffic light information exists in Road id 89, and as can be seen from fig. 3, Road id 89 corresponds to a stop line and traffic light information exists, so that when the vehicle is in Road id 88, the traffic light information can be acquired.
If the Road id of the road segment corresponding to the vehicle's current position is Road id 111, the first association information table is queried, and Road id 111 is not in it. At this point, the Road id of the segment preceding Road id 111 may be queried to determine whether that segment, Road id 89, has traffic light information. As can be seen from fig. 3, Road id 89 corresponds to a stop line and traffic light information exists, so traffic light information can also be acquired when the vehicle is on Road id 111.
Correspondingly, when the Lane id and the second associated information table are used to determine whether traffic light information exists, the determination is similar to the determination of whether traffic light information exists through the Road id and the first associated information table, and details are not repeated here.
Step 107, acquiring absolute position information of the lamp panel frame in the labeled map when the traffic light information exists.
Specifically, when the traffic light information exists, the absolute position information of the traffic light in the labeled map may be acquired from the first or second association information table.
Wherein, when the determination is made through the Road id and the first correlation information table, the number of the traffic lights at this time may be one or more, see fig. 4A. When the judgment is made by Lane id and the second association information table, the number of traffic lights at this time is one, see fig. 4B. When the judgment is made by combining both, the two results in fig. 4A and 4B can be obtained simultaneously.
Step 108, converting the absolute position information into pixel position information of the lamp panel frame in the first image.
Specifically, coordinate conversion can be performed using the intrinsic parameters (optical center, focal length, principal point, etc.) and extrinsic parameters (rotation matrix, translation vector, etc.) of the long-focus camera, so that the absolute position of the lamp panel frame in the labeled map is converted into its position in the image coordinate system, i.e. into pixel position information in the first image. The specific conversion method is prior art and is not described here.
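A minimal pinhole-projection sketch of this step, assuming the corner point has already been transformed into the camera frame by the extrinsics (that transform is omitted here); the intrinsic values are placeholders, not the patent's calibration:

```python
def project_to_pixel(x, y, z, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of a point given in the camera frame
    (x right, y down, z forward, metres) to pixel coordinates (u, v).
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    return fx * x / z + cx, fy * y / z + cy

# A corner 10 m ahead, 1 m to the right, 0.5 m above the optical axis.
u, v = project_to_pixel(1.0, -0.5, 10.0)  # -> (740.0, 310.0)
```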
And step 109, determining a first ROI according to the pixel position information and the image of the traffic light acquired by the first image acquisition device.
Because the labeled map contains measurement errors and the coordinate conversion introduces further error, the projected two-dimensional frame of the lamp panel frame in the first image cannot be used directly as the position frame of the traffic light; it merely lies near the light's actual position in the first image. A larger rectangle centered on the projected two-dimensional frame may therefore be selected as the first ROI.
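The ROI enlargement can be sketched as growing the projected box by a margin and clipping it to the image; the margin and image size below are illustrative:

```python
def expand_roi(box, margin, width=1280, height=720):
    """Grow a projected frame (u0, v0, u1, v1) by `margin` pixels on each
    side, clipped to the image, to absorb map and calibration errors."""
    u0, v0, u1, v1 = box
    return (max(0, u0 - margin), max(0, v0 - margin),
            min(width - 1, u1 + margin), min(height - 1, v1 + margin))
```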
Specifically, the method employs a long-focus camera and a short-focus camera. When selecting the ROI, the long-focus camera is preferred (see figs. 4A and 4B), but when the projection is out of range, the method switches to the short-focus camera.
Here, projection out-of-range means that the projected frame is not within the image range. For example, after a series of coordinate conversions, the position of the lamp panel frame yields coordinates in the first image; assuming the first image has a resolution of 1280x720, i.e. x ranges over 0-1279 and y over 0-719, the projection is out of range if the converted coordinates fall outside that range. Alternatively, a border of, say, 100 pixels may be defined, shrinking the valid range of x to 100-1179 and of y to 100-619; the border size can be set as required.
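The out-of-range test just described, with the 1280x720 resolution and the optional 100-pixel border, can be sketched as:

```python
def projection_out_of_range(box, width=1280, height=720, border=0):
    """True when the projected frame (u0, v0, u1, v1) leaves the valid
    region [border, width-1-border] x [border, height-1-border]."""
    u0, v0, u1, v1 = box
    return (u0 < border or v0 < border or
            u1 > width - 1 - border or v1 > height - 1 - border)
```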
When, during driving, multiple traffic lights, for example 3, are obtained via the first association information table, the switching strategy between the long-focus and short-focus cameras may be set as follows: switch to the short-focus camera when any one of the 3 projected frames is out of range; or switch only when all of the 3 projected frames are out of range. The specific switching strategy may be set according to actual needs, which the present application does not limit.
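The two switching policies can be sketched with `any`/`all`. The boundary test is inlined so the sketch stands alone, and the resolution and border values are the same illustrative numbers as above:

```python
def switch_to_short_focus(boxes, policy="any",
                          width=1280, height=720, border=100):
    """Decide whether to fall back to the short-focus camera: under the
    'any' policy a single out-of-range projection triggers the switch;
    under 'all', every projection must be out of range."""
    def oob(b):
        u0, v0, u1, v1 = b
        return (u0 < border or v0 < border or
                u1 > width - 1 - border or v1 > height - 1 - border)
    flags = [oob(b) for b in boxes]
    return any(flags) if policy == "any" else all(flags)
```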
Step 110, detecting and classifying the first ROI according to a deep learning algorithm to obtain the state of the traffic light corresponding to the id of the traffic light.
Specifically, a deep learning network can be trained in advance: a large number of training pictures are prepared, the traffic lights and their color states in the pictures are labeled, the pictures and their annotations are fed to the network as input, the weight parameters of the network are trained iteratively, and the trained model can then directly detect and classify traffic lights in unlabeled pictures.
There are four traffic light states: red, yellow, green and black.
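Downstream of the trained network (whose architecture the text does not specify), mapping its four class scores to these states can be sketched as follows; the score vectors are hypothetical:

```python
STATES = ("red", "yellow", "green", "black")

def classify(scores):
    """Map a detector's four class scores for one ROI to a light state.
    Stands in for the deep network's output head, which is not
    specified in the text."""
    return STATES[max(range(len(STATES)), key=lambda i: scores[i])]
```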
Similarly, when the pixel position information of the lamp panel frame in the first image is out of range, the absolute position information of the lamp panel frame is converted into pixel position information in the second image; a second ROI is determined from that pixel position information and the image of the traffic light acquired by the second image acquisition device; and the second ROI is detected and classified with the deep learning algorithm to obtain the state of the traffic light corresponding to the id of the traffic light.
Further, after step 110, the method further includes:
acquiring current speed information and acceleration information of a vehicle;
calculating the distance between the current position of the vehicle and the stop line according to the current position of the vehicle and the labeled map;
and calculating the state of the vehicle at the intersection according to the speed information, the acceleration information, the distance and the state of the traffic light.
Specifically, when the vehicle is an automatic driving vehicle, the speed information of the vehicle can be acquired through a wheel speed meter, the acceleration information of the vehicle can be acquired through an acceleration sensor, and the distance between the vehicle and a stop line can be calculated according to the current position of the vehicle and the position of the stop line.
Subsequently, the parking time and position can be decided according to the received traffic light state and the distance between the vehicle and the stop line, which solves two problems. 1. Fast, high-precision traffic light detection and classification guarantees real-time, efficient decision making: the autonomous vehicle responds quickly to color changes and starts immediately when the light turns green, with no extra delay compared with manual driving. 2. The decision also addresses the risk of the vehicle stopping over the stop line, while ensuring comfort (no sudden braking) when meeting a red light. The vehicle monitors its distance to the stop line, its driving speed and its acceleration in real time; when the green light has just turned yellow, it calculates whether it can cross the stop line within the 3 s of the yellow phase, and if so it drives straight through on the yellow; otherwise it brakes smoothly to a stop. Referring to fig. 5, when the distance to the stop line is 15 m as the light turns yellow, the acceleration is 0.5 m/s² and the driving speed is 50 km/h, the calculation gives 2.22 s to drive over the stop line, i.e. the vehicle can safely cross before the yellow light turns red and does not stop. This decision both prevents the vehicle from braking sharply or overrunning the stop line and ensures a very comfortable stopping feel.
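The yellow-light decision can be sketched under an assumed constant-acceleration model. The text does not state its kinematic model, and with the quoted numbers this simple model yields roughly 1.06 s rather than the 2.22 s quoted, so only the decision logic, not the exact figure, is reproduced here:

```python
def can_clear_stop_line(distance_m, speed_kmh, accel_ms2, yellow_s=3.0):
    """Time to cover distance_m from speed_kmh under constant
    acceleration, and whether that fits in the remaining yellow phase."""
    v0 = speed_kmh / 3.6
    if accel_ms2 == 0.0:
        t = distance_m / v0
    else:
        # Positive root of 0.5*a*t^2 + v0*t - d = 0.
        t = (-v0 + (v0 * v0 + 2.0 * accel_ms2 * distance_m) ** 0.5) / accel_ms2
    return t, t <= yellow_s

t, go = can_clear_stop_line(15.0, 50.0, 0.5)  # the figures in the text
```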
By applying the real-time traffic light identification method provided by the embodiment of the invention, only two high-definition camera sensors are needed, so the cost is low; the sensors are simple to install, and the combination of long-focus and short-focus lenses fully covers the required detection range and distance for traffic lights; the map query mode can acquire traffic light signals at any position; combining high-definition map labeling with the visual detection range improves both the accuracy and the speed of visual identification, fully meeting the real-time traffic light recognition requirements of unmanned logistics vehicles, unmanned sweepers and unmanned passenger vehicles on the market; and the reasonable start-stop decision both reduces wear on the unmanned vehicle and provides a comfortable experience for passengers.
The second embodiment of the invention provides a device comprising a memory and a processor; the memory is used for storing programs and may be connected to the processor through a bus. The memory may be a non-volatile memory, such as a hard disk drive or flash memory, in which a software program and a device driver are stored. The software program can perform the various functions of the methods provided by the embodiments of the present invention; the device drivers may be network and interface drivers. The processor is used to execute the software program, which, when executed, implements the method provided by the embodiments of the present invention.
A third embodiment of the present invention provides a computer program product including instructions which, when the computer program product is run on a computer, cause the computer to execute the method provided in the first embodiment of the present invention.
The fourth embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method provided in the first embodiment of the present invention is implemented.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A method of real-time traffic light identification, the method comprising:
acquiring traffic light information; the traffic light information comprises an identifier id of the traffic light and absolute position information of a lamp panel frame of the traffic light;
fusing the traffic light information with the original map to generate a labeled map; the labeled map comprises a plurality of roads; each road comprises a plurality of pieces of road section information; each piece of road section information includes the Road id of a road section and a plurality of pieces of lane information; each piece of lane information includes the Lane id of a lane;
associating Road id of a corresponding Road section in front of a stop line with the traffic light information to generate a first associated information table, and/or associating Lane id of a corresponding Lane in front of the stop line with the traffic light information to generate a second associated information table;
acquiring the current position and a first image of a vehicle during running; the first image comprises an image of a traffic light acquired by a first image acquisition device;
according to the current position of the vehicle and the labeling map, determining Road id of a Road section and/or Lane id of a Lane corresponding to the current position;
when the distance between the current position and the stop line is within a preset detection range, judging whether traffic light information exists for the road section in front of the stop line according to the Road id of the road section corresponding to the current position and the first associated information table, and/or according to the Lane id of the lane corresponding to the current position and the second associated information table; when the vehicle is located at an intersection, querying, according to the Road id of the road section corresponding to the current position of the vehicle, whether the Road id of the road section preceding the current road section is in the first associated information table, or whether the Lane id of the road section preceding the road section corresponding to the current position of the vehicle is in the second associated information table;
when the traffic light information exists, acquiring the absolute position information of the lamp panel frame in the labeled map;
converting the absolute position information into pixel position information of the lamp panel frame in the first image;
when the pixel position information is within the range of the first image, determining a first ROI (region of interest) according to the pixel position information and the image of the traffic light acquired by the first image acquisition device; the range of the first image is determined according to a first image resolution, or the first image resolution and a boundary;
according to a deep learning algorithm, detecting and classifying the first ROI to obtain the state of the traffic light corresponding to the id of the traffic light;
wherein the method further comprises:
acquiring a second image when acquiring the current position of the vehicle and the first image; the second image comprises an image of a traffic light acquired by a second image acquisition device;
when the pixel position information of the lamp panel frame in the first image is out of range, converting the absolute position information of the lamp panel frame into pixel position information of the lamp panel frame in the second image;
determining a second ROI according to the pixel position information of the lamp plate frame in the second image and the image of the traffic light acquired by the second image acquisition device;
and detecting and classifying the second ROI according to a deep learning algorithm to obtain the state of the traffic light corresponding to the id of the traffic light.
2. The method of claim 1, wherein the position information of the lamp panel frame of the traffic light comprises the longitude and latitude of the four corner points of the lamp panel frame of the traffic light and the height of each corner point above the ground.
3. The method of claim 1, wherein the first image acquisition device is a long-focus (telephoto) camera.
4. The method of claim 1, wherein the second image acquisition device is a short-focus camera.
5. The method of claim 1, further comprising, after the method:
acquiring current speed information and acceleration information of a vehicle;
calculating the distance between the current position of the vehicle and the stop line according to the current position of the vehicle and the labeled map;
and calculating the state of the vehicle at the intersection according to the speed information, the acceleration information, the distance and the state of the traffic light.
6. A computer program product comprising instructions for causing a computer to perform the method of any one of claims 1 to 5 when the computer program product is run on the computer.
7. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method of any one of claims 1-5.
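The conversion of the lamp panel frame's absolute position into pixel coordinates, and the in-range check against the first image resolution and a boundary (claim 1), can be sketched with a standard pinhole camera model (an assumed model; the patent does not specify the projection, and all names here are illustrative):

```python
import numpy as np

def world_to_pixel(point_world, R, t, K):
    """Project a 3D point in map/world coordinates into image pixel
    coordinates using a pinhole camera model.

    R, t : extrinsics (world->camera 3x3 rotation and 3-vector translation)
    K    : 3x3 camera intrinsic matrix
    Returns (u, v), or None if the point is behind the camera.
    """
    p_cam = R @ np.asarray(point_world, dtype=float) + t
    if p_cam[2] <= 0:          # behind the image plane: not visible
        return None
    uv = K @ (p_cam / p_cam[2])
    return uv[0], uv[1]

def in_image(uv, width, height, border=0):
    """Check whether a pixel lies inside the image, optionally shrunk by a
    border margin (cf. 'first image resolution, or ... and a boundary')."""
    if uv is None:
        return False
    u, v = uv
    return border <= u < width - border and border <= v < height - border
```

When `in_image` fails for the long-focus image, the same projection would be repeated with the second (short-focus) camera's extrinsics and intrinsics to obtain the second ROI.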
CN201910354808.2A 2019-04-29 2019-04-29 Real-time traffic light identification method Active CN109949594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910354808.2A CN109949594B (en) 2019-04-29 2019-04-29 Real-time traffic light identification method

Publications (2)

Publication Number Publication Date
CN109949594A CN109949594A (en) 2019-06-28
CN109949594B true CN109949594B (en) 2020-10-27

Family

ID=67016595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910354808.2A Active CN109949594B (en) 2019-04-29 2019-04-29 Real-time traffic light identification method

Country Status (1)

Country Link
CN (1) CN109949594B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543814B (en) * 2019-07-22 2022-05-10 华为技术有限公司 Traffic light identification method and device
CN112880692B (en) * 2019-11-29 2024-03-22 北京市商汤科技开发有限公司 Map data labeling method and device and storage medium
CN112991791B (en) * 2019-12-13 2022-07-26 上海商汤临港智能科技有限公司 Traffic information identification and intelligent driving method, device, equipment and storage medium
CN112639813A (en) * 2020-02-21 2021-04-09 华为技术有限公司 Automatic driving control method, information processing method, device and system
CN111444810A (en) * 2020-03-23 2020-07-24 东软睿驰汽车技术(沈阳)有限公司 Traffic light information identification method, device, equipment and storage medium
CN111582189B (en) * 2020-05-11 2023-06-23 腾讯科技(深圳)有限公司 Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle
CN112183382A (en) * 2020-09-30 2021-01-05 深兰人工智能(深圳)有限公司 Unmanned traffic light detection and classification method and device
JP7287373B2 (en) * 2020-10-06 2023-06-06 トヨタ自動車株式会社 MAP GENERATION DEVICE, MAP GENERATION METHOD AND MAP GENERATION COMPUTER PROGRAM
CN112327855A (en) * 2020-11-11 2021-02-05 东软睿驰汽车技术(沈阳)有限公司 Control method and device for automatic driving vehicle and electronic equipment
CN112614365B (en) * 2020-12-14 2022-07-15 北京三快在线科技有限公司 Electronic map processing method and device
CN112991290B (en) * 2021-03-10 2023-12-05 阿波罗智联(北京)科技有限公司 Image stabilizing method and device, road side equipment and cloud control platform
CN113178079B (en) * 2021-04-06 2022-08-23 青岛以萨数据技术有限公司 Marking system, method and storage medium for signal lamp and lane line
CN113177522A (en) * 2021-05-24 2021-07-27 的卢技术有限公司 Traffic light detection and identification method used in automatic driving scene
CN114299716B (en) * 2021-12-27 2023-04-25 北京世纪高通科技有限公司 Method, device, storage medium and equipment for associating time information of signal lamps
WO2023197215A1 (en) * 2022-04-13 2023-10-19 北京小米移动软件有限公司 Information transmission method and apparatus, and storage medium
CN115098606B (en) * 2022-05-30 2023-06-16 九识智行(北京)科技有限公司 Traffic light query method and device for unmanned vehicle, storage medium and equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103680177A (en) * 2013-12-03 2014-03-26 上海交通大学 Intelligent vehicle speed prompting driving system based on mobile phone
CN105930819A (en) * 2016-05-06 2016-09-07 西安交通大学 System for real-time identifying urban traffic lights based on single eye vision and GPS integrated navigation system
CN106504554A (en) * 2016-09-30 2017-03-15 乐视控股(北京)有限公司 The method and device of identification traffic light status information
CN107618510A (en) * 2016-07-13 2018-01-23 罗伯特·博世有限公司 For the method and apparatus at least one driving parameters for changing vehicle during traveling
CN108305475A (en) * 2017-03-06 2018-07-20 腾讯科技(深圳)有限公司 A kind of traffic lights recognition methods and device
CN108706009A (en) * 2017-03-31 2018-10-26 株式会社斯巴鲁 The drive-control system of vehicle
CN109492507A (en) * 2017-09-12 2019-03-19 百度在线网络技术(北京)有限公司 The recognition methods and device of the traffic light status, computer equipment and readable medium

Similar Documents

Publication Publication Date Title
CN109949594B (en) Real-time traffic light identification method
CN108305475B (en) Traffic light identification method and device
US11657604B2 (en) Systems and methods for estimating future paths
CN113781808B (en) Method and system for passing of internet-connected automatic driving vehicle at traffic light intersection
CN111291676B (en) Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN111695546B (en) Traffic signal lamp identification method and device for unmanned vehicle
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
EP3647734A1 (en) Automatic generation of dimensionally reduced maps and spatiotemporal localization for navigation of a vehicle
WO2021057134A1 (en) Scenario identification method and computing device
CN104217615A (en) System and method for preventing pedestrians from collision based on vehicle-road cooperation
JP2002083297A (en) Object recognition method and object recognition device
WO2015129175A1 (en) Automated driving device
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
CN110751693B (en) Method, apparatus, device and storage medium for camera calibration
CN113029187A (en) Lane-level navigation method and system fusing ADAS fine perception data
CN117111085A (en) Automatic driving automobile road cloud fusion sensing method
JP4848644B2 (en) Obstacle recognition system
JP2018073275A (en) Image recognition device
CN114495066A (en) Method for assisting backing
CN116783462A (en) Performance test method of automatic driving system
CN112298211A (en) Automatic pedestrian yielding driving scheme based on 5G grading decision
CN111210411A (en) Detection method of vanishing points in image, detection model training method and electronic equipment
CN116892949A (en) Ground object detection device, ground object detection method, and computer program for ground object detection
CN113611008B (en) Vehicle driving scene acquisition method, device, equipment and medium
CN111174796B (en) Navigation method based on single vanishing point, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee after: Beijing Idriverplus Technology Co.,Ltd.

Address before: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee before: Beijing Idriverplus Technology Co.,Ltd.