CN112699773A - Traffic light identification method and device and electronic equipment - Google Patents
- Publication number
- CN112699773A CN112699773A CN202011577703.2A CN202011577703A CN112699773A CN 112699773 A CN112699773 A CN 112699773A CN 202011577703 A CN202011577703 A CN 202011577703A CN 112699773 A CN112699773 A CN 112699773A
- Authority
- CN
- China
- Prior art keywords
- traffic light
- traffic
- light group
- traffic lights
- lights
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Traffic Control Systems (AREA)
Abstract
The application discloses a traffic light identification method and device and electronic equipment, relating to artificial intelligence fields such as automatic driving and intelligent transportation. The specific implementation scheme is as follows: acquiring a target image collected while a vehicle is running, wherein the target image comprises image data of M traffic lights; identifying attribute information of the M traffic lights based on the image data of the M traffic lights; clustering the M traffic lights according to distance information among the M traffic lights to obtain N traffic light groups, wherein the distance information is determined based on the image data of the M traffic lights, and N is a positive integer less than or equal to M; and determining a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights, wherein the first traffic light group is the traffic light group corresponding to the vehicle driving route. The technology of the application solves the problem of low recognition efficiency in traffic light recognition and improves the recognition efficiency of traffic lights.
Description
Technical Field
The application relates to the technical field of artificial intelligence, in particular to the technical fields of automatic driving and intelligent transportation, and specifically to a traffic light identification method, a traffic light identification device, and electronic equipment.
Background
With the progress of society and the development of the economy, traffic light identification technology is widely used. Recognizing traffic light information can enhance the driver's perception of the road and strengthen the driver's judgment and early-warning awareness of the road conditions ahead on the driving route.
At present, traffic light identification usually relies on auxiliary information, such as a global positioning system or a map, to identify traffic light information.
Disclosure of Invention
The disclosure provides a traffic light identification method and device and electronic equipment.
According to a first aspect of the present disclosure, there is provided a traffic light identification method, comprising:
acquiring a target image acquired in the running process of a vehicle, wherein the target image comprises image data of M traffic lights, and M is a positive integer;
identifying attribute information of the M traffic lights based on the image data of the M traffic lights;
clustering the M traffic lights according to distance information among the M traffic lights to obtain N traffic light groups, wherein the distance information is determined based on the image data of the M traffic lights, and N is a positive integer less than or equal to M;
and determining a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights, wherein the first traffic light group is a traffic light group corresponding to a vehicle driving route.
According to a second aspect of the present disclosure, there is provided a traffic light identification device comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target image acquired in the running process of a vehicle, the target image comprises image data of M traffic lights, and M is a positive integer;
the identification module is used for identifying the attribute information of the M traffic lights based on the image data of the M traffic lights;
the clustering module is used for clustering the M traffic lights according to distance information among the M traffic lights to obtain N traffic light groups, wherein the distance information is determined based on the image data of the M traffic lights, and N is a positive integer less than or equal to M;
and the determining module is used for determining a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights, wherein the first traffic light group is a traffic light group corresponding to a vehicle driving route.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform any one of the methods of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product which, when run on an electronic device, causes the electronic device to perform any one of the methods of the first aspect.
The technology of the application solves the problem of low recognition efficiency in traffic light recognition and improves the recognition efficiency of traffic lights.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic flow chart diagram of a traffic light identification method according to a first embodiment of the present application;
FIG. 2 is a schematic illustration of the detection and correction of an image to be recognized;
FIG. 3 is a flow chart illustrating a specific example of a traffic light identification method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a traffic light identification device according to a second embodiment of the present application;
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
First embodiment
As shown in fig. 1, the present application provides a traffic light identification method, comprising the steps of:
step S101: the method comprises the steps of obtaining a target image collected in the running process of a vehicle, wherein the target image comprises image data of M traffic lights, and M is a positive integer.
In this embodiment, the traffic light identification method relates to artificial intelligence fields such as automatic driving and intelligent transportation, and can be widely applied to various scenes, such as prompting the driver when the light turns green while waiting at a stop, early warning to decelerate and stop when a green light changes to red, and early warning against running a red light at excessive speed.
In practical use, the traffic light identification method according to the embodiment of the present application may be executed by the traffic light identification device according to the embodiment of the present application. The traffic light identification device of the embodiment of the application can be configured in the electronic equipment to execute the traffic light identification method of the embodiment of the application. The electronic device may be an in-vehicle device.
The traffic light recognition device may include a camera. The target image may be an image to be recognized collected by the camera during the driving of the vehicle, or it may be an image obtained by processing such an image to be recognized; this is not specifically limited herein.
The target image may include image data of M traffic lights, and the image data of the M traffic lights may be data in the form of block areas. That is, owing to the color and shape attributes of traffic lights, the image data of a block area that is consistent in color and has a certain shape may be determined to be one traffic light.
The target image may be acquired in various manners, for example, an image to be recognized including image data of a traffic light collected by a camera in the traffic light recognition apparatus may be used as the target image. For another example, the image to be recognized may be obtained, and the area of the image to be recognized, which includes the traffic light, may be intercepted to obtain the target image.
Step S102: identifying attribute information of the M traffic lights based on the image data of the M traffic lights.
In this step, the attribute information of a traffic light may include three types: color attribute information, shape attribute information, and orientation attribute information. The color attribute information may include no light (the traffic light may be broken or not lit), red, green, and yellow; the shape attribute information may include a disc, a countdown number, straight ahead, left turn, right turn, U-turn, pedestrian, and non-motor vehicle; and the orientation attribute information may include front, side, and back.
Attribute information of the M traffic lights may be identified using a deep learning model, such as a neural network, based on the image data of the M traffic lights. Specifically, the target image may be input to a neural network, and the neural network may classify the colors, shapes, and orientations of the traffic lights based on the image data of the M traffic lights marked in the target image, using the geometric features of traffic lights (for example, the shape of a traffic light is generally a disc, an arrow, or a pattern indicating pedestrians or vehicle lanes), to identify the attribute information of the M traffic lights.
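The patent leaves the classifier unspecified. As an illustration only, the color attribute alone can be approximated without a neural network by a dominant-channel heuristic over one traffic light's block area; the thresholds and the function name below are hypothetical, not from the application:

```python
# Hypothetical sketch: the application uses a neural network to predict color,
# shape and orientation; here only the color attribute is approximated with a
# dominant-channel heuristic over the pixels of one block area.
COLORS = ("no_light", "red", "green", "yellow")

def classify_color(pixels):
    """pixels: list of (r, g, b) tuples from one traffic light's block area."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    if max(r, g, b) < 60:                # too dark: light is off or broken
        return "no_light"
    if r > 1.5 * g and r > 1.5 * b:
        return "red"
    if g > 1.2 * r:                      # green lights often carry some blue
        return "green"
    if r > 1.2 * b and g > 1.2 * b:      # red and green both high -> yellow
        return "yellow"
    return "no_light"
```

A real implementation would replace this heuristic with the trained model's color head; the sketch only shows what the color attribute decision consumes and produces.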
Step S103: and clustering the M traffic lights according to the distance information among the M traffic lights to obtain N traffic light groups, wherein the distance information is determined based on the image data of the M traffic lights, and N is a positive integer less than or equal to M.
In this step, since the image data of each traffic light is a block area, that is, the image data of the M traffic lights can be divided into M block areas, the distance information between the M traffic lights can be determined by determining the distance information between the M block areas in the target image. The distance information between two block areas represents the distance between the two corresponding traffic lights: the larger the distance represented by the distance information between two block areas, the farther apart the two corresponding traffic lights are.
The M traffic lights can then be clustered according to the distance information among them; the clustering principle may be that traffic lights close to each other are clustered together, while traffic lights far from each other are separated into different groups.
When clustering is performed, a distance threshold value can be set: two traffic lights whose distance is within the distance threshold are clustered together to form a traffic light group, and two traffic lights whose distance exceeds the distance threshold are separated, so that N traffic light groups are finally generated. As shown in fig. 2, the target image includes three traffic lights: a traffic light 201, a traffic light 202, and a traffic light 203. Because the distance between the traffic light 201 and the traffic light 202 is relatively short, they may be clustered together to generate a traffic light group 204; and because the traffic light 203 is relatively far away from both the traffic light 201 and the traffic light 202, the traffic light 203 may be clustered into a group by itself to generate another traffic light group.
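The threshold-based grouping described above can be sketched as a greedy single-link clustering over the centers of the block areas. This is a minimal illustration: the actual clustering algorithm and the `dist_thresh` value are not specified by the application.

```python
def cluster_lights(centers, dist_thresh):
    """Greedy single-link clustering of traffic light box centers (x, y).
    Lights closer than dist_thresh end up in the same group."""
    groups = []  # each group is a list of indices into centers
    for i, (x, y) in enumerate(centers):
        merged = None
        for g in groups:
            if any((x - centers[j][0]) ** 2 + (y - centers[j][1]) ** 2
                   <= dist_thresh ** 2 for j in g):
                if merged is None:
                    g.append(i)       # join the first nearby group
                    merged = g
                else:                 # i links two groups: merge them
                    merged.extend(g)
                    g.clear()
        groups = [g for g in groups if g]
        if merged is None:
            groups.append([i])        # no nearby group: start a new one
    return groups
```

For the fig. 2 situation (two lights close together, one far away), this yields exactly two groups, matching the N traffic light groups of step S103.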
Step S104: and determining a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights, wherein the first traffic light group is a traffic light group corresponding to a vehicle driving route.
In this step, the first traffic light group may be the traffic light group corresponding to the driving route of the vehicle, that is, the traffic light group on the driving route that the driver should pay attention to: it includes the traffic lights indicating how the driver should drive on the route. For example, the traffic light group that the driver usually needs to pay attention to on the driving route is the group whose orientation attribute information is front; that is, the orientation attribute information of the first traffic light group is front.
A first traffic light group may be determined from the N traffic light groups based on the attribute information of the M traffic lights. And there may be multiple ways to determine the first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights.
For example, on the driving route of the vehicle the driver usually does not pay attention to traffic lights whose orientation attribute information is side or back, to traffic lights whose color attribute information is no light, or to traffic lights whose shape attribute information is pedestrian or non-motor vehicle. Therefore, the traffic lights among the M traffic lights that the driver does not need to pay attention to can be determined based on the attribute information of the M traffic lights, and the traffic light groups including such traffic lights can be filtered from the N traffic light groups.
Then, the first traffic light group may be selected from the remaining traffic light groups according to a preset rule. For example, when a traffic intersection is complicated, the driver usually needs to pay attention to the traffic light group containing a relatively large number of traffic lights; in this case, the group with the largest number of traffic lights may be selected as the first traffic light group. For another example, the traffic light group that the driver needs to pay attention to may be located relatively high above the ground; in this case, the group whose traffic lights are highest above the ground may be selected as the first traffic light group.
For example, when a traffic intersection is complicated, the N traffic light groups may be sorted in descending order of the number of traffic lights they contain, with the larger groups ranked first. For another example, the N traffic light groups may be sorted according to the orientation of their traffic lights, with front-facing groups ranked before side- and back-facing groups. For yet another example, the N traffic light groups may be sorted in descending order of the height of their traffic lights above the ground. Alternatively, the traffic light groups may be ranked comprehensively by number, orientation, and height above the ground. The first traffic light group may then be determined as the top-ranked traffic light group.
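As an illustrative sketch of the comprehensive sorting, the groups can be ranked lexicographically by number, orientation, and height. The precedence of the three dimensions and the group representation (a dict with `count`, `orientation`, and `height` fields) are assumptions; the application leaves both open.

```python
def pick_first_group(groups):
    """Pick the top-ranked traffic light group.
    Each group is a dict with 'count', 'orientation' and 'height' keys
    (hypothetical fields). Ranking: more lights first, front-facing before
    side/back, higher above the ground first; a smaller key tuple wins."""
    def key(g):
        return (-g["count"],
                0 if g["orientation"] == "front" else 1,
                -g["height"])
    return min(groups, key=key)
```

Lexicographic tuples are one simple way to combine the dimensions; the patent's weighted comprehensive ranking priority (described later) is an alternative when the dimensions disagree.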
In this embodiment, the attribute information of the M traffic lights in the target image is recognized, the M traffic lights are clustered according to the distance information between them to obtain N traffic light groups, and a first traffic light group is determined from the N traffic light groups based on the attribute information of the M traffic lights, where the first traffic light group is the traffic light group that determines whether the driver moves forward or stops on the vehicle driving route. Therefore, a purely visual scheme can be adopted, without auxiliary information such as a global positioning system or a map, to identify the target traffic light group that the driver needs to pay attention to on the driving route, so that the recognition efficiency of traffic lights can be improved.
In addition, the dependence on global positioning system information, map information, roadside transmission signals, and the like can be eliminated; the method is effective for conventional traffic lights on urban roads and is widely applicable. Moreover, owing to the characteristics of a purely visual scheme, the method is efficient, fast, and low in cost.
Optionally, the step S104 specifically includes:
filtering a second traffic light group from the N traffic light groups based on the attribute information of the M traffic lights to obtain P traffic light groups, wherein the second traffic light group is a traffic light group irrelevant to a vehicle driving route, and P is a positive integer less than or equal to N;
a first traffic light group is determined from the P traffic light groups.
In this embodiment, in order to improve processing efficiency and avoid the influence of irrelevant traffic light groups on the final result, a second traffic light group may be filtered from the N traffic light groups based on the attribute information of the M traffic lights, where the second traffic light group is a traffic light group unrelated to the vehicle driving route. For example, a traffic light group including traffic lights whose shape attribute information is pedestrian or non-motor vehicle generally indicates pedestrians and non-motor vehicles, so the driver does not need to pay attention to it and it can be filtered out. For another example, a traffic light group including traffic lights whose orientation attribute information is side or back generally indicates side or oncoming vehicles, so the driver does not need to pay attention to it either, and it can also be filtered out.
Thereafter, a first traffic light group may be determined from the P traffic light groups. The manner of determining the first traffic light group from the P traffic light groups is similar to the manner of determining the first traffic light group from the N traffic light groups described above, and is not repeated here.
In this embodiment, the second traffic light group is filtered from the N traffic light groups based on the attribute information of the M traffic lights to obtain P traffic light groups, and the first traffic light group is determined from the P traffic light groups, so that the processing efficiency of determining the first traffic light group can be improved.
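A minimal sketch of the filtering step, assuming each traffic light group is represented as a list of per-light attribute dicts (the field names and the rule that a group is dropped only when *all* of its lights are irrelevant are hypothetical readings of the text):

```python
# Attribute values the driver ignores, per the description: pedestrian and
# non-motor-vehicle shapes, side- and back-facing orientations.
IGNORED_SHAPES = {"pedestrian", "non_motor_vehicle"}
IGNORED_ORIENTATIONS = {"side", "back"}

def filter_groups(groups):
    """Drop second traffic light groups (those unrelated to the driving
    route); keep any group containing at least one relevant light."""
    kept = []
    for lights in groups:
        if all(light["orientation"] in IGNORED_ORIENTATIONS or
               light["shape"] in IGNORED_SHAPES
               for light in lights):
            continue  # every light in this group is irrelevant: filter it
        kept.append(lights)
    return kept
```

The surviving list plays the role of the P traffic light groups from which the first traffic light group is then chosen.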
Optionally, the determining a first traffic light group from the P traffic light groups includes:
for each traffic light group in the P traffic light groups, acquiring L sequencing priorities of the traffic light groups corresponding to L dimensions respectively, wherein L is a positive integer;
determining the comprehensive sequencing priority of a target traffic light group based on L sequencing priorities of the target traffic light group, wherein the target traffic light group is any one of the P traffic light groups;
and determining the traffic light group with the highest comprehensive ranking priority in the P traffic light groups as the first traffic light group.
In this embodiment, L may be a positive integer; hereinafter, L = 3 is used as an example for detailed description.
The L dimensions may include the number of traffic lights, their orientation, and their height above the ground. For each of the P traffic light groups, the ranking priority corresponding to each dimension may be obtained, finally yielding L ranking priorities for each group. For example, traffic light group A may obtain one ranking priority in the number dimension, a ranking priority of level 1 in the orientation dimension, and a ranking priority of level 2 in the height dimension.
The comprehensive ranking priority of the target traffic light group may be determined based on its L ranking priorities, where the target traffic light group may be any one of the P traffic light groups. Specifically, when the L ranking priorities are all equal, the comprehensive ranking priority of the target traffic light group may be that common ranking priority; when they are not all equal, the comprehensive ranking priority may be determined based on the L ranking priorities and their weights.
In addition, the P traffic light groups can also be sorted based on the distance between the traffic lights in each group and the vehicle, yielding a ranking priority for this dimension. The farther the traffic lights in a group are from the vehicle, the lower the group's ranking priority in this dimension, and vice versa. That is, on the driving route the driver usually needs to pay attention to the nearer traffic lights first and the farther ones later. The ranking priority of this dimension can then be combined with the ranking priorities of the other 3 dimensions to determine the comprehensive ranking priority of each traffic light group.
Then, the traffic light group with the highest comprehensive ranking priority in the P traffic light groups can be determined as the first traffic light group.
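The comprehensive ranking priority rule described above, equal per-dimension priorities pass through unchanged while unequal ones are combined using weights, can be sketched as follows. The weighted-average combination is one possible reading; the application does not fix the combination formula or the weight values.

```python
def composite_priority(priorities, weights=None):
    """priorities: one ranking priority per dimension (lower level = better),
    e.g. [2, 1, 2]. If all L priorities are equal, that value is the
    comprehensive priority; otherwise a weighted average is used (the
    averaging rule and default equal weights are assumptions)."""
    if len(set(priorities)) == 1:
        return float(priorities[0])
    if weights is None:
        weights = [1.0] * len(priorities)
    return sum(p * w for p, w in zip(priorities, weights)) / sum(weights)
```

The group with the smallest (i.e. highest-priority) composite value would then be chosen as the first traffic light group.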
In this embodiment, for each of the P traffic light groups, L ranking priorities of the traffic light groups corresponding to L dimensions are obtained; determining the comprehensive sequencing priority of a target traffic light group based on the L sequencing priorities of the target traffic light group; and determining the traffic light group with the highest comprehensive ranking priority in the P traffic light groups as the first traffic light group. In this way, the aggregate ranking priority of the traffic light groups may be evaluated from one or more dimensions to determine a first traffic light group from the P traffic light groups based on the aggregate ranking priority, which may improve the identification accuracy of the first traffic light group.
Optionally, step S101 specifically includes:
acquiring an image to be identified, which is acquired by a camera in the driving process of a vehicle, wherein the image to be identified comprises image data of a traffic light;
and intercepting the area including the traffic lights in the image to be identified to obtain the target image.
In this embodiment, the target image may be an image obtained by processing an image to be recognized collected by a camera in the traffic light recognition device during the driving process of the vehicle.
Generally, considering the actual situation of the driver while the vehicle is being driven, the driver needs to be prompted in advance based on the recognition result of a traffic light; therefore, traffic light recognition needs to start when the vehicle is still at a certain distance from the traffic intersection. In this application scenario the traffic light is far from the vehicle, so in the collected image to be recognized the image data of the traffic light is generally distributed in a certain area, and this area is generally not located at a relatively high position in the image to be recognized.
Meanwhile, for the driver, a traffic light that needs attention does not appear at a position corresponding to a very short distance: when a traffic light is that close, there is no longer any need to give a driving prompt based on it, or the light belongs to another direction of another intersection and is not a traffic light that needs attention on the vehicle driving route.
Based on the position distribution characteristics of the traffic lights which need to be paid attention to by the driver in the image to be recognized, the image to be recognized collected by the camera is not used as the input of the traffic light recognition, but the area of the image to be recognized, which comprises the traffic lights, is intercepted, and the target image is obtained and used as the input of the traffic light recognition. As shown in fig. 2, an area of the image to be recognized including the traffic light is an area 205, and an image obtained by intercepting the area 205 is the target image.
In this embodiment, based on the position distribution characteristics of the traffic lights that the driver needs to pay attention to in the image to be recognized, the area including the traffic lights is intercepted from the image to be recognized collected by the camera during driving, yielding the target image. Because the amount of image data is reduced, the deep learning model used to identify the traffic lights based on the target image can be greatly compressed, so the traffic lights can be identified efficiently and quickly.
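The interception of the traffic light area can be sketched as keeping a fixed horizontal band of the image to be recognized. The band fractions below are illustrative assumptions, not values from the application; in practice the region 205 of fig. 2 would be chosen from the position distribution characteristics described above.

```python
def crop_roi(image, top_frac=0.25, bottom_frac=0.60):
    """image: a list of pixel rows. Keep only the horizontal band where
    distant traffic lights typically appear; the fractions are illustrative
    assumptions. round() avoids float-index surprises at band edges."""
    h = len(image)
    return image[round(h * top_frac):round(h * bottom_frac)]
```

The cropped band, rather than the full frame, then serves as the target image input to detection and attribute recognition.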
Optionally, before the step S102, the method further includes:
detecting image data of traffic lights in the target image to obtain image data of Q traffic lights in the target image, wherein Q is a positive integer less than or equal to M;
and correcting the image data of the Q traffic lights in the target image based on the cluster characteristics of the traffic lights on the vehicle driving route, to obtain the image data of the M traffic lights in the target image.
In this embodiment, since the method is a pure visual scheme, after the target image is acquired and before the attribute information of the M traffic lights is identified based on the image data of the M traffic lights, the image data of the M traffic lights in the target image needs to be detected.
Specifically, a deep learning model such as a neural network may be used to detect the image data of the traffic lights in the target image, so as to obtain image data of Q traffic lights in the target image, where Q is a positive integer less than or equal to M.
In the detection stage, it may not always be possible to completely detect the image data of all M traffic lights in the target image; therefore, this stage may be regarded as a coarse positioning stage for the image data of the traffic lights in the target image.
In the fine positioning stage, the image data of the Q traffic lights in the target image can be corrected based on the cluster characteristics of the traffic lights on the driving route of the vehicle, so as to obtain the image data of the M traffic lights in the target image.
Taking fig. 2 as an example, in the detection stage only the image data of the traffic light 201 and the traffic light 203 may be detected through the deep learning model. Based on the cluster characteristics of the traffic lights on the vehicle driving route (for example, at a traffic intersection of two lanes, two traffic lights are usually provided on one traffic light fixture for driving prompts), the image data of the traffic light 202 may be recovered in the correction stage, finally yielding the image data of the traffic light 201, the traffic light 202, and the traffic light 203.
In the correction stage, the image data of the Q traffic lights in the target image can be corrected based on a deep learning model such as a neural network, the clustering characteristics of the traffic light groups on the vehicle driving route, and the image data of the Q traffic lights obtained in the detection stage, finally yielding the image data of the M traffic lights in the target image.
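The correction idea described above can be illustrated with a minimal sketch: a coarse detector returns Q light positions, and a correction step uses an assumed group layout prior (an expected number of lights per signal device with a known spacing) to infer positions the detector missed. The function name, the spacing heuristic, and the data layout are all assumptions for illustration, not the patent's actual implementation.

```python
def correct_detections(boxes, expected_per_group, gap):
    """Fill in missed lights along a horizontal signal device.

    boxes: (x, y) centre coordinates of detected lights, assumed to lie
           on one device.
    expected_per_group: number of lights the device is expected to hold
           (the assumed clustering characteristic).
    gap: assumed horizontal spacing between adjacent lights.
    """
    corrected = sorted(boxes)
    i = 0
    while len(corrected) < expected_per_group and i < len(corrected) - 1:
        (x1, y1), (x2, y2) = corrected[i], corrected[i + 1]
        if x2 - x1 > 1.5 * gap:  # a light was likely missed in between
            corrected.insert(i + 1, (x1 + gap, (y1 + y2) / 2))
        i += 1
    return corrected

# A device expected to carry three lights, of which the middle was missed:
detected = [(100, 50), (180, 50)]
print(correct_detections(detected, expected_per_group=3, gap=40))
# → [(100, 50), (140, 50.0), (180, 50)]
```

In practice the patent attributes this step to a deep learning model; the geometric rule here only demonstrates how a group prior can recover a missed light such as traffic light 202 in fig. 2.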
In this embodiment, the image data of the traffic lights in the target image is detected in a detection stage to obtain the image data of Q traffic lights, where Q is a positive integer less than or equal to M, and the image data of the Q traffic lights is then corrected in a correction stage based on the clustering characteristics of the traffic light groups on the vehicle driving route to obtain the image data of the M traffic lights. Detecting the image data of the M traffic lights in two stages improves both the accuracy and the completeness of detection, and using the clustering characteristics of the traffic light groups on the driving route to correct the image data improves the detection efficiency.
Optionally, after step S104, the method further includes:
tracking the traffic light state of the traffic lights in the first traffic light group in the target image to obtain a state change result of the traffic lights in the first traffic light group;
and carrying out driving prompt based on the state change result of the traffic lights in the first traffic light group.
In this embodiment, after the first traffic light group is determined, a purely visual scheme may be adopted for efficiency to continuously track the first traffic light group, so as to determine the state change result of its traffic lights and prompt the driver accordingly.
A deep learning model such as a neural network can be used to track the traffic lights in each frame of target image collected while the vehicle is driving and to identify their traffic light states; the state change result of the traffic lights in the first traffic light group can then be determined from their state information in consecutive frames of target images. The traffic light state of a traffic light may refer to its color attribute information.
For example, the first traffic light group includes two traffic lights: one indicating a left turn and one indicating straight travel. In the previously collected target image, the color attribute information of the two traffic lights is red and green, respectively; in the currently collected target image, it is red and yellow, respectively. The color attribute information of the traffic light indicating straight travel has therefore changed, and its state change result is a change from green to yellow.
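The frame-to-frame comparison in this example can be sketched as follows. The per-light colour dictionary is an assumed data layout, and the light identifiers are placeholders; the sketch only shows the comparison of colour attribute information between consecutive frames.

```python
def state_changes(prev_states, curr_states):
    """Compare colour attributes of the first traffic light group between
    the previous and current frame.

    Returns {light_id: (old_colour, new_colour)} for lights whose colour
    attribute information changed.
    """
    return {
        light: (prev_states[light], colour)
        for light, colour in curr_states.items()
        if light in prev_states and prev_states[light] != colour
    }

# The example from the text: left-turn light stays red, the straight-ahead
# light changes from green to yellow.
prev = {"left_turn": "red", "straight": "green"}
curr = {"left_turn": "red", "straight": "yellow"}
print(state_changes(prev, curr))  # → {'straight': ('green', 'yellow')}
```

A driving prompt would then be issued for each light appearing in the returned mapping.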
In this embodiment, the traffic lights in the first traffic light group, which determine whether to proceed or stop and which the driver should focus on along the vehicle driving route, are accurately identified and tracked, so that their state change result can be obtained and a driving prompt given on that basis. This removes the dependence on global positioning system information, map information, roadside transmission signals, and the like, strengthens the driver's judgment and early-warning awareness of the road conditions ahead on the vehicle driving route, and improves driving safety.
In order to more clearly illustrate the embodiments of the present application, the following describes in detail the overall process of the traffic light identification method of the embodiments of the present application.
Fig. 3 is a schematic flow chart of a specific example of the traffic light identification method in an embodiment of the present application. As shown in fig. 3, the entire flow framework of the traffic light identification method includes five stages: a detection stage, a correction stage, a tracking stage, an identification stage, and a clustering stage, where the identification stage may include a first identification stage and a second identification stage. The specific process is as follows:
An image to be identified is acquired at a certain frequency, and the area including the traffic lights in the image to be identified is intercepted to obtain the target image.
In the detection stage, the image data of the traffic lights in the target image is detected to achieve coarse positioning; in the correction stage, the detection result is locally fine-tuned to achieve fine positioning, finally obtaining the image data of the M traffic lights in the target image.
In the detection stage and the correction stage, a deep learning model can be used to detect and correct the image data of the traffic lights in the target image; for efficiency, the detection and correction may be performed only once every fixed number of acquired frames.
Then, in the tracking stage, for each frame of acquired target image, the traffic light in the target image can be tracked.
In the first identification phase, the detected and tracked traffic light is identified, and the attribute information of the traffic light is determined.
In the clustering stage, the M traffic lights can be clustered according to the distance information among the M traffic lights to obtain N traffic light groups.
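The clustering stage can be sketched as grouping lights whose image-plane distance falls below a threshold, so that M lights yield N ≤ M groups. The patent does not fix a particular clustering algorithm; the greedy single-link pass and the distance threshold below are illustrative assumptions.

```python
import math

def cluster_lights(centers, max_dist):
    """Greedy single-link clustering of light centre points: a light joins
    the first group containing any point within max_dist, else starts a
    new group."""
    groups = []
    for c in centers:
        placed = False
        for g in groups:
            if any(math.dist(c, p) <= max_dist for p in g):
                g.append(c)
                placed = True
                break
        if not placed:
            groups.append([c])
    return groups

# Three lights on one device plus one distant light -> two groups (M=4, N=2)
lights = [(100, 50), (140, 50), (180, 50), (600, 80)]
print(len(cluster_lights(lights, max_dist=60)))  # → 2
```

The distance information here is computed directly from image coordinates, matching the statement that it is determined based on the image data of the M traffic lights.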
In a second identification phase, a first traffic light group is identified from the N traffic light groups.
Then, whether the state of the traffic lights in the first traffic light group has changed can be judged, and a prompt given, based on their previous and current states.
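The five-stage flow summarized above can be sketched as a pipeline in which the expensive detection and correction run only periodically while tracking, identification, clustering, and group selection run on every frame. Each stage is a pluggable function; the trivial stubs in the usage example only demonstrate the control flow and are placeholders, not the patent's models.

```python
def run_pipeline(frames, detect, correct, track, identify, cluster, select,
                 detect_every=3):
    """Process acquired frames through the five stages of fig. 3."""
    lights, results = [], []
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            # detection stage (coarse) + correction stage (fine), periodic
            lights = correct(detect(frame))
        lights = track(frame, lights)            # tracking stage, per frame
        attrs = identify(lights)                 # first identification stage
        groups = cluster(lights)                 # clustering stage
        results.append(select(groups, attrs))    # second identification stage
    return results

# Trivial stubs showing the call order only:
out = run_pipeline(
    frames=[0, 1, 2],
    detect=lambda f: ["light"],
    correct=lambda ls: ls,
    track=lambda f, ls: ls,
    identify=lambda ls: {"light": "green"},
    cluster=lambda ls: [ls],
    select=lambda gs, a: gs[0],
)
print(out)  # → [['light'], ['light'], ['light']]
```

With `detect_every=3`, detection and correction run only on frame 0 here, while the tracked lights persist across frames 1 and 2, mirroring the efficiency goal described for the detection and correction stages.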
Second embodiment
As shown in fig. 4, the present application provides a traffic light identification device 400, comprising:
an obtaining module 401, configured to obtain a target image acquired during a vehicle driving process, where the target image includes image data of M traffic lights, and M is a positive integer;
an identifying module 402, configured to identify attribute information of the M traffic lights based on the image data of the M traffic lights;
a clustering module 403, configured to cluster the M traffic lights according to distance information between the M traffic lights to obtain N traffic light groups, where the distance information is determined based on image data of the M traffic lights, and N is a positive integer smaller than or equal to M;
a determining module 404, configured to determine a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights, where the first traffic light group is a traffic light group corresponding to a vehicle driving route.
Optionally, the determining module 404 includes:
the filtering unit is used for filtering a second traffic light group from the N traffic light groups based on the attribute information of the M traffic lights to obtain P traffic light groups, wherein the second traffic light group is a traffic light group irrelevant to a vehicle driving route, and P is a positive integer less than or equal to N;
and the determining unit is used for determining a first traffic light group from the P traffic light groups.
Optionally, the determining unit is specifically configured to, for each of the P traffic light groups, obtain L sequencing priorities of the traffic light groups corresponding to L dimensions, respectively, where L is a positive integer; determining the comprehensive sequencing priority of a target traffic light group based on L sequencing priorities of the target traffic light group, wherein the target traffic light group is any one of the P traffic light groups; and determining the traffic light group with the highest comprehensive ranking priority in the P traffic light groups as the first traffic light group.
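The composite ranking described for the determining unit can be sketched as follows: each of the P candidate groups receives L per-dimension sequencing priorities (here, lower rank means higher priority), which are combined into one comprehensive priority. The weighted sum, the weights, and the example dimensions are assumptions for illustration; the patent does not fix a particular combination rule.

```python
def composite_priority(rankings, weights):
    """rankings: {group_id: [rank_dim1, ..., rank_dimL]} (lower is better).
    weights: one weight per dimension (assumed combination rule).
    Returns the group_id with the best (lowest) weighted rank sum, i.e.
    the highest comprehensive ranking priority."""
    scores = {
        gid: sum(w * r for w, r in zip(weights, ranks))
        for gid, ranks in rankings.items()
    }
    return min(scores, key=scores.get)

# L = 2 dimensions, P = 3 candidate groups:
rankings = {"A": [1, 3], "B": [2, 1], "C": [3, 2]}
print(composite_priority(rankings, weights=[0.7, 0.3]))  # → A
```

Group A wins here because its strong first-dimension rank outweighs its weaker second-dimension rank under the assumed weights.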
Optionally, the obtaining module 401 includes:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be identified, which is acquired by a camera in the driving process of a vehicle, and the image to be identified comprises image data of a traffic light;
and the intercepting unit is used for intercepting the area including the traffic lights in the image to be identified to obtain the target image.
Optionally, the apparatus further comprises:
the detection module is used for detecting the image data of the traffic lights in the target image to obtain the image data of Q traffic lights in the target image, wherein Q is a positive integer less than or equal to M;
and the correction module is used for correcting the image data of the Q traffic lights in the target image based on the clustering characteristics of the traffic light groups on the vehicle driving route to obtain the image data of the M traffic lights in the target image.
Optionally, the apparatus further comprises:
the tracking module is used for tracking the traffic light state of the traffic lights in the first traffic light group in the target image to obtain a state change result of the traffic lights in the first traffic light group;
and the prompting module is used for carrying out driving prompting based on the state change result of the traffic lights in the first traffic light group.
The traffic light identification device 400 provided by the application can realize each process of the traffic light identification method embodiments and achieve the same beneficial effects; details are not repeated here to avoid repetition.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 5, the device 500 comprises a computing unit 501, which may perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, and the like. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 performs the respective methods and processes described above, such as the traffic light identification method. For example, in some embodiments, the traffic light identification method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the traffic light identification method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the traffic light identification method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system that remedies the defects of high management difficulty and weak service expansibility in traditional physical host and virtual private server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (15)
1. A traffic light identification method, comprising:
acquiring a target image acquired in the running process of a vehicle, wherein the target image comprises image data of M traffic lights, and M is a positive integer;
identifying attribute information of the M traffic lights based on the image data of the M traffic lights;
clustering the M traffic lights according to distance information among the M traffic lights to obtain N traffic light groups, wherein the distance information is determined based on the image data of the M traffic lights, and N is a positive integer less than or equal to M;
and determining a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights, wherein the first traffic light group is a traffic light group corresponding to a vehicle driving route.
2. The method of claim 1, wherein the determining a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights comprises:
filtering a second traffic light group from the N traffic light groups based on the attribute information of the M traffic lights to obtain P traffic light groups, wherein the second traffic light group is a traffic light group irrelevant to a vehicle driving route, and P is a positive integer less than or equal to N;
a first traffic light group is determined from the P traffic light groups.
3. The method of claim 2, wherein said determining a first traffic light group from said P traffic light groups comprises:
for each traffic light group in the P traffic light groups, acquiring L sequencing priorities of the traffic light groups corresponding to L dimensions respectively, wherein L is a positive integer;
determining the comprehensive sequencing priority of a target traffic light group based on L sequencing priorities of the target traffic light group, wherein the target traffic light group is any one of the P traffic light groups;
and determining the traffic light group with the highest comprehensive ranking priority in the P traffic light groups as the first traffic light group.
4. The method of claim 1, wherein the acquiring of the target image captured during the driving of the vehicle comprises:
acquiring an image to be identified, which is acquired by a camera in the driving process of a vehicle, wherein the image to be identified comprises image data of a traffic light;
and intercepting the area including the traffic lights in the image to be identified to obtain the target image.
5. The method of claim 1, before identifying attribute information for the M traffic lights based on the image data for the M traffic lights, further comprising:
detecting image data of traffic lights in the target image to obtain image data of Q traffic lights in the target image, wherein Q is a positive integer less than or equal to M;
and correcting the image data of the Q traffic lights in the target image based on the clustering characteristics of the traffic light groups on the vehicle driving route to obtain the image data of the M traffic lights in the target image.
6. The method of claim 1, after determining a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights, further comprising:
tracking the traffic light state of the traffic lights in the first traffic light group in the target image to obtain a state change result of the traffic lights in the first traffic light group;
and carrying out driving prompt based on the state change result of the traffic lights in the first traffic light group.
7. A traffic light identification device comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target image acquired in the running process of a vehicle, the target image comprises image data of M traffic lights, and M is a positive integer;
the identification module is used for identifying the attribute information of the M traffic lights based on the image data of the M traffic lights;
the clustering module is used for clustering the M traffic lights according to distance information among the M traffic lights to obtain N traffic light groups, wherein the distance information is determined based on the image data of the M traffic lights, and N is a positive integer less than or equal to M;
and the determining module is used for determining a first traffic light group from the N traffic light groups based on the attribute information of the M traffic lights, wherein the first traffic light group is a traffic light group corresponding to a vehicle driving route.
8. The apparatus of claim 7, wherein the means for determining comprises:
the filtering unit is used for filtering a second traffic light group from the N traffic light groups based on the attribute information of the M traffic lights to obtain P traffic light groups, wherein the second traffic light group is a traffic light group irrelevant to a vehicle driving route, and P is a positive integer less than or equal to N;
and the determining unit is used for determining a first traffic light group from the P traffic light groups.
9. The device according to claim 8, wherein the determining unit is specifically configured to, for each of the P traffic light groups, obtain L sorting priorities of the traffic light groups corresponding to L dimensions, respectively, where L is a positive integer; determining the comprehensive sequencing priority of a target traffic light group based on L sequencing priorities of the target traffic light group, wherein the target traffic light group is any one of the P traffic light groups; and determining the traffic light group with the highest comprehensive ranking priority in the P traffic light groups as the first traffic light group.
10. The apparatus of claim 7, wherein the means for obtaining comprises:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be identified, which is acquired by a camera in the driving process of a vehicle, and the image to be identified comprises image data of a traffic light;
and the intercepting unit is used for intercepting the area including the traffic lights in the image to be identified to obtain the target image.
11. The apparatus of claim 7, further comprising:
the detection module is used for detecting the image data of the traffic lights in the target image to obtain the image data of Q traffic lights in the target image, wherein Q is a positive integer less than or equal to M;
and the correction module is used for correcting the image data of the Q traffic lights in the target image based on the clustering characteristics of the traffic light groups on the vehicle driving route to obtain the image data of the M traffic lights in the target image.
12. The apparatus of claim 7, further comprising:
the tracking module is used for tracking the traffic light state of the traffic lights in the first traffic light group in the target image to obtain a state change result of the traffic lights in the first traffic light group;
and the prompting module is used for carrying out driving prompting based on the state change result of the traffic lights in the first traffic light group.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product for performing the method of any one of claims 1-6 when the computer program product is run on an electronic device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011577703.2A CN112699773B (en) | 2020-12-28 | 2020-12-28 | Traffic light identification method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112699773A true CN112699773A (en) | 2021-04-23 |
CN112699773B CN112699773B (en) | 2023-09-01 |
Application Events
2020-12-28 — CN application CN202011577703.2A filed; granted as patent CN112699773B (status: Active)
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160371552A1 (en) * | 2014-03-10 | 2016-12-22 | Nissan Motor Co., Ltd. | Traffic Light Detecting Device and Traffic Light Detecting Method |
US20180012088A1 (en) * | 2014-07-08 | 2018-01-11 | Nissan Motor Co., Ltd. | Traffic Light Detection Device and Traffic Light Detection Method |
CN104408944A (en) * | 2014-11-10 | 2015-03-11 | 天津市市政工程设计研究院 | Lamp group based mixed traffic flow signal timing optimization method |
US20180137379A1 (en) * | 2015-04-08 | 2018-05-17 | Nissan Motor Co., Ltd. | Traffic Light Detection Device and Traffic Light Detection Method |
US20170186314A1 (en) * | 2015-12-28 | 2017-06-29 | Here Global B.V. | Method, apparatus and computer program product for traffic lane and signal control identification and traffic flow management |
CN108305475A (en) * | 2017-03-06 | 2018-07-20 | 腾讯科技(深圳)有限公司 | A kind of traffic lights recognition methods and device |
US20180307925A1 (en) * | 2017-04-20 | 2018-10-25 | GM Global Technology Operations LLC | Systems and methods for traffic signal light detection |
US20190087673A1 (en) * | 2017-09-15 | 2019-03-21 | Baidu Online Network Technology (Beijing) Co., Ltd | Method and apparatus for identifying traffic light |
US20200312127A1 (en) * | 2017-10-23 | 2020-10-01 | Bayerische Motoren Werke Aktiengesellschaft | Method and Apparatus for Determining Driving Strategy of a Vehicle |
US20200353932A1 (en) * | 2018-06-29 | 2020-11-12 | Beijing Sensetime Technology Development Co., Ltd. | Traffic light detection method and apparatus, intelligent driving method and apparatus, vehicle, and electronic device |
CN112016344A (en) * | 2019-05-28 | 2020-12-01 | 深圳市商汤科技有限公司 | State detection method and device of signal indicator lamp and driving control method and device |
CN110543814A (en) * | 2019-07-22 | 2019-12-06 | 华为技术有限公司 | Traffic light identification method and device |
CN110706494A (en) * | 2019-10-30 | 2020-01-17 | 北京百度网讯科技有限公司 | Control method, device, equipment and storage medium for automatic driving vehicle |
CN110930715A (en) * | 2019-11-21 | 2020-03-27 | 浙江大华技术股份有限公司 | Method and system for identifying red light running of non-motor vehicle and violation processing platform |
CN111079563A (en) * | 2019-11-27 | 2020-04-28 | 北京三快在线科技有限公司 | Traffic signal lamp identification method and device, electronic equipment and storage medium |
CN111292531A (en) * | 2020-02-06 | 2020-06-16 | 北京百度网讯科技有限公司 | Tracking method, device and equipment of traffic signal lamp and storage medium |
CN111428647A (en) * | 2020-03-25 | 2020-07-17 | 浙江浙大中控信息技术有限公司 | Traffic signal lamp fault detection method |
CN111661054A (en) * | 2020-05-08 | 2020-09-15 | 东软睿驰汽车技术(沈阳)有限公司 | Vehicle control method, device, electronic device and storage medium |
CN111627241A (en) * | 2020-05-27 | 2020-09-04 | 北京百度网讯科技有限公司 | Method and device for generating vehicle queuing information |
CN111627233A (en) * | 2020-06-09 | 2020-09-04 | 上海商汤智能科技有限公司 | Method and device for coordinating passing route for vehicle |
Non-Patent Citations (2)
Title |
---|
Li Haixia; Luo Fangfang: "Traffic Signal Light Recognition for Automobile Driver-Assistance Systems", Electronic Technology & Software Engineering, no. 13, pages 250 - 252 *
Zhao Jiujiu: "Design Approach and Working Process of a Real-Time Intelligent Control System for Traffic Lights", China Plant Engineering, no. 21, pages 123 - 124 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114333380A (en) * | 2021-12-13 | 2022-04-12 | 重庆长安汽车股份有限公司 | Traffic light identification method and system based on camera and V2x, and vehicle |
CN117152718A (en) * | 2023-10-31 | 2023-12-01 | 广州小鹏自动驾驶科技有限公司 | Traffic light response method, device, vehicle and computer readable storage medium |
CN117152718B (en) * | 2023-10-31 | 2024-03-15 | 广州小鹏自动驾驶科技有限公司 | Traffic light response method, device, vehicle and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112699773B (en) | 2023-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112580571A (en) | Vehicle running control method and device and electronic equipment | |
JP7292355B2 (en) | Methods and apparatus for identifying vehicle alignment information, electronics, roadside equipment, cloud control platforms, storage media and computer program products | |
CN112541475B (en) | Sensing data detection method and device | |
CN113421432A (en) | Traffic restriction information detection method and device, electronic equipment and storage medium | |
CN112699773B (en) | Traffic light identification method and device and electronic equipment | |
EP3910611A1 (en) | Method and apparatus for adjusting channelization of traffic intersection | |
CN112818792A (en) | Lane line detection method, lane line detection device, electronic device, and computer storage medium | |
CN113989777A (en) | Method, device and equipment for identifying speed limit sign and lane position of high-precision map | |
CN114037966A (en) | High-precision map feature extraction method, device, medium and electronic equipment | |
CN114973687B (en) | Traffic information processing method, device, equipment and medium | |
CN115641359A (en) | Method, apparatus, electronic device, and medium for determining motion trajectory of object | |
CN111667706A (en) | Lane-level road surface condition recognition method, road condition prompting method and device | |
CN114771576A (en) | Behavior data processing method, control method of automatic driving vehicle and automatic driving vehicle | |
CN114677848A (en) | Perception early warning system, method, device and computer program product | |
CN111814724B (en) | Lane number identification method, device, equipment and storage medium | |
CN113052047B (en) | Traffic event detection method, road side equipment, cloud control platform and system | |
CN114724113B (en) | Road sign recognition method, automatic driving method, device and equipment | |
CN114379587B (en) | Method and device for avoiding pedestrians in automatic driving | |
CN114596704B (en) | Traffic event processing method, device, equipment and storage medium | |
CN114694401B (en) | Method and device for providing reference vehicle speed in high-precision map and electronic equipment | |
CN115782919A (en) | Information sensing method and device and electronic equipment | |
CN115973190A (en) | Decision-making method and device for automatically driving vehicle and electronic equipment | |
CN115114312A (en) | Map data updating method and device and electronic equipment | |
CN114998883A (en) | License plate recognition method and device, electronic equipment and intelligent transportation equipment | |
CN114998863A (en) | Target road identification method, target road identification device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2021-10-12; Address after: 101, Floor 1, Building 1, Yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176; Applicant after: Apollo Zhilian (Beijing) Technology Co., Ltd.; Address before: 2/F, Baidu Building, 10 Shangdi 10th Street, Haidian District, Beijing 100085; Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co., Ltd. |
|
GR01 | Patent grant | ||