CN111310708B - Traffic signal lamp state identification method, device, equipment and storage medium - Google Patents

Traffic signal lamp state identification method, device, equipment and storage medium

Info

Publication number
CN111310708B
CN111310708B (application CN202010130208.0A)
Authority
CN
China
Prior art keywords
traffic signal
signal lamp
detection result
target
mapping
Prior art date
Legal status
Active
Application number
CN202010130208.0A
Other languages
Chinese (zh)
Other versions
CN111310708A (en)
Inventor
黄章帅
雷雨苍
李子贺
韩旭
Current Assignee
Guangzhou Weride Technology Co Ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd filed Critical Guangzhou Weride Technology Co Ltd
Publication of CN111310708A
Application granted
Publication of CN111310708B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads, of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a traffic signal lamp state identification method, a device, equipment and a storage medium. The method comprises the following steps: determining at least one alternative traffic signal lamp detection result according to the image acquired by the vehicle; determining a target traffic signal lamp matched with the vehicle position information from the alternative traffic signal lamps according to the map data; according to the relative position relation between the target traffic signal lamp and the vehicle, mapping the target traffic signal lamp into the image to obtain a traffic signal lamp mapping result; and determining the traffic signal lamp state according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result. By using the technical scheme of the embodiment of the invention, the state of the traffic signal lamp in front of the vehicle can be accurately identified.

Description

Traffic signal lamp state identification method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to a data processing technology, in particular to a traffic signal lamp state identification method, a device, equipment and a storage medium.
Background
The recognition technology of the traffic light state is widely applied to the fields of automatic driving, auxiliary driving, driving safety and the like of vehicles.
The existing traffic signal lamp state identification method is mainly based on the positioning and identification of the image interest region, and the method needs to position the interest region where the traffic signal lamp appears in the vehicle-mounted shooting image and then position the traffic signal lamp in the interest region for identification.
In carrying out the invention, the inventors have found that the prior art has the following drawbacks: due to factors such as deviations in vehicle positioning, variations in the vehicle-mounted camera parameters, and changes in the size and position of traffic lights in the images while the vehicle is running, an incorrect region of interest is easily located. In addition, the prior-art methods for identifying the traffic signal lamp state cannot exclude interference from non-traffic-signal lights or from other traffic signal lamps, and cannot accurately identify the traffic signal state when the resolution of the vehicle-mounted captured image is low.
Disclosure of Invention
The embodiment of the invention provides a traffic signal lamp state identification method, a device, equipment and a storage medium, which are used for accurately identifying the state of a traffic signal lamp in front of a vehicle.
In a first aspect, an embodiment of the present invention provides a traffic signal status identifying method, where the method includes:
determining at least one alternative traffic signal lamp detection result according to the image acquired by the vehicle;
determining a target traffic signal lamp matched with the vehicle position information from the alternative traffic signal lamps according to the map data;
according to the relative position relation between the target traffic signal lamp and the vehicle, mapping the target traffic signal lamp into the image to obtain a traffic signal lamp mapping result;
and determining the traffic signal lamp state according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result.
In a second aspect, an embodiment of the present invention further provides a traffic signal status identifying device, where the device includes:
The alternative traffic signal lamp detection result acquisition module is used for determining at least one alternative traffic signal lamp detection result according to the image acquired by the vehicle;
The target traffic signal lamp acquisition module is used for determining a target traffic signal lamp matched with the vehicle position information from the alternative traffic signal lamps according to the map data;
The traffic signal lamp mapping result acquisition module is used for mapping the target traffic signal lamp into the image according to the relative position relationship between the target traffic signal lamp and the vehicle to obtain a traffic signal lamp mapping result;
And the traffic signal lamp state determining module is used for determining the traffic signal lamp state according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the traffic light status identifying method according to any one of the embodiments of the present invention when executing the program.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are used to perform a traffic light status identification method according to any of the embodiments of the present invention.
According to the embodiment of the invention, after the alternative traffic signal lamp detection results are determined from the image acquired by the vehicle, the target traffic signal lamp that actually exists in front of the vehicle is mapped into the image acquired by the vehicle, and the position of the target traffic signal lamp in the image is used to assist in verifying the alternative traffic signal lamp detection results in the picture. This solves the problem in the prior art that the recognition error rate is high when the traffic signal lamp state is recognized directly from the image acquired by the vehicle, effectively eliminates interference from non-traffic-signal lights or other traffic signal lamps, and allows the traffic signal state to be recognized accurately even when the resolution of the vehicle-mounted captured image is low. Accurate identification of the state of the traffic light in front of the vehicle is thereby achieved, which further ensures safety while the vehicle is running.
Drawings
FIG. 1 is a flow chart of a traffic light status recognition method in accordance with a first embodiment of the present invention;
FIG. 2a is a flow chart of a traffic light status recognition method in a second embodiment of the present invention;
FIG. 2b is a flow chart of a method of identifying traffic light status suitable for use in embodiments of the present invention;
FIG. 2c is a flow chart of a method of projecting traffic signal locations in a three-dimensional laser point cloud onto a camera;
FIG. 2d is a flow chart of a method of similarity matching traffic light detection results and traffic light projections in an image;
fig. 3 is a schematic structural diagram of a traffic light status recognition device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device in a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a traffic light status recognition method according to a first embodiment of the present invention, where the present embodiment is applicable to a situation where a vehicle accurately recognizes a traffic light status in front of the vehicle during driving, and the method may be performed by a traffic light status recognition device, which may be implemented by software and/or hardware and is generally integrated in the vehicle, and used in conjunction with a vehicle-mounted camera configured in front of the vehicle.
As shown in fig. 1, the technical solution of the embodiment of the present invention specifically includes the following steps:
S110, determining at least one alternative traffic signal lamp detection result according to the image acquired by the vehicle.
The image acquired by the vehicle may be an image captured by the vehicle-mounted camera, and the alternative traffic signal lamp detection result may be the position, contour, display attribute information, and the like of a traffic signal lamp detected in the image acquired by the vehicle. The display attribute information may further include color information, shape information, etc. of the traffic signal; in a specific example, the color information may be red, yellow, green, etc., and the shape information may be a circle, a left arrow, a right arrow, an up arrow, etc. The embodiment of the invention does not limit the specific content of the alternative traffic signal lamp detection result or of the display attribute information.
In the embodiment of the invention, at least one alternative traffic signal lamp detection result is determined according to the image acquired by the vehicle, and the mode and the specific process for determining the alternative traffic signal lamp detection result according to the image acquired by the vehicle are not limited.
In an alternative implementation of the present embodiment, determining at least one alternative traffic light detection result from an image acquired by the vehicle may include: inputting an image acquired by a vehicle into a pre-trained traffic signal lamp identification model, and acquiring the detection result of at least one alternative traffic signal lamp; the alternative traffic signal lamp detection result comprises the position, the outline and the display attribute information of the alternative traffic signal lamp in the image.
The traffic signal lamp recognition model can be obtained through training according to a traditional machine learning algorithm or through training according to a deep learning algorithm, and the training process of the traffic signal lamp recognition model is not limited in the embodiment of the invention. In one specific example, the traffic light recognition model may be trained based on a conventional machine learning algorithm such as logistic regression, support vector machine, and the like. In another specific example, the traffic light recognition model may be trained according to a deep learning algorithm such as a deep convolutional neural network. And inputting the image acquired by the vehicle into a traffic signal lamp identification model to obtain an alternative traffic signal lamp detection result. Alternatively, the marking of the traffic signal lamp can be first performed in the image acquired by the vehicle, and then the detection result of the alternative traffic signal lamp can be acquired through the traffic signal lamp identification model.
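For illustration only, the following sketch shows how a pre-trained recognition model might be applied to an image acquired by the vehicle to obtain alternative traffic signal lamp detection results; the detector interface, the label names, and the 0.5 score threshold are assumptions and do not reflect the model actually used in the embodiments.

```python
# Minimal sketch (assumed detector interface, not the embodiment's implementation):
# run a pre-trained detector on one camera frame and keep traffic-light detections.
def detect_candidate_lights(image, detector, score_thresh=0.5):
    detections = detector(image)  # assumed to return a list of dicts, one per detection
    candidates = []
    for det in detections:
        if det["label"] == "traffic_light" and det["score"] >= score_thresh:
            candidates.append({
                "bbox": det["bbox"],        # (u_min, v_min, u_max, v_max) in pixels
                "color": det.get("color"),  # e.g. "red", "yellow", "green"
                "shape": det.get("shape"),  # e.g. "circle", "left_arrow", "up_arrow"
            })
    return candidates
```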
In another alternative implementation of the present embodiment, determining at least one alternative traffic light detection result from an image acquired by the vehicle may include:
Acquiring a standard traffic signal lamp image set, wherein the standard traffic signal lamp image set comprises typical images of various different traffic signal lamps;
and searching, in the image acquired by the vehicle, for matches with each standard traffic signal lamp image included in the standard traffic signal lamp image set; if a target standard traffic signal lamp image matches a first area in the image acquired by the vehicle, determining that the target standard traffic signal lamp image is identified in the first area.
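A minimal sketch of this search-and-match variant is given below using OpenCV template matching; the normalized-correlation method and the 0.8 matching threshold are illustrative assumptions rather than details specified by the embodiment.

```python
import cv2

# Slide each standard traffic-signal-lamp template over the camera image and
# record a first area wherever the normalized correlation exceeds a threshold.
def match_standard_templates(image_gray, templates, thresh=0.8):
    matches = []
    for name, tmpl in templates.items():  # templates: {name: grayscale template image}
        result = cv2.matchTemplate(image_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val >= thresh:
            h, w = tmpl.shape[:2]
            matches.append({
                "template": name,
                "bbox": (max_loc[0], max_loc[1], max_loc[0] + w, max_loc[1] + h),
                "score": float(max_val),
            })
    return matches
```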
And S120, determining a target traffic signal lamp matched with the vehicle position information from the alternative traffic signal lamps according to the map data.
The map data may be data in a road area where the vehicle is located, the vehicle position information may be information about a position of the vehicle in the road where the vehicle is located, and the target traffic signal may be a traffic signal located in front of the vehicle and whose state needs to be determined.
In the embodiment of the invention, a traffic signal lamp positioned in front of a vehicle is determined in a map as a target traffic signal lamp.
Alternatively, the real-time positioning result (i.e., vehicle position information) of the vehicle may be acquired by a positioning module configured on the vehicle, such as a GPS module, and when it is determined that the image acquired by the vehicle identifies an alternative traffic signal, the traffic signal located in front of the vehicle is identified in the map data according to the real-time positioning result.
In an alternative embodiment of the present invention, determining a target traffic signal that matches the vehicle location information from the candidate traffic signals according to the map data may include: determining a search area according to the vehicle position information, and acquiring geographic position information of at least one traffic signal lamp in the search area from map data; and screening at least one traffic signal lamp of which the geographic position information and the vehicle position information meet the distance association condition as the target traffic signal lamp.
The search area may be an area within a certain position range of the vehicle, and the geographic position information of the traffic signal light may be information related to the position of the traffic signal light in the map. The distance-related condition may be a condition satisfied by a distance between the vehicle position and the traffic light position, for example, a relative distance between the two is 5 meters or less or 10 meters or the like.
Alternatively, the distance-related condition may be that the distance between the traffic light position and the vehicle position is closest. The advantage of this arrangement is that traffic signals nearest to the vehicle can be preferentially identified, providing convenience to the driver. In a specific example, a nearest neighbor search algorithm K-d Tree (K-dimensional Tree) may be used to obtain a traffic signal lamp closest to the vehicle location, and the method of obtaining a traffic signal lamp closest to the vehicle location is not limited in this embodiment.
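For illustration, the sketch below indexes the map's traffic signal lamp positions with a K-d tree and returns the lamps within a search radius of the vehicle, nearest first; the 50-metre radius and the planar (x, y) coordinates are assumptions, not values from the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_target_lights(light_positions_xy, vehicle_xy, radius_m=50.0):
    """light_positions_xy: (N, 2) map coordinates; returns indices, nearest first."""
    light_positions_xy = np.asarray(light_positions_xy, dtype=float)
    tree = cKDTree(light_positions_xy)
    in_radius = tree.query_ball_point(vehicle_xy, r=radius_m)  # lamps inside the search area
    if not in_radius:
        return []
    dists = np.linalg.norm(light_positions_xy[in_radius] - np.asarray(vehicle_xy), axis=1)
    return [in_radius[i] for i in np.argsort(dists)]  # nearest traffic signal lamp first
```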
In the embodiment of the invention, the traffic signal lamp meeting the distance association condition is screened out from the traffic signal lamps within a certain position range from the vehicle on the map to serve as a target traffic signal lamp.
And S130, mapping the target traffic signal lamp into the image according to the relative position relation between the target traffic signal lamp and the vehicle to obtain a traffic signal lamp mapping result.
The traffic signal lamp mapping result may be a result obtained by projecting the target traffic signal lamp in the image.
In the embodiment of the invention, the target traffic signal lamp is mapped into the image according to the distance between the position of the target traffic signal lamp and the position of the vehicle, and the traffic signal lamp mapping result of the target traffic signal lamp in the image is obtained. In order to map the target traffic signal lamp into the image, the relative positional relationship between the target traffic signal lamp and the vehicle, that is, the relative distance between each point of the target traffic signal lamp and the vehicle, needs to be determined first. Once the relative positional relationship is obtained, the target traffic signal lamp can be mapped into the image by combining the internal and external parameters of the vehicle-mounted camera with the pinhole imaging principle of the camera.
Optionally, the vehicle may send a laser radar signal to the surrounding environment (mainly the front), and based on the echo signal of the received laser radar signal, obtain point cloud data in front of the vehicle, and identify the target traffic signal lamp in the point cloud data, so as to obtain the relative positional relationship between each point in the target traffic signal lamp and the vehicle;
Or the vehicle and the target traffic signal lamp are respectively mapped into a three-dimensional street view map according to the vehicle position information and the geographic position information of the target traffic signal lamp, and the relative distance between the target point on the vehicle and each point on the target traffic signal lamp is calculated in the three-dimensional street view map and is used as the relative position relation between each point in the target traffic signal lamp and the vehicle.
In an optional embodiment of the present invention, mapping the target traffic signal lamp to the image according to the relative positional relationship between the target traffic signal lamp and the vehicle to obtain a traffic signal lamp mapping result may include: marking the target traffic signal lamp in point cloud data acquired by a vehicle in real time according to the geographic position information of the target traffic signal lamp; and mapping the marked target traffic signal lamp into the image according to the internal and external parameters of the vehicle-mounted camera of the vehicle to obtain a traffic signal lamp mapping result.
The point cloud data may be data recorded in the form of points including three-dimensional coordinates. The internal and external parameters of the in-vehicle camera may include internal parameters of the in-vehicle camera, including a focal length of the camera, a pixel size, and the like, and the external parameters may include parameters related to a position of the camera in the map, and in a specific example, the external parameters may be a position, a rotation direction, an offset direction, and the like of the camera in the map.
In the embodiment of the invention, the vehicle collects the point cloud data in real time, and marks the target traffic signal lamp in the point cloud data according to the geographic position information of the target traffic signal lamp. And mapping the marked target traffic signal lamp according to the internal parameters and the external parameters of the vehicle-mounted camera. The embodiment does not limit the manner of collecting the point cloud data.
And S140, determining the traffic signal lamp state according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result.
The traffic light status in front of the vehicle may be a status determined from the color information and control information of the traffic light in front of the vehicle. In a specific example, if the color information of the traffic light in front of the vehicle is green and the control information is straight ahead, the traffic light state in front of the vehicle may be that the traffic light nearest to the vehicle indicates that going straight ahead is permitted.
In the embodiment of the invention, the traffic light state is determined according to both the traffic light mapping result and the alternative traffic light detection result, rather than according to the alternative traffic light detection result alone. The advantage of this arrangement is that it improves the accuracy of traffic light state recognition, reduces errors, and improves the user experience.
In an alternative embodiment of the present invention, determining the traffic light status according to the traffic light mapping result and the alternative traffic light detection result may include: and acquiring a target detection result matched with the traffic signal lamp mapping result from the alternative traffic signal lamp detection results, and determining the traffic signal lamp state according to the target detection result.
In the embodiment of the invention, the alternative traffic signal lamp detection result matched with the traffic signal lamp mapping result is selected as the target detection result from all the alternative traffic signal lamp detection results, and the traffic signal lamp state is determined according to the target detection result. In an alternative embodiment of the present invention, determining the traffic signal status according to the target detection result may include: and determining the state of the traffic signal lamp according to the target detection result and the control information of the target traffic signal lamp.
The control information may be information determined according to shape information of the target traffic signal lamp. In a specific example, when the shape information of the target traffic signal is an up arrow, it may be determined that the control information is straight. When the shape information of the target traffic signal lamp is a left arrow, it may be determined that the control information is a left turn.
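As a simple illustration of combining the matched detection with the target traffic signal lamp's control information, the sketch below derives the control information from the lamp's shape and pairs it with the detected color; the shape-to-control mapping and the returned fields are assumptions made for this example.

```python
# Assumed shape-to-control mapping, for illustration only.
SHAPE_TO_CONTROL = {
    "up_arrow": "straight",
    "left_arrow": "left_turn",
    "right_arrow": "right_turn",
}

def determine_light_state(target_detection, target_light_shape):
    control = SHAPE_TO_CONTROL.get(target_light_shape, "unknown")
    color = target_detection.get("color", "unknown")
    # e.g. color == "green" and control == "straight" -> going straight ahead is permitted.
    return {"color": color, "control": control}
```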
In an alternative implementation of the embodiment of the present invention, the target detection result is selected from the candidate traffic signal detection results, and thus includes the position, the outline, the display attribute information, and the like of the traffic signal detected by the image acquired by the vehicle. After the target detection result is selected, the traffic signal lamp state can also be determined directly according to the target detection result.
According to the technical scheme of this embodiment, after the alternative traffic signal lamp detection results are determined from the image acquired by the vehicle, the target traffic signal lamp that actually exists in front of the vehicle is mapped into the image acquired by the vehicle, and the position of the target traffic signal lamp in the image is used to assist in verifying the alternative traffic signal lamp detection results in the picture. This solves the problem in the prior art that the recognition error rate is high when the traffic signal lamp state is recognized directly from the image acquired by the vehicle, effectively eliminates interference from non-traffic-signal lights or other traffic signal lamps, and allows the traffic signal state to be recognized accurately even when the resolution of the vehicle-mounted captured image is low. Accurate identification of the state of the traffic light in front of the vehicle is thereby achieved, which further ensures safety while the vehicle is running.
Example two
Fig. 2a is a flowchart of a traffic light status recognition method according to a second embodiment of the present invention, where, based on the foregoing embodiment, an acquisition process of a detection result of an alternative traffic light, an acquisition process of a target traffic light, an acquisition process of a mapping result of a traffic light, and an acquisition process of a status of a traffic light are further specified.
Correspondingly, as shown in fig. 2a, the technical solution of the embodiment of the present invention specifically includes the following steps:
S210, inputting an image acquired by the vehicle into a pre-trained traffic light identification model, and acquiring the at least one alternative traffic light detection result.
S220, determining a search area according to the vehicle position information, and acquiring geographic position information of at least one traffic signal lamp in the search area from map data.
And S230, screening out at least one traffic signal lamp of which the geographic position information and the vehicle position information meet the distance association condition as the target traffic signal lamp.
S240, acquiring point cloud data.
The laser sensor may be a sensor arranged on a vehicle, measuring by using a laser technology, and acquiring point cloud data.
In the embodiment of the invention, the point cloud data are collected by transmitting and receiving laser signals through the laser sensor arranged on the vehicle. The advantage of this arrangement is that the laser sensor can continuously collect accurate data about the vehicle itself and its surroundings and can be used to position the vehicle precisely; moreover, the point cloud data collected by the laser sensor have good real-time performance and high precision.
S250, marking the target traffic signal lamp in the point cloud data according to the vehicle position information and the geographic position information of the target traffic signal lamp.
In the embodiment of the invention, after the point cloud data is acquired, the target traffic signal lamp can be marked according to the data attribute of the point cloud data and the position information of the vehicle and the target traffic signal lamp.
Alternatively, after the point cloud data of the road are acquired by the laser sensor on the vehicle, the position of the target traffic signal lamp may be labeled in the point cloud data of the road according to the current position information of the vehicle (which may include, for example, the position coordinates and yaw angle of the vehicle) and the positional relationship between the screened target traffic signal lamp and the vehicle. This has the advantage of facilitating the acquisition of the vertex coordinates of the target traffic signal lamp.
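The sketch below illustrates one way the target traffic signal lamp's map position could be expressed in the vehicle (point cloud) frame using the vehicle's position coordinates and yaw angle, so that the lamp can be labeled in the point cloud; the planar two-dimensional transform is a simplifying assumption.

```python
import numpy as np

def light_in_vehicle_frame(light_xy_map, vehicle_xy_map, vehicle_yaw_rad):
    """Transform a map-frame (x, y) position into the vehicle frame."""
    c, s = np.cos(vehicle_yaw_rad), np.sin(vehicle_yaw_rad)
    # Map-to-vehicle rotation is the transpose of the vehicle-to-map rotation R(yaw).
    R_map_to_vehicle = np.array([[c, s],
                                 [-s, c]])
    offset = np.asarray(light_xy_map, dtype=float) - np.asarray(vehicle_xy_map, dtype=float)
    return R_map_to_vehicle @ offset
```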
And S260, acquiring each vertex coordinate of each target traffic signal lamp in the point cloud data.
The vertex coordinates may be the three-dimensional coordinates of the vertices of the target traffic signal lamp. In one specific example, 8 three-dimensional coordinates {P_wi = (x_i, y_i, z_i), i = 1 … 8} may be used to represent the 8 vertices of a traffic light.
In the embodiment of the invention, the target traffic signal lamp can be projected according to the vertex coordinates of the target traffic signal lamp, and the vertex coordinates can be acquired according to the point cloud data and the geographic position information of the target traffic signal lamp.
And S270, mapping each vertex coordinate into the image according to the internal and external parameters of the vehicle-mounted camera of the vehicle to obtain pixel point coordinates corresponding to each vertex coordinate of each target traffic signal lamp.
The internal and external parameters of the vehicle-mounted camera comprise internal parameters and external parameters of the camera, the internal parameters of the camera can comprise a focal length of the camera, an imaging pixel size and the like, and the external parameters of the camera can comprise a position, a rotation direction, an offset direction and the like of the camera in a three-dimensional coordinate system. The process of determining the internal and external parameters of the camera is also called camera calibration, and the specific implementation method of camera calibration is not limited in this embodiment. The pixel point coordinates may be coordinates of each vertex of the target traffic signal in the image after the target traffic signal is mapped into the image. In one specific example, each vertex can be mapped to one pixel P_uv = (u, v) on the camera two-dimensional image by transformation.
In the embodiment of the invention, the vertices of the target traffic signal lamp are mapped into the image according to the internal and external parameters of the vehicle-mounted camera and the known three-dimensional coordinates of the vertices of the target traffic signal lamp, by means of the pinhole imaging principle, so as to obtain the pixel point coordinates of each vertex.
In an optional embodiment of the present invention, mapping each vertex coordinate to the image according to the internal and external parameters of the vehicle-mounted camera of the vehicle to obtain a pixel point coordinate corresponding to each vertex coordinate of each target traffic signal lamp may include: mapping each vertex coordinate P_w into the image by the formula P_uv = K(R·P_w + t) to obtain the pixel point coordinate P_uv corresponding to each vertex coordinate of each target traffic signal lamp; wherein P_w = (x_i, y_i, z_i), i ∈ [1, N], N represents the number of vertices of a target traffic signal lamp, P_uv = (u, v), u and v represent the pixel coordinates, K represents the internal parameter matrix of the vehicle-mounted camera, R represents the rotation matrix of the vehicle-mounted camera, and t represents the translation vector of the vehicle-mounted camera;
the internal parameters of the vehicle-mounted camera can be represented by an internal parameter matrix K, and the external parameters of the vehicle-mounted camera can be represented by a rotation matrix R and a translation vector t.
Exemplarily, the internal parameter matrix may be K = [[f_x, s, x_0], [0, f_y, y_0], [0, 0, 1]], where f_x and f_y denote the focal lengths of the camera, x_0 and y_0 denote the principal point offset of the camera, and s denotes the skew between the pixel axes. The rotation matrix can be written as a 3×3 matrix R whose columns represent the directions of the world coordinate system axes in the camera coordinate system. The translation vector can be written as a 3×1 vector t, which indicates the position of the origin of the world coordinate system in the camera coordinate system.
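The projection P_uv = K(R·P_w + t) can be written compactly as below; the homogeneous image coordinates are divided by depth to obtain pixel coordinates, and the calibration values shown are placeholders rather than real camera parameters.

```python
import numpy as np

def project_vertices(vertices_w, K, R, t):
    """vertices_w: (N, 3) vertex coordinates P_w; K: (3, 3); R: (3, 3); t: (3,)."""
    vertices_w = np.asarray(vertices_w, dtype=float)
    cam = (R @ vertices_w.T).T + t        # vertices expressed in the camera frame
    uvw = (K @ cam.T).T                   # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]       # pixel coordinates P_uv = (u, v)

# Placeholder calibration (assumed values, for illustration only).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
```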
S280, calculating pixel point coordinate average values of adjacent pixel points to obtain each mapping point coordinate of each target traffic signal lamp in the image, and taking each mapping point coordinate as a traffic signal lamp mapping result.
The pixel coordinate average value may be the average of the pixel coordinates of two adjacent mapped vertices. In one specific example, when the coordinates of two adjacent pixel points are P_uv1 = (u1, v1) and P_uv2 = (u2, v2), the coordinates of the mapping point are P_uv = ((u1 + u2)/2, (v1 + v2)/2). The mapping point coordinates may be the coordinates of the mapping points obtained by averaging the coordinates of adjacent pixel points.
In the embodiment of the invention, the three-dimensional vertex coordinates are mapped into the image to obtain the corresponding pixel points. Each pixel point has a nearest neighboring pixel point; the coordinates of two neighboring pixel points are averaged so that the two pixel points are merged into one point, and the merged point is used as a mapping point. The coordinates of all the mapping points constitute the traffic signal lamp mapping result.
In an optional embodiment of the present invention, calculating the average value of the coordinates of adjacent pixel points to obtain the coordinates of each mapping point of each target traffic signal lamp in the image may include: according to the pixel point coordinates P_uv, grouping every two adjacent pixel points into pixel point pairs to obtain N/2 pixel point pairs; and calculating the average pixel coordinates of each pixel point pair to obtain the mapping point coordinates P_uvi of each target traffic signal lamp in the image; wherein P_uvi = (u_i, v_i), i ∈ [1, M], and M represents the number of coordinates in the traffic signal lamp mapping result of one target traffic signal lamp in the image, M = N/2.
In the embodiment of the invention, the two nearest adjacent pixel points can be grouped into one pixel point pair, and the coordinates of the two pixel points in the pair are averaged to obtain the mapping point coordinates. In a specific example, 8 three-dimensional coordinates {P_wi = (x_i, y_i, z_i), i = 1 … 8} are used to represent the 8 vertices of the traffic signal lamp; each vertex is mapped into the image to obtain 8 pixel point coordinates {P_uvi = (u_i, v_i), i = 1 … 8}; the pixel coordinates of adjacent positions are averaged; and finally 4 mapping point coordinates {P_uvi = (u_i, v_i), i = 1 … 4} are used to represent the mapping result of the target traffic signal lamp in the image.
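A minimal sketch of merging the 8 projected vertex pixels into 4 mapping points follows; pairing each pixel with its nearest unused neighbor is an assumption about how "adjacent" pixel points are selected, and an even number of pixels is assumed.

```python
import numpy as np

def merge_adjacent_pixels(pixels_uv):
    """pixels_uv: iterable of (u, v) pixel coordinates, assumed even in number."""
    pixels = [np.asarray(p, dtype=float) for p in pixels_uv]
    mapping_points, used = [], set()
    for i, p in enumerate(pixels):
        if i in used:
            continue
        # Pair p with its nearest not-yet-used neighbor and average the pair.
        dist, j = min((np.linalg.norm(p - q), j)
                      for j, q in enumerate(pixels) if j != i and j not in used)
        used.update({i, j})
        mapping_points.append((p + pixels[j]) / 2.0)  # ((u1+u2)/2, (v1+v2)/2)
    return np.array(mapping_points)                   # M = N/2 mapping points
```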
S290, respectively calculating the similarity between the detection result of each alternative traffic signal lamp and the mapping result of each traffic signal lamp.
The similarity is used to measure how alike two objects are. It is generally expressed by calculating the distance between the features of the objects: a small distance corresponds to a large similarity, and a large distance to a small similarity.
In an alternative embodiment of the present invention, calculating the similarity between the alternative traffic signal detection result and the traffic signal mapping result may include: acquiring a first vertex pixel coordinate set and a second vertex pixel coordinate set which correspond to the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result respectively; calculating at least one two-dimensional image attribute parameter corresponding to the first vertex pixel coordinate set and the second vertex pixel coordinate set respectively; calculating at least one similarity calculation result between the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result according to the two-dimensional image attribute parameters; and obtaining the similarity between the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result according to the at least one similarity calculation result.
The first vertex pixel coordinate set may be a set of pixel coordinates corresponding to the detection result of each alternative traffic signal lamp. The second set of vertex pixel coordinates may be a set of pixel coordinates corresponding to each traffic signal mapping result. The two-dimensional image attribute parameters may include length, height, center point distance, and the like.
In a specific example, the pixel coordinates corresponding to the alternative traffic signal lamp detection result Pa are Pa = (Pa_1, Pa_2, Pa_3, Pa_4), and the pixel coordinates corresponding to the traffic signal lamp mapping result Pb are Pb = (Pb_1, Pb_2, Pb_3, Pb_4). The two-dimensional image attribute parameters of Pa and Pb are calculated, specifically the distance between the center points of Pa and Pb, the height and length of Pa, and the height and length of Pb. The first similarity is calculated according to the ratio of the center-point distance between Pa and Pb to the diagonal length of Pa, the second similarity according to the ratio of the height of Pa to the height of Pb, and the third similarity according to the ratio of the length of Pa to the length of Pb. Finally, the similarity between the alternative traffic signal lamp detection result Pa and the traffic signal lamp mapping result Pb is calculated from the values of these similarities. Optionally, the values of the respective similarities may be added directly, and their sum used as the similarity between the alternative traffic signal lamp detection result Pa and the traffic signal lamp mapping result Pb. When the similarities have different weights, the sum of the products of the similarity values and their weights can also be used as the similarity between the alternative traffic signal lamp detection result Pa and the traffic signal lamp mapping result Pb.
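The following sketch computes such a similarity from the two-dimensional image attributes (center-point distance relative to the detection's diagonal, and height and length ratios); converting each ratio into a score in [0, 1] and summing with equal weights is an assumption, since the embodiment only specifies which attributes are compared.

```python
import numpy as np

def box_attributes(corners):
    """corners: (4, 2) pixel coordinates of one detection or mapping result."""
    corners = np.asarray(corners, dtype=float)
    mins, maxs = corners.min(axis=0), corners.max(axis=0)
    center = (mins + maxs) / 2.0
    length, height = maxs - mins
    return center, length, height, float(np.hypot(length, height))  # ..., diagonal

def similarity(det_corners, map_corners, weights=(1.0, 1.0, 1.0)):
    c_a, l_a, h_a, d_a = box_attributes(det_corners)
    c_b, l_b, h_b, _ = box_attributes(map_corners)
    s_center = max(0.0, 1.0 - np.linalg.norm(c_a - c_b) / d_a)  # closer centers score higher
    s_height = min(h_a, h_b) / max(h_a, h_b)                    # similar heights score higher
    s_length = min(l_a, l_b) / max(l_a, l_b)                    # similar lengths score higher
    return float(np.dot(weights, [s_center, s_height, s_length]))
```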
In the embodiment of the invention, the similarity is calculated according to the position information of the alternative traffic signal lamp detection result and the traffic signal lamp mapping result. The embodiment of the invention does not limit the attribute information used for calculating the similarity, and can also calculate the similarity by using other attributes of the alternative traffic signal lamp detection result and the traffic signal lamp mapping result.
In another alternative embodiment of the present invention, when the position information of the alternative traffic light detection result and the traffic light mapping result is insufficient to describe their similarity, the similarity may be calculated in combination with the control information of the alternative traffic light detection result and the traffic light mapping result. First, a base similarity is calculated from the position information of the alternative traffic signal lamp detection result and the traffic signal lamp mapping result, and it is then judged whether the control information of the two is identical. In a specific example, if both are displayed as right turns, their control information is the same. If the control information of the two is the same, the base similarity is weighted by a coefficient w1, where w1 is larger than 1; if the control information of the two is different, the base similarity is weighted by a coefficient w2, where w2 is smaller than 1, so as to obtain the final similarity between the alternative traffic signal lamp detection result and the traffic signal lamp mapping result.
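A sketch of this weighting follows; the values w1 = 1.2 and w2 = 0.8 are illustrative assumptions that merely satisfy w1 > 1 and w2 < 1.

```python
def weighted_similarity(base_similarity, det_control, map_control, w1=1.2, w2=0.8):
    # Scale the base similarity up when the control information matches, down otherwise.
    return base_similarity * (w1 if det_control == map_control else w2)
```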
And S2100, matching the alternative traffic signal lamp detection result with the traffic signal lamp mapping result according to the calculation result of the similarity and a preset matching algorithm, and taking the matched alternative traffic signal lamp detection result as a target detection result.
The matching algorithm is used for searching the optimal matching between the detection result of each alternative traffic signal lamp and the mapping result of each traffic signal lamp. Optionally, the matching algorithm may be a KM (Kuhn-Munkres) algorithm, or a greedy strategy may be used, and the selection of the matching algorithm is not limited in the embodiment of the present invention.
Alternatively, the best match between the alternative traffic signal lamp detection result and the traffic signal lamp mapping result can be obtained through a bipartite graph maximum weight matching algorithm. And respectively taking the detection result of each alternative traffic light and the mapping result of each traffic light as nodes of the bipartite graph, and taking the similarity between the detection result of each alternative traffic light and the mapping result of each traffic light as the weight between the nodes. When each traffic signal lamp mapping result finds the corresponding alternative traffic signal lamp detection result and the sum of the weights between each traffic signal lamp mapping result and the corresponding alternative traffic signal lamp detection result is maximum, the alternative traffic signal lamp detection result which is optimally matched with each traffic signal lamp mapping result can be obtained as the target detection result.
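For illustration, the maximum-weight matching can be computed with the Hungarian (KM) algorithm as sketched below; discarding matched pairs whose similarity falls below a minimum value is an added assumption, not a requirement of the embodiment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections_to_mappings(sim_matrix, min_similarity=0.1):
    """sim_matrix[i, j]: similarity between detection i and mapping result j."""
    sim = np.asarray(sim_matrix, dtype=float)
    rows, cols = linear_sum_assignment(sim, maximize=True)  # maximum-weight matching
    return [(int(r), int(c)) for r, c in zip(rows, cols) if sim[r, c] >= min_similarity]
```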
In the embodiment of the invention, according to the similarity and the matching algorithm, the detection results of the alternative traffic signal lamps and the mapping results of the traffic signal lamps are matched, and the detection results of the alternative traffic signal lamps matched with the mapping results of the traffic signal lamps are used as target detection results.
S2110, color information of the traffic signal lamp is acquired in the display attribute information of the target detection result.
In the embodiment of the invention, the color information of the traffic signal lamp is acquired from the display attribute information of the target detection result and is used to indicate whether the traffic signal lamp displays a proceed or a stop state.
S2120, determining the state of the traffic light in front of the vehicle according to the color information and the control information of the target traffic light.
In an alternative embodiment of the present invention, fig. 2b provides a flowchart of a method for identifying traffic light status, which is suitable for use in an embodiment of the present invention, and as shown in fig. 2b, the steps of the method for identifying traffic light status include:
S1, detecting and identifying traffic signal lamps in the vehicle-mounted camera image by adopting a machine learning method.
S2, querying, based on the vehicle positioning, information about the nearest traffic signal lamp in front of the vehicle, and then projecting the position of the traffic signal lamp in the three-dimensional laser point cloud onto the camera image by using the positional relationship between the camera and the laser sensor.
Fig. 2c is a flowchart of a method of projecting traffic light positions in a three-dimensional laser point cloud onto a camera, as shown in fig. 2c, comprising the steps of:
S201, acquiring three-dimensional laser point cloud data of a road by using a laser sensor, and marking the position and the attribute of a traffic signal lamp in the three-dimensional laser point cloud.
S202, obtaining the internal and external parameters of the camera through calibration.
S203, based on the parameters of the camera, the position of the traffic signal lamp in the three-dimensional laser point cloud is projected onto a two-dimensional image shot by the vehicle-mounted camera.
And S3, performing similarity matching based on the detection result of the traffic signal lamp in the image and the projection of the traffic signal lamp to obtain the optimal matching result.
Fig. 2d is a flowchart of a method for performing similarity matching between a traffic light detection result and a traffic light projection in an image, as shown in fig. 2d, and the method includes the following steps:
S301, calculating the similarity between each traffic signal lamp detection result in the camera image and the projection of the traffic signal lamp.
S302, calculating an optimal matching strategy according to the similarity, and obtaining optimal matching of the traffic signal lamp and the detection result.
And S4, for each matched pair, combining the detection result with the labeling information of the corresponding traffic signal lamp in the three-dimensional point cloud, so as to obtain the traffic signal state.
According to the technical scheme of this embodiment, an image acquired by the vehicle is input into the traffic signal lamp identification model to obtain the alternative traffic signal lamp detection results. The geographic position information of traffic signal lamps within a search area determined from the vehicle position information is obtained from the map data, and the target traffic signal lamps are obtained by screening. Using the point cloud data acquired in real time and the position information of the vehicle and of the target traffic signal lamps, the target traffic signal lamps are labeled in the point cloud data and mapped into the image according to the camera parameters to obtain the traffic signal lamp mapping results. The similarity between each alternative traffic signal lamp detection result and each traffic signal lamp mapping result is then calculated, an optimal matching is performed to obtain the target detection results, and the traffic signal lamp state is determined according to the target detection results and the control information of the target traffic signal lamps. This solves the problem in the prior art that the recognition error rate is high when the traffic signal lamp state is recognized directly from the image acquired by the vehicle, effectively eliminates interference from non-traffic-signal lights or other traffic signal lamps, and allows the traffic signal state to be recognized accurately even when the resolution of the vehicle-mounted captured image is low. Accurate identification of the state of the traffic light in front of the vehicle is thereby achieved, which further ensures safety while the vehicle is running.
Example III
Fig. 3 is a schematic structural diagram of a traffic light status recognition device according to a third embodiment of the present invention, where the device includes: an alternative traffic light detection result acquisition module 310, a target traffic light acquisition module 320, a traffic light mapping result acquisition module 330, and a traffic light status determination module 340.
Wherein:
An alternative traffic light detection result acquisition module 310, configured to determine at least one alternative traffic light detection result according to an image acquired by the vehicle;
A target traffic signal acquisition module 320, configured to determine, from the candidate traffic signals, a target traffic signal that matches the vehicle location information according to the map data;
The traffic signal lamp mapping result obtaining module 330 is configured to map the target traffic signal lamp to the image according to the relative positional relationship between the target traffic signal lamp and the vehicle, so as to obtain a traffic signal lamp mapping result;
And the traffic light state determining module 340 is configured to determine a traffic light state according to the traffic light mapping result and the alternative traffic light detection result.
According to the technical scheme of this embodiment, after the alternative traffic signal lamp detection results are determined from the image acquired by the vehicle, the target traffic signal lamp that actually exists in front of the vehicle is mapped into the image acquired by the vehicle, and the position of the target traffic signal lamp in the image is used to assist in verifying the alternative traffic signal lamp detection results in the picture. This solves the problem in the prior art that the recognition error rate is high when the traffic signal lamp state is recognized directly from the image acquired by the vehicle, effectively eliminates interference from non-traffic-signal lights or other traffic signal lamps, and allows the traffic signal state to be recognized accurately even when the resolution of the vehicle-mounted captured image is low. Accurate identification of the state of the traffic light in front of the vehicle is thereby achieved, which further ensures safety while the vehicle is running.
On the basis of the above embodiment, the traffic light status determining module 340 includes:
And the target detection result acquisition unit is used for acquiring a target detection result matched with the traffic signal lamp mapping result from the alternative traffic signal lamp detection results, and determining the traffic signal lamp state according to the target detection result.
On the basis of the above embodiment, the traffic signal mapping result obtaining module 330 includes:
The target traffic signal lamp labeling unit is used for labeling the target traffic signal lamp in point cloud data acquired by a vehicle in real time according to the geographic position information of the target traffic signal lamp;
And the target traffic signal lamp mapping unit is used for mapping the marked target traffic signal lamp into the image according to the internal and external parameters of the vehicle-mounted camera of the vehicle to obtain a traffic signal lamp mapping result.
On the basis of the foregoing embodiment, the alternative traffic light detection result obtaining module 310 includes:
The alternative traffic signal lamp detection result acquisition unit is used for inputting the image acquired by the vehicle into a pre-trained traffic signal lamp identification model to acquire at least one alternative traffic signal lamp detection result;
the alternative traffic signal lamp detection result comprises the position, the outline and the display attribute information of the alternative traffic signal lamp in the image.
On the basis of the above embodiment, the target traffic signal acquisition module 320 includes:
a geographic position information acquisition unit, configured to determine a search area according to the vehicle position information, and acquire geographic position information of at least one traffic signal lamp in the search area in map data;
And the target traffic signal lamp acquisition unit is used for screening out at least one traffic signal lamp with the geographic position information and the vehicle position information meeting the distance association condition as the target traffic signal lamp.
On the basis of the above embodiment, the target traffic signal lamp labeling unit includes:
The point cloud data acquisition subunit is used for acquiring point cloud data;
And the target traffic signal lamp labeling subunit is used for labeling the target traffic signal lamp in the point cloud data according to the vehicle position information and the geographic position information of the target traffic signal lamp.
On the basis of the above embodiment, the target traffic signal mapping unit includes:
the vertex coordinate acquisition subunit is used for acquiring each vertex coordinate of each target traffic signal lamp in the point cloud data;
the vertex coordinate mapping subunit is used for mapping each vertex coordinate into the image according to the internal and external parameters of the vehicle-mounted camera of the vehicle to obtain pixel point coordinates corresponding to each vertex coordinate of each target traffic signal lamp;
And the mapping point coordinate acquisition subunit is used for calculating the average value of the pixel point coordinates of the adjacent pixel points to obtain the coordinates of each mapping point of each target traffic signal lamp in the image, and taking the coordinates of each mapping point as a traffic signal lamp mapping result.
On the basis of the above embodiment, the vertex coordinate mapping subunit is specifically configured to:
mapping each vertex coordinate P_w into the image by the formula P_uv = K(R·P_w + t) to obtain the pixel point coordinate P_uv corresponding to each vertex coordinate of each target traffic signal lamp;
wherein P_w = (x_i, y_i, z_i), i ∈ [1, N], N represents the number of vertices of a target traffic signal lamp, P_uv = (u, v), u and v represent the pixel coordinates, K represents the internal parameter matrix of the vehicle-mounted camera, R represents the rotation matrix of the vehicle-mounted camera, and t represents the translation vector of the vehicle-mounted camera;
the mapping point coordinate obtaining subunit is specifically configured to:
According to the pixel point coordinates P_uv, every two adjacent pixel points are grouped into pixel point pairs to obtain N/2 pixel point pairs;
calculating the average pixel coordinates of each pixel point pair to obtain the mapping point coordinates P_uvi of each target traffic signal lamp in the image;
wherein P_uvi = (u_i, v_i), i ∈ [1, M], and M represents the number of coordinates in the traffic signal lamp mapping result of one target traffic signal lamp in the image, M = N/2.
On the basis of the above embodiment, the target detection result acquisition unit includes:
The similarity calculation unit is used for calculating the similarity between the detection result of each alternative traffic signal lamp and the mapping result of each traffic signal lamp respectively;
the target detection result acquisition subunit is used for matching the alternative traffic signal lamp detection result with the traffic signal lamp mapping result according to the calculation result of the similarity and a preset matching algorithm, and taking the matched alternative traffic signal lamp detection result as a target detection result.
On the basis of the above embodiment, the similarity calculation unit is specifically configured to:
Acquiring a first vertex pixel coordinate set and a second vertex pixel coordinate set which correspond to the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result respectively;
Calculating at least one two-dimensional image attribute parameter corresponding to the first vertex pixel coordinate set and the second vertex pixel coordinate set respectively;
calculating at least one similarity calculation result between the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result according to the two-dimensional image attribute parameters;
And obtaining the similarity between the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result according to the at least one similarity calculation result.
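For illustration only, the sketch below shows one way the similarity calculation and matching steps could be realized, using the intersection over union of the bounding boxes spanned by the two vertex pixel coordinate sets as the two-dimensional image attribute parameter and a greedy assignment as the preset matching algorithm; neither choice is prescribed by the embodiment, so both are assumptions.

```python
import numpy as np

def box_from_points(points):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) spanned by a vertex pixel coordinate set."""
    pts = np.asarray(points, dtype=float)
    return pts[:, 0].min(), pts[:, 1].min(), pts[:, 0].max(), pts[:, 1].max()

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes, used here as the similarity score."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0.0 else 0.0

def greedy_match(detections, mappings, threshold=0.3):
    """Pair detection results with mapping results by descending similarity (assumed matching rule)."""
    scores = [(iou(box_from_points(d), box_from_points(m)), i, j)
              for i, d in enumerate(detections)
              for j, m in enumerate(mappings)]
    used_det, used_map, matches = set(), set(), []
    for score, i, j in sorted(scores, reverse=True):
        if score >= threshold and i not in used_det and j not in used_map:
            matches.append((i, j, score))
            used_det.add(i)
            used_map.add(j)
    return matches
```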
On the basis of the above embodiment, the target detection result acquisition unit includes:
And the traffic signal lamp state determining subunit is used for determining the traffic signal lamp state according to the target detection result and the control information of the target traffic signal lamp.
On the basis of the above embodiment, the traffic light status determination subunit is specifically configured to:
Acquiring color information of a traffic signal lamp in display attribute information of the target detection result;
And determining the state of the traffic signal lamp in front of the vehicle according to the color information and the control information of the target traffic signal lamp.
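The following minimal sketch illustrates how the traffic signal lamp state determining subunit could combine the detected color information with the control information of the target traffic signal lamp; the string encoding of colors and control information and the rule table are assumptions made for the example, not definitions from the embodiment.

```python
def determine_light_state(detected_color, lamp_control, intended_maneuver):
    """Combine detected color information with the lamp's control information (assumed string encoding)."""
    if lamp_control != intended_maneuver:
        return None  # this lamp does not govern the vehicle's planned maneuver
    color_to_state = {"green": "proceed", "yellow": "caution", "red": "stop"}
    return color_to_state.get(detected_color, "unknown")

# Example: a left-turn lamp showing red while the vehicle intends to turn left
print(determine_light_state("red", "left_turn", "left_turn"))  # -> "stop"
```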
The traffic signal lamp state recognition device provided by the embodiment of the invention can execute the traffic signal lamp state recognition method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. As shown in fig. 4, the computer device includes a processor 70, a memory 71, an input device 72 and an output device 73; the number of processors 70 in the computer device may be one or more, and one processor 70 is taken as an example in fig. 4; the processor 70, the memory 71, the input device 72 and the output device 73 in the computer device may be connected by a bus or other means, and connection by a bus is taken as an example in fig. 4.
The memory 71, as a computer readable storage medium, is used for storing software programs, computer executable programs and modules, such as the modules corresponding to the traffic light status recognition method in the embodiment of the present invention (for example, the alternative traffic light detection result acquisition module 310, the target traffic light acquisition module 320, the traffic light mapping result acquisition module 330 and the traffic light status determination module 340 in the traffic light status recognition device). The processor 70 executes various functional applications and data processing of the computer device by running the software programs, instructions and modules stored in the memory 71, that is, implements the traffic light status recognition method described above. The method comprises the following steps (an illustrative sketch stringing the steps together is given after the list):
determining at least one alternative traffic signal lamp detection result according to the image acquired by the vehicle;
determining a target traffic signal lamp matched with the vehicle position information from the alternative traffic signal lamps according to the map data;
according to the relative position relation between the target traffic signal lamp and the vehicle, mapping the target traffic signal lamp into the image to obtain a traffic signal lamp mapping result;
and determining the traffic signal lamp state according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result.
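Read together, the four steps above form a single recognition pipeline; the sketch below strings them together for illustration, where every helper (detector.detect, hd_map.lamps_matching, project_lamp, and the greedy_match and determine_light_state functions sketched earlier) is a placeholder standing in for the corresponding module rather than an interface defined by the embodiment.

```python
def recognize_traffic_light_state(image, point_cloud, vehicle_pose, hd_map,
                                  detector, camera_params):
    """Illustrative pipeline only; every helper below is a placeholder for a module of the device."""
    # Step 1: candidate traffic signal lamp detection results from the camera image
    candidate_detections = detector.detect(image)

    # Step 2: target traffic signal lamps matching the vehicle position, taken from the map data
    target_lamps = hd_map.lamps_matching(vehicle_pose)

    # Step 3: map each target lamp into the image using the camera's internal and external parameters
    mapping_results = [project_lamp(lamp, point_cloud, camera_params)
                       for lamp in target_lamps]

    # Step 4: match detections to mapping results, then read off the lamp state
    matches = greedy_match([d["pixels"] for d in candidate_detections], mapping_results)
    return [determine_light_state(candidate_detections[i]["color"],
                                  target_lamps[j].control,
                                  vehicle_pose.maneuver)
            for i, j, _ in matches]
```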
The memory 71 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for the functions; the data storage area may store data created according to the use of the terminal, etc. In addition, the memory 71 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 71 may further include memory remotely located relative to the processor 70, and such remote memory may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 72 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the computer device. The output means 73 may comprise a display device such as a display screen.
Example V
A fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a traffic light status identification method, the method comprising:
determining at least one alternative traffic signal lamp detection result according to the image acquired by the vehicle;
determining a target traffic signal lamp matched with the vehicle position information from the alternative traffic signal lamps according to the map data;
according to the relative position relation between the target traffic signal lamp and the vehicle, mapping the target traffic signal lamp into the image to obtain a traffic signal lamp mapping result;
and determining the traffic signal lamp state according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result.
Of course, in the storage medium containing computer-executable instructions provided in the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform the related operations in the traffic light status identification method provided in any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, etc., and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the embodiment of the traffic light status identifying device, each unit and module included are only divided according to the functional logic, but not limited to the above-mentioned division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (10)

1. A traffic light status recognition method, comprising:
determining at least one alternative traffic signal lamp detection result according to the image acquired by the vehicle;
determining a target traffic signal lamp matched with the vehicle position information from the alternative traffic signal lamps according to the map data;
according to the relative position relation between the target traffic signal lamp and the vehicle, mapping the target traffic signal lamp into the image to obtain a traffic signal lamp mapping result;
Determining the state of the traffic signal lamp according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result;
determining the traffic signal lamp state according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result, including:
obtaining a target detection result matched with the traffic signal lamp mapping result from the alternative traffic signal lamp detection results, and determining the traffic signal lamp state according to the target detection result;
and acquiring a target detection result matched with the traffic signal lamp mapping result from the alternative traffic signal lamp detection results, wherein the target detection result comprises:
respectively calculating the similarity between each alternative traffic signal lamp detection result and each traffic signal lamp mapping result;
Matching the alternative traffic signal lamp detection result with the traffic signal lamp mapping result according to the calculation result of the similarity and a preset matching algorithm, and taking the matched alternative traffic signal lamp detection result as a target detection result;
the method for calculating the similarity between the detection result of each alternative traffic signal lamp and the mapping result of each traffic signal lamp comprises the following steps:
respectively calculating a basic similarity between the position information of each alternative traffic signal lamp detection result and that of each traffic signal lamp mapping result;
when the position information of the alternative traffic signal lamp detection result and the traffic signal lamp mapping result is insufficient to describe the similarity of the two, respectively judging whether the control information of the alternative traffic signal lamp detection result is the same as the control information of the traffic signal lamp mapping result;
if the two are the same, applying a weighting coefficient w1 to the basic similarity to obtain the final similarity between the alternative traffic signal lamp detection result and the traffic signal lamp mapping result; wherein w1 is greater than 1;
if the control information of the two is different, applying a weighting coefficient w2 to the basic similarity to obtain the final similarity between the alternative traffic signal lamp detection result and the traffic signal lamp mapping result; wherein w2 is less than 1.
2. The method of claim 1, wherein mapping the target traffic signal into the image according to the relative positional relationship between the target traffic signal and the vehicle to obtain a traffic signal mapping result comprises:
Marking the target traffic signal lamp in point cloud data acquired by a vehicle in real time according to the vehicle position information and the geographic position information of the target traffic signal lamp;
and mapping the marked target traffic signal lamp into the image according to the internal and external parameters of the vehicle-mounted camera of the vehicle to obtain a traffic signal lamp mapping result.
3. The method of claim 1, wherein determining a target traffic signal from the candidate traffic signals that matches the vehicle location information based on map data comprises:
determining a search area according to the vehicle position information, and acquiring geographic position information of at least one traffic signal lamp in the search area from map data;
And screening at least one traffic signal lamp of which the geographic position information and the vehicle position information meet the distance association condition as the target traffic signal lamp.
4. The method of claim 2, wherein mapping the marked target traffic signal into the image according to the internal and external parameters of the vehicle-mounted camera of the vehicle to obtain a traffic signal mapping result comprises:
Acquiring each vertex coordinate of each target traffic signal lamp in the point cloud data;
Mapping each vertex coordinate into the image according to the internal and external parameters of the vehicle-mounted camera of the vehicle to obtain pixel point coordinates corresponding to each vertex coordinate of each target traffic signal lamp;
And calculating the average value of pixel point coordinates of adjacent pixel points to obtain the coordinates of each mapping point of each target traffic signal lamp in the image, and taking the coordinates of each mapping point as a traffic signal lamp mapping result.
5. The method of claim 4, wherein mapping each vertex coordinate into the image according to the internal and external parameters of the onboard camera of the vehicle to obtain a pixel coordinate corresponding to each vertex coordinate of each target traffic signal lamp, comprising:
by the following formula: P_uv = K(R·P_w + t), mapping each vertex coordinate P_w into the image to obtain the pixel point coordinate P_uv corresponding to each vertex coordinate of each target traffic signal lamp;
wherein P_w = (x_i, y_i, z_i), i ∈ [1, N], N represents the number of vertices of a target traffic signal lamp, and (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th vertex; P_uv = (u, v), where u and v represent the pixel coordinates; K represents the internal parameter matrix of the vehicle-mounted camera, R represents the rotation matrix of the vehicle-mounted camera, and t represents the translation vector of the vehicle-mounted camera;
calculating the average value of pixel point coordinates of adjacent pixel points to obtain the coordinates of each mapping point of each target traffic signal lamp in the image, wherein the method comprises the following steps:
according to the pixel point coordinates P_uv, every two adjacent pixel points are taken as a pixel point pair to obtain a plurality of pixel point pairs;
calculating the average value of the pixel point coordinates of each pixel point pair to obtain the mapping point coordinates P_uvi of each target traffic signal lamp in the image;
wherein P_uvi = (u_i, v_i), i ∈ [1, M], and M represents the number of mapping point coordinates in the traffic signal lamp mapping result of a target traffic signal lamp in the image.
6. The method of claim 1, wherein separately calculating the similarity between each alternative traffic signal detection result and each traffic signal mapping result comprises:
Acquiring a first vertex pixel coordinate set and a second vertex pixel coordinate set which correspond to the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result respectively;
Calculating at least one two-dimensional image attribute parameter corresponding to the first vertex pixel coordinate set and the second vertex pixel coordinate set respectively;
calculating at least one similarity calculation result between the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result according to the two-dimensional image attribute parameters;
And obtaining the similarity between the currently processed alternative traffic signal lamp detection result and the currently processed traffic signal lamp mapping result according to the at least one similarity calculation result.
7. The method of claim 1, wherein determining traffic light status based on the target detection result comprises:
Acquiring color information of a traffic signal lamp in display attribute information of the target detection result;
and determining the state of the traffic signal lamp according to the color information and the control information of the target traffic signal lamp.
8. A traffic light status recognition device, comprising:
The alternative traffic signal lamp detection result acquisition module is used for determining at least one alternative traffic signal lamp detection result according to the image acquired by the vehicle;
The target traffic signal lamp acquisition module is used for determining a target traffic signal lamp matched with the vehicle position information from the alternative traffic signal lamps according to the map data;
The traffic signal lamp mapping result acquisition module is used for mapping the target traffic signal lamp into the image according to the relative position relationship between the target traffic signal lamp and the vehicle to obtain a traffic signal lamp mapping result;
The traffic signal lamp state determining module is used for determining the traffic signal lamp state according to the traffic signal lamp mapping result and the alternative traffic signal lamp detection result;
A traffic light status determination module comprising:
The target detection result acquisition unit is used for acquiring a target detection result matched with the traffic signal lamp mapping result from the alternative traffic signal lamp detection results and determining the traffic signal lamp state according to the target detection result;
The target detection result acquisition unit includes:
The similarity calculation unit is used for calculating the similarity between the detection result of each alternative traffic signal lamp and the mapping result of each traffic signal lamp respectively;
The target detection result acquisition subunit is used for matching the alternative traffic signal lamp detection result with the traffic signal lamp mapping result according to the calculation result of the similarity and a preset matching algorithm, and taking the matched alternative traffic signal lamp detection result as a target detection result;
the similarity calculation unit is specifically configured to:
respectively calculating a basic similarity between the position information of each alternative traffic signal lamp detection result and that of each traffic signal lamp mapping result;
when the position information of the alternative traffic signal lamp detection result and the traffic signal lamp mapping result is insufficient to describe the similarity of the two, respectively judging whether the control information of the alternative traffic signal lamp detection result is the same as the control information of the traffic signal lamp mapping result;
if the two are the same, applying a weighting coefficient w1 to the basic similarity to obtain the final similarity between the alternative traffic signal lamp detection result and the traffic signal lamp mapping result; wherein w1 is greater than 1;
if the control information of the two is different, applying a weighting coefficient w2 to the basic similarity to obtain the final similarity between the alternative traffic signal lamp detection result and the traffic signal lamp mapping result; wherein w2 is less than 1.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the traffic light status identification method of any one of claims 1-7 when executing the program.
10. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the traffic light status identification method of any one of claims 1-7.
CN202010130208.0A 2020-02-14 2020-02-28 Traffic signal lamp state identification method, device, equipment and storage medium Active CN111310708B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010092710 2020-02-14
CN2020100927107 2020-02-14

Publications (2)

Publication Number Publication Date
CN111310708A (en) 2020-06-19
CN111310708B (en) 2024-05-14

Family

ID=71162046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010130208.0A Active CN111310708B (en) 2020-02-14 2020-02-28 Traffic signal lamp state identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111310708B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114078330A (en) * 2020-08-20 2022-02-22 华为技术有限公司 Traffic signal identification method and device
CN112101272B (en) * 2020-09-23 2024-05-14 阿波罗智联(北京)科技有限公司 Traffic light detection method, device, computer storage medium and road side equipment
CN112489466B (en) * 2020-11-27 2022-02-22 恒大新能源汽车投资控股集团有限公司 Traffic signal lamp identification method and device
CN113343872B (en) * 2021-06-17 2022-12-13 亿咖通(湖北)技术有限公司 Traffic light identification method, device, equipment, medium and product
CN115546755A (en) * 2021-06-30 2022-12-30 上海商汤临港智能科技有限公司 Traffic identification recognition method and device, electronic equipment and storage medium
CN113963331A (en) * 2021-10-29 2022-01-21 广州文远知行科技有限公司 Signal lamp identification model training method, signal lamp identification method and related device
CN114972824B (en) * 2022-06-24 2023-07-14 小米汽车科技有限公司 Rod detection method, device, vehicle and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017158664A1 (en) * 2016-03-18 2017-09-21 株式会社Jvcケンウッド Object recognition device, object recognition method, and object recognition program
CN108305475A (en) * 2017-03-06 2018-07-20 腾讯科技(深圳)有限公司 A kind of traffic lights recognition methods and device
CN110543814A (en) * 2019-07-22 2019-12-06 华为技术有限公司 Traffic light identification method and device
CN110705485A (en) * 2019-10-08 2020-01-17 东软睿驰汽车技术(沈阳)有限公司 Traffic signal lamp identification method and device

Also Published As

Publication number Publication date
CN111310708A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN111310708B (en) Traffic signal lamp state identification method, device, equipment and storage medium
US11094198B2 (en) Lane determination method, device and storage medium
CN111291676B (en) Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
JP5057183B2 (en) Reference data generation system and position positioning system for landscape matching
JP5168601B2 (en) Own vehicle position recognition system
WO2021227645A1 (en) Target detection method and device
JP2011215057A (en) Scene matching reference data generation system and position measurement system
JP6626410B2 (en) Vehicle position specifying device and vehicle position specifying method
TW201528220A (en) Apparatus and method for vehicle positioning
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
JP6278791B2 (en) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
JP5182594B2 (en) Image processing system
JP2008065087A (en) Apparatus for creating stationary object map
CN112507862A (en) Vehicle orientation detection method and system based on multitask convolutional neural network
CN114413881A (en) Method and device for constructing high-precision vector map and storage medium
JP2018077162A (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
US20220414917A1 (en) Method and apparatus for obtaining 3d information of vehicle
JP2017181476A (en) Vehicle location detection device, vehicle location detection method and vehicle location detection-purpose computer program
JP2021149863A (en) Object state identifying apparatus, object state identifying method, computer program for identifying object state, and control apparatus
CN112424568A (en) System and method for constructing high-definition map
CN111783597B (en) Method and device for calibrating driving trajectory, computer equipment and storage medium
CN116343165A (en) 3D target detection system, method, terminal equipment and storage medium
CN113611008B (en) Vehicle driving scene acquisition method, device, equipment and medium
JP5177579B2 (en) Image processing system and positioning system
JP5557036B2 (en) Exit determination device, exit determination program, and exit determination method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant