CN115985113A - Traffic signal lamp control method and electronic equipment - Google Patents

Traffic signal lamp control method and electronic equipment

Info

Publication number
CN115985113A
CN115985113A (application CN202211561343.6A)
Authority
CN
China
Prior art keywords
target
sensor data
track
information
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211561343.6A
Other languages
Chinese (zh)
Other versions
CN115985113B (en)
Inventor
Lin Xiao (林潇)
Dai Xuerui (戴雪瑞)
Li Zhi (李智)
Zhou Hao (周浩)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wanji Technology Co Ltd
Original Assignee
Beijing Wanji Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wanji Technology Co Ltd filed Critical Beijing Wanji Technology Co Ltd
Priority to CN202211561343.6A priority Critical patent/CN115985113B/en
Publication of CN115985113A publication Critical patent/CN115985113A/en
Application granted granted Critical
Publication of CN115985113B publication Critical patent/CN115985113B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00 — Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40 — Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application provides a traffic signal lamp control method and electronic equipment, relating to the technical field of road traffic. The method comprises: acquiring first sensor data and second sensor data; fusing the first sensor data and the second sensor data to obtain third sensor data; determining track information of each target in the third sensor data according to the information of each target in the third sensor data and the information of each target in fourth sensor data; determining, according to the track information and a high-precision map, the predicted track of each target in the third sensor data and the traffic state information of each lane at the intersection corresponding to the target traffic signal lamp; determining the predicted duration of the traffic signal lamp according to the track information, the predicted tracks and the traffic state information; and controlling the traffic signal lamp according to the predicted duration. The technical scheme provided by the application can relieve traffic congestion.

Description

Traffic signal lamp control method and electronic equipment
Technical Field
The application relates to the technical field of road traffic, in particular to a traffic signal lamp control method and electronic equipment.
Background
With the development of society, the problem of road traffic congestion has become increasingly prominent. At present, road traffic is mainly regulated by traffic signal lamps whose on and off durations for each color are fixed.
However, such schemes cannot adapt well to actual traffic operation requirements; for example, at a congested intersection during the morning and evening rush hours, fixed timing often leads to traffic congestion.
Disclosure of Invention
In view of this, the present application provides a traffic signal lamp control method and an electronic device, so as to adapt to actual traffic operation requirements and alleviate traffic congestion.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a traffic signal lamp control method, including:
acquiring first sensor data and second sensor data;
fusing the first sensor data and the second sensor data to obtain third sensor data, wherein the third sensor data comprises information of each fused target;
determining track information of each target in the third sensor data according to information of each target in the third sensor data and information of each target in fourth sensor data, wherein the fourth sensor data is obtained by fusing point cloud data acquired by the first sensor and the second sensor in the previous frame;
according to the track information and the high-precision map, determining the predicted track of each target in the third sensor data and the traffic state information of each lane of the intersection corresponding to the target traffic signal lamp;
determining the prediction duration of the target traffic signal lamp according to the track information, the prediction track and the traffic state information;
and controlling the target traffic signal lamp according to the predicted duration.
As an optional implementation manner of the embodiment of the present application, the first sensor data includes information of each target acquired by the millimeter wave radar at the current frame, and the second sensor data includes information of each target acquired by the laser radar at the current frame.
As an optional implementation manner of the embodiment of the present application, the first sensor data includes an identifier, a type, coordinates, length information, a speed, a heading angle, and a confidence level of each target;
acquiring the second sensor data, comprising:
acquiring original point cloud data of the laser radar, wherein the original point cloud data comprises coordinates of each target collected by the laser radar in a current frame;
and carrying out target detection on the original point cloud data to obtain second sensor data, wherein the second sensor data comprises identification, type, coordinates, overall dimension, speed, course angle and confidence coefficient of each target.
As an optional implementation manner of the embodiment of the present application, the method further includes:
converting the coordinates of each target in the first sensor data to a geodetic coordinate system according to a calibration algorithm between a millimeter wave radar coordinate system and the geodetic coordinate system;
and converting the coordinates of each target in the second sensor data into a geodetic coordinate system according to a calibration algorithm between the laser radar coordinate system and the geodetic coordinate system.
As an optional implementation manner of the embodiment of the present application, the fusing the first sensor data and the second sensor data to obtain third sensor data includes:
performing first association matching on each target in the first sensor data and each target in the second sensor data respectively;
performing second association matching on the objects which are not successfully matched in the first sensor data and the second sensor data, wherein the matching modes of the first association matching and the second association matching are different;
fusing the successfully matched targets in the first sensor data and the second sensor data to obtain a fused target;
and determining the third sensor data according to the fused target and the targets in the first sensor data and the second sensor data that were not successfully matched in either of the two matchings.
As an optional implementation manner of this embodiment of the present application, the performing first association matching on each target in the first sensor data and each target in the second sensor data includes:
calculating distances between each target in the first sensor data and each target in the second sensor data in the same coordinate system respectively;
if the distance between a first target in the first sensor data and a second target in the second sensor data is smaller than or equal to a first threshold, determining that the first target and the second target are successfully matched;
and if the distance between the first target and the second target is greater than the first threshold, determining that the first target and the second target fail to be matched.
As an optional implementation manner of this embodiment of the present application, performing a second association matching on an object that is not successfully matched in the first sensor data and the second sensor data includes:
determining a first historical track of each target which is not successfully matched for the first time in the first sensor data and a second historical track of each target which is not successfully matched for the first time in the second sensor data;
respectively determining an average Euclidean distance and a cosine distance between each first historical track and each second historical track;
if the sum of the mean Euclidean distance and the cosine distance between a first target historical track in each first historical track and a second target historical track in each second historical track is smaller than or equal to a second threshold value, the target corresponding to the first target historical track and the target corresponding to the second target historical track are successfully matched;
and if the sum of the mean Euclidean distance and the cosine distance between the first target historical track and the second target historical track is larger than a second threshold value, the matching of the target corresponding to the first target historical track and the target corresponding to the second target historical track fails.
As an optional implementation manner of the embodiment of the present application, the determining, according to the trajectory information and the high-precision map, the predicted trajectory of each target in the third sensor data and the traffic state information of each lane at an intersection corresponding to a target traffic signal lamp includes:
determining a predicted track of each target in the third sensor data according to the track information;
and determining the traffic state information of each lane of the intersection corresponding to the target traffic signal lamp according to the track information and the high-precision map.
In a second aspect, an embodiment of the present application provides a traffic signal lamp control device, including:
the acquisition module is used for acquiring first sensor data and second sensor data;
the fusion module is used for fusing the first sensor data and the second sensor data to obtain third sensor data, and the third sensor data comprises fused information of each target;
the determining module is used for determining track information of each target in the third sensor data according to information of each target in the third sensor data and information of each target in fourth sensor data, and the fourth sensor data is obtained by fusing point cloud data acquired by a first sensor and a second sensor in a previous frame;
according to the track information and the high-precision map, determining the predicted track of each target in the third sensor data and the traffic state information of each lane of the intersection corresponding to the target traffic signal lamp;
determining the prediction duration of the traffic signal lamp according to the track information, the prediction track and the traffic state information;
and the control module is used for controlling the traffic signal lamp according to the predicted duration.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory for storing a computer program and a processor; the processor is configured to perform the method according to the first aspect or any one of the first aspects as described above when the computer program is invoked.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method according to the first aspect or any embodiment of the first aspect.
According to the traffic signal lamp control scheme provided by the embodiments of the application, first sensor data and second sensor data are acquired; the first sensor data and the second sensor data are then fused to obtain third sensor data (including the fused information of each target); track information of each target in the third sensor data is determined according to the information of each target in the third sensor data and the information of each target in fourth sensor data, the fourth sensor data being obtained by fusing the point cloud data acquired by the first sensor and the second sensor in the previous frame; the predicted track of each target in the third sensor data and the traffic state information of each lane at the intersection corresponding to the target traffic signal lamp are then determined according to the track information and the high-precision map; the predicted duration of the target traffic signal lamp is determined according to the track information, the predicted tracks and the traffic state information; and the target traffic signal lamp is controlled according to the predicted duration. In this technical scheme, the information of each target is obtained by fusing the first sensor data and the second sensor data, which reduces the redundancy and errors that arise when a single sensor acquires target information and improves the robustness and accuracy of target information acquisition. In addition, because the predicted track of each target and the per-lane traffic state information are determined from the track information together with the high-precision map, the acquired traffic state information is more accurate than information obtained from the sensors alone. Furthermore, the predicted duration of the target traffic signal lamp is determined according to the track information, the predicted tracks and the traffic state information, so the signal timing can match the predicted traffic state; controlling the target traffic signal lamp according to this predicted duration makes its timing more intelligent, adapts it to actual traffic operation requirements, and thereby relieves traffic congestion.
Drawings
Fig. 1 is a schematic flowchart of a traffic signal lamp control method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of acquisition frequencies of a millimeter wave radar and a laser radar provided in the embodiment of the present application;
fig. 3 is a schematic flowchart of a fusion method of first sensor data and second sensor data provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a traffic signal lamp control device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below with reference to the drawings. The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments herein only and is not intended to be limiting of the application. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a schematic flow chart of a traffic light control method provided in an embodiment of the present application, and as shown in fig. 1, the traffic light control method provided in the embodiment of the present application may include the following steps:
and S110, acquiring first sensor data and second sensor data.
The first sensor data may be collected by a first sensor in combination with a camera; the first sensor may be a sensor such as a millimeter wave radar. The second sensor data may be collected by a second sensor in combination with a camera; the second sensor may be a sensor such as a laser radar, and the first sensor is different from the second sensor. In the following description, the first sensor is a millimeter wave radar and the second sensor is a laser radar.
Specifically, target-level point cloud data (i.e., first sensor data) acquired by the millimeter wave radar at the current frame and original point cloud data acquired by the laser radar at the current frame may be received in real time.
The target-level point cloud data may include identification, type, coordinates, length information, speed, heading angle, confidence, and other parameters for each target.
The raw point cloud data may include the coordinates of each target. After the original point cloud data is obtained, the original point cloud data can be further subjected to target detection to obtain more target information, namely second sensor data.
Specifically, the SECOND (Sparsely Embedded Convolutional Detection) point cloud detection algorithm may be adopted to detect targets in the original point cloud data, obtaining information such as the identification, type, coordinates, overall dimensions (length, width and height), speed, heading angle and confidence of each target, where the target types may include automobile, bus, truck, bicycle, motorcycle, pedestrian and the like.
It can be understood that, because the acquisition frequencies of the millimeter wave radar and the laser radar may differ, time synchronization may take the sensor with the lower frequency as the time reference, using the system timestamp at which data is received from each sensor as the synchronization time. Fig. 2 is a schematic diagram of the acquisition frequencies of the millimeter wave radar and the laser radar provided in the embodiment of the present application. As shown in Fig. 2, the acquisition period of the laser radar is 50 ms and that of the millimeter wave radar is 25 ms, so the laser radar is taken as the reference and the millimeter wave radar frames that do not align with a laser radar frame are discarded.
Because even the millimeter wave radar and laser radar frames corresponding to the same reference instant may differ in time, the time difference between the two received frames may first be checked when the data are received; if the difference exceeds a set value (for example, 10 ms), both frames may be discarded. In this way, the retained millimeter wave radar and laser radar frames for the same reference instant can be regarded as approximately simultaneous, which improves the accuracy of acquisition.
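For illustration only, this synchronization rule can be sketched as follows in Python; the 10 ms set value comes from the example above, while the frame structure and the "timestamp_ms" field name are hypothetical:

```python
# A minimal sketch of the frame-pairing rule above, assuming frames arrive as
# dicts carrying a hypothetical "timestamp_ms" field (the system receive time).
SYNC_THRESHOLD_MS = 10  # the example set value from the text

def pair_frames(lidar_frame, radar_frame):
    """Pair a lidar frame (the lower-frequency time reference) with a radar
    frame; discard both if their receive times differ by more than the set
    value, otherwise treat them as acquired approximately simultaneously."""
    dt = abs(lidar_frame["timestamp_ms"] - radar_frame["timestamp_ms"])
    if dt > SYNC_THRESHOLD_MS:
        return None  # discard both frames, per the scheme described above
    return {"timestamp_ms": lidar_frame["timestamp_ms"],
            "lidar": lidar_frame, "radar": radar_frame}
```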
S120: converting the first sensor data and the second sensor data into the same coordinate system.
Before the first sensor data and the second sensor data are fused, they may be converted into the same coordinate system, for example a millimeter wave radar coordinate system, a laser radar coordinate system or a geodetic coordinate system. This embodiment is described in detail taking the case where the first sensor data and the second sensor data are both converted into the geodetic coordinate system.
Specifically, rigid transformation from the millimeter wave radar coordinate system to the geodetic coordinate system can be obtained according to a calibration algorithm between the millimeter wave radar coordinate system and the geodetic coordinate system, and then coordinates of each target in the first sensor data are converted into the geodetic coordinate system; and according to a calibration algorithm between the laser radar coordinate system and the geodetic coordinate system, rigid transformation from the laser radar coordinate system to the geodetic coordinate system is obtained, and then the coordinates of each target in the second sensor data are converted into the geodetic coordinate system. In this way, the first sensor data and the second sensor data are unified to the geodetic coordinate system for subsequent fusion of the first sensor data and the second sensor data.
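For illustration, this conversion step can be sketched as follows; the rotation R and translation t are assumed to be supplied by the calibration algorithm mentioned above, and the numeric values in the usage example are arbitrary:

```python
import numpy as np

def to_geodetic(points_sensor: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply a calibrated rigid transform to (N, 3) sensor-frame coordinates:
    p' = R @ p + t, returning (N, 3) geodetic-frame coordinates."""
    return points_sensor @ R.T + t

# Illustrative values only: identity rotation, 5 m offset along x.
R = np.eye(3)
t = np.array([5.0, 0.0, 0.0])
print(to_geodetic(np.array([[1.0, 2.0, 0.0]]), R, t))  # -> [[6. 2. 0.]]
```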
S130: fusing the first sensor data and the second sensor data to obtain third sensor data.
Fig. 3 is a schematic flowchart of a fusion method of first sensor data and second sensor data provided in an embodiment of the present application, and as shown in fig. 3, the fusion method of first sensor data and second sensor data provided in an embodiment of the present application may include the following steps:
and S131, performing first association matching on each target in the first sensor data and each target in the second sensor data.
Specifically, the distances between the targets in the first sensor data and the targets in the second sensor data may first be calculated in the same coordinate system.
The first sensor data comprises the length information of each target; the transverse and longitudinal distances of each target relative to the millimeter wave radar can be obtained by the millimeter wave radar, and the width of each target can be set according to its type, so that a rectangular frame corresponding to each target in the first sensor data can be obtained. The rectangular frame corresponding to each target in the second sensor data can be obtained directly from the SECOND point cloud detection algorithm.
The IOU distance between the rectangular frame of each target in the first sensor data and the rectangular frame of each target in the second sensor data may then be calculated. For example, with a targets in the first sensor data and b targets in the second sensor data, the IOU distances between the rectangular frames are dij (i = 1, 2, …, a; j = 1, 2, …, b).
Then, dij can be used as a cost matrix for maximum matching with the Hungarian algorithm, after which each matched distance is checked against the IOU threshold s_iou (namely the first threshold, which lies between 0 and 1 and can be adjusted according to the specific situation). If the distance between a first target in the first sensor data and a second target in the second sensor data is less than or equal to the IOU threshold, it is determined that the first target and the second target are successfully matched; if the distance between them is greater than the IOU threshold, it is determined that the matching fails.
For example, d23 denotes the IOU distance between the 2nd target in the first sensor data and the 3rd target in the second sensor data: if d23 ≤ s_iou, the 2nd target in the first sensor data and the 3rd target in the second sensor data are successfully matched; if d23 > s_iou, the matching fails.
A successful match indicates that the 2nd target in the first sensor data and the 3rd target in the second sensor data are the same physical target; a failed match indicates that they are two independent targets.
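A minimal sketch of this first association round is given below, taking dij = 1 − IoU (an assumption — the text only calls dij an IOU distance in [0, 1]) and using SciPy's Hungarian solver; the box layout and the default threshold value are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes (x_min, y_min, x_max, y_max) in a common frame."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def first_association(boxes_radar, boxes_lidar, s_iou=0.5):
    """Hungarian matching on the IOU-distance cost matrix, gated by s_iou.
    Returns (matched index pairs, unmatched radar indices, unmatched lidar indices)."""
    cost = np.array([[1.0 - iou(a, b) for b in boxes_lidar]
                     for a in boxes_radar]).reshape(len(boxes_radar), len(boxes_lidar))
    rows, cols = linear_sum_assignment(cost)       # optimal assignment
    matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= s_iou]
    matched_r = {i for i, _ in matches}
    matched_l = {j for _, j in matches}
    unmatched_r = [i for i in range(len(boxes_radar)) if i not in matched_r]
    unmatched_l = [j for j in range(len(boxes_lidar)) if j not in matched_l]
    return matches, unmatched_r, unmatched_l
```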
S132: performing second association matching on the targets that were not successfully matched in the first sensor data and the second sensor data.
To increase the accuracy of the fusion, a second association matching may be performed on the targets that were not successfully matched in the first sensor data and the second sensor data.
First, a first historical track may be determined for each target in the first sensor data that was not matched in the first round, and a second historical track for each target in the second sensor data that was not matched in the first round. The average Euclidean distance and the cosine distance between each first historical track and each second historical track are then determined, and a weighted sum of the two distances is computed for each pair, giving the track distances wij (i = 1, 2, …, u; j = 1, 2, …, v), where u is the number of first historical tracks and v is the number of second historical tracks.
Then, wij can be used as a cost matrix for maximum matching with the Hungarian algorithm, after which each matched distance is checked against the distance threshold s_t (namely the second threshold, which lies between 0 and 1 and can be adjusted according to the specific situation). If wij ≤ s_t, the target corresponding to the first target historical track and the target corresponding to the second target historical track are successfully matched; if wij > s_t, the matching fails.
For example, w23 denotes the track distance between the 2nd historical track in the first sensor data and the 3rd historical track in the second sensor data: if w23 ≤ s_t, the targets corresponding to these two tracks are successfully matched; if w23 > s_t, the matching fails.
A successful match indicates that the target corresponding to the 2nd historical track in the first sensor data and the target corresponding to the 3rd historical track in the second sensor data are the same physical target; a failed match indicates that they are two independent targets.
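A sketch of the track distance used in this second round follows; the equal weights and the comparison over the overlapping tail of the two tracks are assumptions, since the text only specifies a weighted sum of the mean Euclidean distance and the cosine distance:

```python
import numpy as np

def track_distance(track_a: np.ndarray, track_b: np.ndarray,
                   w_euc: float = 0.5, w_cos: float = 0.5) -> float:
    """Weighted sum of the mean point-wise Euclidean distance and the cosine
    distance between two (T, 2) ground-plane tracks, compared over their
    common tail. Note: the Euclidean term is in meters, so in practice it
    would need normalizing before checking against a threshold in (0, 1)."""
    n = min(len(track_a), len(track_b))
    a, b = track_a[-n:], track_b[-n:]
    mean_euc = np.linalg.norm(a - b, axis=1).mean()
    va, vb = a[-1] - a[0], b[-1] - b[0]   # overall motion direction of each track
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    cos_dist = (1.0 - (va @ vb) / denom) if denom > 0 else 1.0
    return w_euc * mean_euc + w_cos * cos_dist
```

As in the first round, the resulting wij matrix would then be fed to the Hungarian algorithm and gated by the threshold s_t.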
S133: fusing the successfully matched targets in the first sensor data and the second sensor data to obtain a fused target.
Specifically, targets that were successfully matched in either the first or the second round may be fused to obtain fused targets. During fusion, for the same parameter, the average of the values from the first sensor data and the second sensor data may be used, or the value from the sensor with the higher accuracy for that parameter may be selected. For example, suppose the speed of target A in the first sensor data is a, target A' in the second sensor data is successfully matched with target A, and the speed of target A' is b; the speed of the fused target A (i.e., target A') may then be (a + b)/2, or simply a, since the speed data acquired by the millimeter wave radar is relatively more accurate.
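The fusion rule for a single matched pair might be sketched as follows; the field names are hypothetical, and the choice of which parameters to average versus take from one sensor follows the example above:

```python
def fuse_targets(radar_tgt: dict, lidar_tgt: dict) -> dict:
    """Fuse one matched radar/lidar target pair: average the shared position
    parameters, prefer the radar's speed (stated above to be relatively more
    accurate), and keep the higher of the two confidences."""
    fused = dict(lidar_tgt)                        # start from the lidar fields
    fused["x"] = (radar_tgt["x"] + lidar_tgt["x"]) / 2.0
    fused["y"] = (radar_tgt["y"] + lidar_tgt["y"]) / 2.0
    fused["speed"] = radar_tgt["speed"]
    fused["confidence"] = max(radar_tgt["confidence"], lidar_tgt["confidence"])
    return fused
```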
S134: determining the third sensor data according to the fused targets and the targets that were not successfully matched in either round.
Targets that were not successfully matched in either of the two rounds are regarded as independent targets. Since the set of fused targets can be dimension-expanded, these twice-unmatched targets are added to the fused targets to obtain the third sensor data, which thus includes the information of each post-fusion target (namely the fused targets plus the targets that were not matched in either round).
S140: determining track information of each target in the third sensor data according to the information of each target in the third sensor data and the information of each target in the fourth sensor data.
The fourth sensor data is obtained by fusing point cloud data acquired by the first sensor and the second sensor in the previous frame.
Specifically, the SORT (Simple Online and Realtime Tracking) algorithm may be adopted to track each target in the third sensor data, determining the target in the fourth sensor data that corresponds to each target in the third sensor data and thereby determining the track information of each target in the third sensor data.
For example, if target B in the third sensor data corresponds to target B' in the fourth sensor data, then target B in the current frame and target B' in the previous frame are the same target, and the track information of target B' is updated with the information of target B, yielding the track information of target B.
It can be understood that if target B has no corresponding target in the fourth sensor data, target B has been acquired by the millimeter wave radar and/or the laser radar for the first time, that is, target B is a new target.
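The bookkeeping of S140 might be sketched as follows; the SORT matching itself is elided, and `matches` (mapping a current-frame target id to its previous-frame track id) is a hypothetical interface:

```python
def update_tracks(tracks: dict, current_targets: dict, matches: dict) -> dict:
    """tracks: track id -> list of per-frame target dicts.
    Matched targets extend the corresponding existing track (the same physical
    target, as for B/B' above); unmatched targets start new tracks."""
    next_id = max(tracks, default=0) + 1
    for tgt_id, tgt in current_targets.items():
        if tgt_id in matches:
            tracks[matches[tgt_id]].append(tgt)    # continue an existing track
        else:
            tracks[next_id] = [tgt]                # first observation: new track
            next_id += 1
    return tracks
```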
S150: determining, according to the track information and the high-precision map, the predicted track of each target in the third sensor data and the traffic state information of each lane at the intersection corresponding to the target traffic signal lamp.
Specifically, the predicted track of each target in the third sensor data may be determined from its track information using a deep-learning-based trajectory prediction method (e.g., GRIP++).
The track information of each target in the third sensor data is then analyzed against the high-precision map to determine the traffic state information of each lane at the intersection corresponding to the target traffic signal lamp. The traffic state information may include the average queuing length, average time occupancy, traffic density and average speed of each lane.
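A sketch of deriving such per-lane statistics is given below; the lane-lookup function `lane_of` (standing in for the high-precision-map query), the stopped-speed threshold, and the exact formulas are assumptions, since the text lists the outputs without fixing how they are computed:

```python
def lane_state(tracks, lane_of, lane_length_m, stop_speed=0.5):
    """Group the latest observation of each track into lanes via the map
    lookup, then compute simple per-lane statistics."""
    lanes = {}
    for track in tracks.values():
        tgt = track[-1]                            # most recent observation
        lanes.setdefault(lane_of(tgt["x"], tgt["y"]), []).append(tgt)
    state = {}
    for lane, tgts in lanes.items():
        speeds = [t["speed"] for t in tgts]
        queued = [t for t in tgts if t["speed"] < stop_speed]
        state[lane] = {
            "avg_speed": sum(speeds) / len(speeds),
            "density": len(tgts) / lane_length_m,                    # vehicles/m
            "queue_len": sum(t.get("length", 5.0) for t in queued),  # meters
        }
    return state
```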
S160: determining the predicted duration of the target traffic signal lamp according to the track information, the predicted tracks and the traffic state information.
Specifically, a deep learning method can be adopted to predict an optimized duration for each traffic signal lamp: several frames of historical tracks, several frames of predicted tracks and the traffic state information are input into the prediction model each time, and the model outputs the duration of each red light and each green light at the next change of the traffic signal.
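For illustration, the inference step might look like the sketch below; the model, the flattened input layout, and the `predict` call are all hypothetical, since the text does not fix an architecture or interface:

```python
import numpy as np

def predict_durations(model, history_frames, predicted_frames, lane_state_vec):
    """history_frames / predicted_frames: (F, N, D) arrays of target features
    over F frames; lane_state_vec: flat per-lane traffic-state features.
    Returns assumed per-phase red/green durations in seconds."""
    x = np.concatenate([history_frames.ravel(),
                        predicted_frames.ravel(),
                        lane_state_vec.ravel()])
    durations = model.predict(x[None, :])[0]   # e.g., [red_s, green_s]
    return {"red_s": float(durations[0]), "green_s": float(durations[1])}
```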
S170: controlling the target traffic signal lamp according to the predicted duration.
Specifically, the corresponding red and green lights may be controlled according to the predicted duration of each red and green light at the next traffic signal change.
According to the traffic signal lamp control scheme provided by the embodiments of the application, first sensor data and second sensor data are acquired; the first sensor data and the second sensor data are then fused to obtain third sensor data (including the fused information of each target); track information of each target in the third sensor data is determined according to the information of each target in the third sensor data and the information of each target in fourth sensor data, the fourth sensor data being obtained by fusing the point cloud data acquired by the first sensor and the second sensor in the previous frame; the predicted track of each target in the third sensor data and the traffic state information of each lane at the intersection corresponding to the target traffic signal lamp are then determined according to the track information and the high-precision map; the predicted duration of the target traffic signal lamp is determined according to the track information, the predicted tracks and the traffic state information; and the target traffic signal lamp is controlled according to the predicted duration. In this technical scheme, the information of each target is obtained by fusing the first sensor data and the second sensor data, which reduces the redundancy and errors that arise when a single sensor acquires target information and improves the robustness and accuracy of target information acquisition. In addition, because the predicted track of each target and the per-lane traffic state information are determined from the track information together with the high-precision map, the acquired traffic state information is more accurate than information obtained from the sensors alone. Furthermore, the predicted duration of the target traffic signal lamp is determined according to the track information, the predicted tracks and the traffic state information, so the signal timing can match the predicted traffic state; controlling the target traffic signal lamp according to this predicted duration makes its timing more intelligent, adapts it to actual traffic operation requirements, and thereby relieves traffic congestion.
Based on the same inventive concept, as an implementation of the foregoing method, an embodiment of the present application provides a traffic signal lamp control apparatus, where the apparatus embodiment corresponds to the foregoing method embodiment, and for convenience of reading, details in the foregoing method embodiment are not repeated in this apparatus embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents in the foregoing method embodiment.
Fig. 4 is a schematic structural diagram of a traffic signal lamp control device provided in an embodiment of the present application. As shown in Fig. 4, the traffic signal lamp control device provided in this embodiment may include an obtaining module 11, a fusing module 12, a determining module 13 and a control module 14, wherein:
the acquisition module 11 is configured to acquire first sensor data and second sensor data;
the fusion module 12 is configured to fuse the first sensor data and the second sensor data to obtain third sensor data, where the third sensor data includes information of each fused target;
the determining module 13 is configured to determine track information of each target in the third sensor data according to information of each target in the third sensor data and information of each target in fourth sensor data, where the fourth sensor data is obtained by fusing point cloud data acquired by the first sensor and the second sensor in a previous frame;
according to the track information and the high-precision map, determining the predicted track of each target in the third sensor data and the traffic state information of each lane of the intersection corresponding to the target traffic signal lamp;
determining the prediction duration of the traffic signal lamp according to the track information, the prediction track and the traffic state information;
the control module 14 is configured to control the traffic light according to the predicted duration.
As an optional implementation manner, the first sensor data includes information of each target acquired by the millimeter wave radar at the current frame, and the second sensor data includes information of each target acquired by the laser radar at the current frame.
As an alternative embodiment, the first sensor data includes identification, type, coordinates, length information, speed, heading angle, and confidence level of each target;
the obtaining module 11 is specifically configured to:
acquiring original point cloud data of the laser radar, wherein the original point cloud data comprises coordinates of all targets acquired by the laser radar in a current frame;
and carrying out target detection on the original point cloud data to obtain second sensor data, wherein the second sensor data comprises identification, type, coordinates, overall dimension, speed, course angle and confidence coefficient of each target.
As an optional implementation manner, the traffic signal light control device further includes a conversion module 15, where the conversion module 15 is specifically configured to:
converting the coordinates of each target in the first sensor data to a geodetic coordinate system according to a calibration algorithm between a millimeter wave radar coordinate system and the geodetic coordinate system;
and converting the coordinates of each target in the second sensor data to a geodetic coordinate system according to a calibration algorithm between the laser radar coordinate system and the geodetic coordinate system.
As an optional implementation, the fusion module 12 is specifically configured to: performing first association matching on each target in the first sensor data and each target in the second sensor data respectively;
performing second association matching on the objects which are not successfully matched in the first sensor data and the second sensor data, wherein the matching modes of the first association matching and the second association matching are different;
fusing the successfully matched targets in the first sensor data and the second sensor data to obtain a fused target;
and determining the third sensor data according to the fused target and the targets in the first sensor data and the second sensor data that were not successfully matched in either of the two matchings.
As an optional implementation, the fusion module 12 is specifically configured to:
calculating distances between each target in the first sensor data and each target in the second sensor data in the same coordinate system respectively;
if the distance between a first target in the first sensor data and a second target in the second sensor data is smaller than or equal to a first threshold, determining that the first target and the second target are successfully matched;
and if the distance between the first target and the second target is greater than the first threshold, determining that the first target and the second target fail to be matched.
As an optional implementation, the fusion module 12 is specifically configured to:
determining a first historical track of each target which is not successfully matched for the first time in the first sensor data and a second historical track of each target which is not successfully matched for the first time in the second sensor data;
respectively determining an average Euclidean distance and a cosine distance between each first historical track and each second historical track;
if the sum of the mean Euclidean distance and the cosine distance between a first target historical track in each first historical track and a second target historical track in each second historical track is smaller than or equal to a second threshold value, the target corresponding to the first target historical track and the target corresponding to the second target historical track are successfully matched;
and if the sum of the mean Euclidean distance and the cosine distance between the first target historical track and the second target historical track is larger than a second threshold value, the matching of the target corresponding to the first target historical track and the target corresponding to the second target historical track fails.
As an optional implementation manner, the determining module 13 is specifically configured to: determining a predicted track of each target in the third sensor data according to the track information;
and determining the traffic state information of each lane of the intersection corresponding to the target traffic signal lamp according to the track information and the high-precision map.
Based on the same inventive concept, the embodiment of the application also provides the electronic equipment. Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 5, the electronic device according to the embodiment includes: a memory 210 and a processor 220, the memory 210 for storing computer programs; the processor 220 is adapted to perform the method according to the above-described method embodiments when invoking the computer program.
The electronic device provided in this embodiment may perform the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method described in the foregoing method embodiments.
An embodiment of the present application further provides a computer program product which, when run on an electronic device, enables the electronic device to implement the method described in the above method embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the application are generated in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Those skilled in the art can understand that all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer readable storage medium and can include the processes of the method embodiments described above when executed. And the aforementioned storage medium may include: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.
The naming or numbering of the steps appearing in the present application does not mean that the steps in the method flow have to be executed in the chronological/logical order indicated by the naming or numbering, and the named or numbered process steps may be executed in a modified order depending on the technical purpose to be achieved, as long as the same or similar technical effects are achieved.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In the description of the present application, a "/" indicates a relationship in which the objects associated before and after are an "or", for example, a/B may indicate a or B; in the present application, "and/or" is only an association relationship describing an association object, and means that there may be three relationships, for example, a and/or B, and may mean: a exists singly, A and B exist simultaneously, and B exists singly, wherein A and B can be singular or plural.
Also, in the description of the present application, "a plurality" means two or more than two unless otherwise specified. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein.
Reference throughout this specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment", "in some embodiments", "in another embodiment" and the like in various places throughout this specification do not necessarily all refer to the same embodiment, but mean "one or more, but not all, embodiments" unless specifically emphasized otherwise.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A traffic signal light control method, comprising:
acquiring first sensor data and second sensor data;
fusing the first sensor data and the second sensor data to obtain third sensor data, wherein the third sensor data comprises information of each fused target;
determining track information of each target in the third sensor data according to information of each target in the third sensor data and information of each target in fourth sensor data, wherein the fourth sensor data is obtained by fusing point cloud data acquired by a first sensor and a second sensor in a previous frame;
according to the track information and the high-precision map, determining the predicted track of each target in the third sensor data and the traffic state information of each lane of the intersection corresponding to the target traffic signal lamp;
determining the prediction duration of the target traffic signal lamp according to the track information, the prediction track and the traffic state information;
and controlling the target traffic signal lamp according to the predicted duration.
2. The method of claim 1, wherein the first sensor data comprises information about targets collected by the millimeter wave radar in a current frame, and the second sensor data comprises information about targets collected by the laser radar in the current frame.
3. The method of claim 2, wherein the first sensor data includes an identification, type, coordinates, length information, speed, heading angle, and confidence level of each target;
acquiring the second sensor data, comprising:
acquiring original point cloud data of the laser radar, wherein the original point cloud data comprises coordinates of each target collected by the laser radar in a current frame;
and carrying out target detection on the original point cloud data to obtain second sensor data, wherein the second sensor data comprises identification, type, coordinates, overall dimension, speed, course angle and confidence coefficient of each target.
4. The method of claim 2, further comprising:
converting the coordinates of each target in the first sensor data to a geodetic coordinate system according to a calibration algorithm between a millimeter wave radar coordinate system and the geodetic coordinate system;
and converting the coordinates of each target in the second sensor data to a geodetic coordinate system according to a calibration algorithm between the laser radar coordinate system and the geodetic coordinate system.
5. The method of claim 1, wherein said fusing the first sensor data and the second sensor data to obtain third sensor data comprises:
performing first association matching on each target in the first sensor data and each target in the second sensor data respectively;
performing second association matching on the objects which are not successfully matched in the first sensor data and the second sensor data, wherein the matching modes of the first association matching and the second association matching are different;
fusing the successfully matched targets in the first sensor data and the second sensor data to obtain a fused target;
and determining the third sensor data according to the fused target and the targets in the first sensor data and the second sensor data that were not successfully matched in either of the two matchings.
6. The method of claim 5, wherein the first associative matching of each target in the first sensor data with each target in the second sensor data comprises:
calculating distances between each target in the first sensor data and each target in the second sensor data in the same coordinate system respectively;
if the distance between a first target in the first sensor data and a second target in the second sensor data is smaller than or equal to a first threshold, determining that the first target and the second target are successfully matched;
and if the distance between the first target and the second target is greater than the first threshold, determining that the first target and the second target fail to be matched.
7. The method of claim 5, wherein the performing a second associative matching of the unmatched objects of the first sensor data and the second sensor data comprises:
determining a first historical track of each target which is not successfully matched for the first time in the first sensor data and a second historical track of each target which is not successfully matched for the first time in the second sensor data;
respectively determining an average Euclidean distance and a cosine distance between each first historical track and each second historical track;
if the sum of the mean Euclidean distance and the cosine distance between a first target historical track in each first historical track and a second target historical track in each second historical track is less than or equal to a second threshold value, matching the target corresponding to the first target historical track and the target corresponding to the second target historical track successfully;
if the sum of the mean Euclidean distance and the cosine distance between the first target historical track and the second target historical track is larger than a second threshold value, the target corresponding to the first target historical track and the target corresponding to the second target historical track fail to be matched.
8. The method according to any one of claims 1-7, wherein determining the predicted trajectory of each target and the traffic status information of each lane at the intersection corresponding to the target traffic signal in the third sensor data according to the trajectory information and the high-precision map comprises:
determining a predicted track of each target in the third sensor data according to the track information;
and determining the traffic state information of each lane of the intersection corresponding to the target traffic signal lamp according to the track information and the high-precision map.
9. An electronic device, comprising: a memory for storing a computer program and a processor; the processor is adapted to perform the method of any of claims 1-8 when the computer program is invoked.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a traffic signal control method according to any one of claims 1 to 8.
CN202211561343.6A 2022-12-07 2022-12-07 Traffic signal lamp control method and electronic equipment Active CN115985113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211561343.6A CN115985113B (en) 2022-12-07 2022-12-07 Traffic signal lamp control method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211561343.6A CN115985113B (en) 2022-12-07 2022-12-07 Traffic signal lamp control method and electronic equipment

Publications (2)

Publication Number Publication Date
CN115985113A true CN115985113A (en) 2023-04-18
CN115985113B CN115985113B (en) 2023-11-14

Family

ID=85963804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211561343.6A Active CN115985113B (en) 2022-12-07 2022-12-07 Traffic signal lamp control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115985113B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112747765A (en) * 2021-01-08 2021-05-04 重庆长安汽车股份有限公司 Path pushing method and system based on navigation and sensor fusion and storage medium
CN115019512A (en) * 2022-07-05 2022-09-06 北京动视元科技有限公司 Road event detection system based on radar video fusion
CN115273034A (en) * 2022-08-08 2022-11-01 江苏智行未来汽车研究院有限公司 Traffic target detection and tracking method based on vehicle-mounted multi-sensor fusion


Also Published As

Publication number Publication date
CN115985113B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN109087510B (en) Traffic monitoring method and device
EP4152204A1 (en) Lane line detection method, and related apparatus
CN109784254B (en) Vehicle violation event detection method and device and electronic equipment
CN110781949B (en) Asynchronous serial multi-sensor-based flight path data fusion method and storage medium
CN111784730B (en) Object tracking method and device, electronic equipment and storage medium
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN111582130A (en) Traffic behavior perception fusion system and method based on multi-source heterogeneous information
CN111695619A (en) Multi-sensor target fusion method and device, vehicle and storage medium
WO2021057324A1 (en) Data processing method and apparatus, chip system, and medium
CN115393681A (en) Target fusion method and device, electronic equipment and storage medium
CN112633120A (en) Intelligent roadside sensing system based on semi-supervised learning and model training method
CN113269811A (en) Data fusion method and device and electronic equipment
CN111275087A (en) Data processing method and device, electronic equipment and motor vehicle
US20220234588A1 (en) Data Recording for Advanced Driving Assistance System Testing and Validation
CN117130010A (en) Obstacle sensing method and system for unmanned vehicle and unmanned vehicle
CN115985113A (en) Traffic signal lamp control method and electronic equipment
CN112590808B (en) Multi-sensor fusion method and system and automatic driving vehicle
CN114842643B (en) Video vehicle detection model online updating method and device and radar fusion system
CN116434056A (en) Target identification method and system based on radar fusion and electronic equipment
CN115578716A (en) Vehicle-mounted data processing method, device, equipment and medium
CN115116034A (en) Method, device and system for detecting pedestrians at night
CN115662167B (en) Automatic driving map construction method, automatic driving method and related devices
WO2023221848A1 (en) Vehicle starting behavior prediction method and apparatus, storage medium, and program product
CN112784789B (en) Method, device, electronic equipment and medium for identifying traffic flow of road
CN117037496A (en) Traffic conflict point event monitoring method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant