CN114545434A - Road side visual angle speed measurement method and system, electronic equipment and storage medium


Info

Publication number
CN114545434A
Authority
CN
China
Prior art keywords
point cloud
target
information
data
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210039036.5A
Other languages
Chinese (zh)
Inventor
金立生
贺阳
霍震
王欢欢
金秋坤
张哲
郭柏苍
谢宪毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN202210039036.5A priority Critical patent/CN114545434A/en
Publication of CN114545434A publication Critical patent/CN114545434A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50Systems of measurement based on relative movement of target
    • G01S17/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0116Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/052Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The application provides a roadside viewing angle speed measurement method, system, electronic device and storage medium, wherein the method comprises the following steps: acquiring image data collected by a camera device and radar point cloud data collected by a radar sensor, the camera device and the radar sensor being time-synchronized; performing filtering and dimensionality reduction on the radar point cloud data to obtain processed point cloud data; performing target detection and tracking on the image data to obtain visual output information; fusing the processed point cloud data with the visual output information to obtain target space information; determining speed information according to the target space information and the visual output information; and performing data structuring on the speed information, the target space information and the visual output information to determine the output data. The scheme achieves high speed detection precision and accuracy.

Description

Road side visual angle speed measurement method and system, electronic equipment and storage medium
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to a method and a system for measuring speed of a road side visual angle, electronic equipment and a storage medium.
Background
The development of intelligent traffic infrastructure provides strong support for realizing connected automated driving functions. Constructing an accurate and stable roadside viewing angle speed measurement system is therefore of great significance for providing connected automated vehicles with accurate, multi-scale information about traffic participants. As sensor costs decrease, lidar can gradually meet the cost requirements of roadside perception systems, and a speed measurement system based on data fusion has become one of the key technologies that urgently needs to be addressed.
An intelligent traffic roadside perception system provides connected automated vehicles with structured data such as target type, ID, position, speed and acceleration. The speed and acceleration information describes the motion state of traffic participants and can provide spatio-temporal information and motion representations for functions of connected automated vehicles such as collision warning and road condition warning. Existing methods are mostly based on a single sensor, such as vision, lidar or millimeter-wave radar. Vision-based methods generally need to construct a mapping between the pixel space and the real physical space, so the speed and acceleration are ultimately estimated from pixel distances rather than obtained from true-value measurements. The sparse point clouds provided by lidar and millimeter-wave radar contain true spatial information, but the effective features of a target are limited, and under working conditions such as occlusion and congestion it is difficult to track the same target accurately across adjacent frames, so the detected speed jumps and the speed detection precision decreases.
Disclosure of Invention
An object of an embodiment of the present specification is to provide a method, a system, an electronic device, and a storage medium for roadside perspective speed measurement.
In order to solve the above technical problem, the embodiments of the present application are implemented in the following manner:
in a first aspect, the present application provides a method for measuring speed of a roadside viewing angle, including:
acquiring image data acquired by camera equipment and radar point cloud data acquired by a radar sensor; the camera equipment and the radar sensor are time-synchronized;
filtering and dimensionality reduction processing are carried out on the radar point cloud data to obtain processed point cloud data;
carrying out target detection and tracking processing on the image data to obtain visual output information;
fusing the processed point cloud data and the visual output information to obtain target space information;
determining speed information according to the target space information and the visual output information;
and carrying out data structuring on the speed information, the target space information and the visual output information, and determining output data.
In a second aspect, the present application provides a roadside viewing angle speed measurement system, which is characterized in that the system includes:
the acquisition module is used for acquiring image data acquired by the camera equipment and radar point cloud data acquired by the radar sensor; the camera equipment and the radar sensor are time-synchronized;
the radar module is used for filtering and reducing the dimension of the radar point cloud data to obtain processed point cloud data;
the vision module is used for carrying out target detection and tracking processing on the image data to obtain vision output information;
the fusion module is used for carrying out fusion processing on the processed point cloud data and the visual output information to obtain target space information;
the speed measuring module is used for determining speed information according to the target space information and the visual output information;
and the output module is used for carrying out data structuring on the speed information, the target space information and the visual output information and determining output data.
In a third aspect, the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the method for measuring speed of a roadside viewing angle according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a computer program is stored, where the program is executed by a processor to implement the roadside perspective speed measurement method according to the first aspect.
As can be seen from the technical solutions provided in the embodiments of the present specification, the scheme integrates the perception results of roadside sensors such as a camera and a lidar sensor to achieve accurate detection of the speed of traffic participants. Detecting and tracking targets in the images collected by the camera guarantees the accuracy of the perception result, while ranging based on the lidar sensor guarantees the accuracy of spatial information detection. Speed measurement based on fusing the image with the lidar point cloud and on target tracking allows the speed to be computed from real sensor measurements while detection and tracking exploit texture information, finally providing connected automated vehicles with stable and reliable multi-dimensional information about the traffic scene.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments described in the present specification, and for those skilled in the art, other drawings can be obtained according to the drawings without any creative effort.
Fig. 1 is a schematic flow chart of a roadside viewing angle speed measurement method provided in the present application;
fig. 2 is a schematic flow chart of filtering and dimension reduction processing on radar point cloud data according to the present application;
fig. 3 is a schematic flowchart of the process of performing target detection and tracking processing on image data according to the present application;
FIG. 4 is a schematic flow chart of ID management in FIG. 3;
fig. 5 is a schematic flow chart illustrating a process of fusing the processed point cloud data and the visual output information according to the present application;
FIG. 6 is a schematic flow chart of the three-dimensional detection of the target in FIG. 5;
FIG. 7 is a schematic flow chart illustrating the process of determining speed information provided herein;
FIG. 8 is a schematic flow chart of determining output data provided herein;
fig. 9 is a general technical route diagram of the roadside perspective speed measurement method provided by the present application;
fig. 10 is a schematic structural diagram of a roadside viewing angle speed measurement system provided in the present application;
FIG. 11 is a schematic flow chart illustrating data transmission between software modules provided in the present application;
fig. 12 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be apparent to those skilled in the art that various modifications and variations can be made in the specific embodiments described herein without departing from the scope or spirit of the application. Other embodiments will be apparent to the skilled person from the description of the present application. The specification and examples are exemplary only.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
In the present application, "parts" are in parts by mass unless otherwise specified.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Referring to fig. 1, a schematic flow chart of a method for measuring speed from a viewing angle at the roadside provided by the embodiment of the application is shown.
As shown in fig. 1, the method for measuring speed from a roadside perspective may include:
s110, acquiring image data acquired by camera equipment and radar point cloud data acquired by a radar sensor; the image pickup apparatus and the radar sensor are time-synchronized.
Specifically, the camera device may employ a camera, the radar sensor may employ a laser radar sensor or a laser radar for short, and data collected by the laser radar sensor may be referred to as laser radar point cloud data.
Sensors such as the camera and the lidar sensor are arranged at the roadside; to obtain a better monitoring viewing angle, the lidar sensor and the camera are deployed above the center of the road. After the camera and the lidar sensor are arranged, time synchronization and parameter calibration are carried out so that the lidar point cloud collected by the lidar sensor can be accurately projected into the imaging space of the image collected by the camera. The calibrated parameters are the camera intrinsic parameters C_param and the extrinsic parameters C_ext.
And S120, filtering and dimensionality reduction are carried out on the radar point cloud data to obtain processed point cloud data.
Wherein, S120 includes the following steps S1201 to S1204.
And S1201, performing elevation filtering on the radar point cloud data according to the road height limit and the radar sensor installation height to obtain an elevation-filtered point cloud set.
Specifically, the road height limit may also be referred to as a vehicle height limit or a target height, and different limits may be set according to actual requirements, for example, the limit of the main road of the city is 5 meters.
The lidar sensor mounting height likewise has different restrictions in different cities, on different roads and so on, and can be set according to actual requirements; for example, in a certain scene the lidar mounting height is 5 meters.
According to the height limit of the vehicle and the installation height of the laser radar in the scene, performing elevation filtering on the collected point cloud data of the laser radar to remove noise points in the sky, wherein an elevation filtering formula is shown as the following formula:
Pts_H = {Pts_i | H_min < z_i < H_max}, i = 1, 2, ..., num(Pts)
wherein Pts_i is the i-th point in the point cloud set Pts, z_i is its height, and H_min and H_max define the height range, which is determined by the target height and the lidar installation height in the scene. For example, if the lidar sensor in a scene is installed at a height of 5 meters and the road, an urban trunk road, has a height limit of 5 meters, then H_min = -5 m and H_max = 0 m. Pts_H is the elevation-filtered point cloud set, and num(Pts) is the number of points in the set Pts.
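For illustration only, the elevation filter above can be sketched in Python with NumPy; the (N, 3) array layout with columns x, y, z and the example thresholds are assumptions of this sketch and are not prescribed by the embodiment.

```python
import numpy as np

def elevation_filter(pts: np.ndarray, h_min: float = -5.0, h_max: float = 0.0) -> np.ndarray:
    """Keep points whose height z lies in (h_min, h_max).

    pts is assumed to be an (N, 3) array of lidar points in the sensor frame
    (columns x, y, z); with the lidar mounted 5 m above a road with a 5 m
    height limit, valid returns lie between -5 m and 0 m.
    """
    z = pts[:, 2]
    mask = (z > h_min) & (z < h_max)
    return pts[mask]
```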
And S1202, performing direct filtering on the point cloud set subjected to the elevation filtering according to the sensing range of the effective features of the radar sensor to obtain a direct filtering point cloud set.
Specifically, the sensing range of the effective characteristics of the laser radar sensor can be set according to actual conditions.
According to the sensing range of effective features provided by the laser radar, performing direct filtering on the elevation-filtered point cloud set output in the step S1201, and keeping the point cloud within the range with obvious features, wherein a direct filtering formula is shown as follows:
Pts_HR = {Pts_Hi | R_min < sqrt(x_i^2 + y_i^2) < R_max}, i = 1, 2, ..., num(Pts_H)
wherein x_i and y_i are the x- and y-axis coordinates of the i-th point Pts_Hi in the point set Pts_H, and R_min and R_max define the radius range, which is determined by the effective distance of the point cloud in the scene. For example, a certain 128-line lidar has a limit range of more than 200 meters, but the distance over which it provides features effective for speed measurement is 100 meters; meanwhile, when installed at a height of 5 meters it has a 15-meter blind zone, so R_min = 15 m and R_max = 100 m. Pts_HR is the direct-filtered point cloud set, and num(Pts_H) is the number of points in the set Pts_H.
And S1203, performing direct filtering on the point cloud set subjected to direct filtering according to the road interesting area to obtain a point cloud set in the interesting area.
Specifically, a road in the scene is set as a Region of Interest (ROI), the straight-through filtering is performed on the straight-through filtered point cloud set output in step S1202, and the point cloud in the road Region is retained, where the straight-through filtering formula is shown as follows:
Pts_ROI = {Pts_HRi | x_1 < x_i < x_2, y_1 < y_i < y_2}, i = 1, 2, ..., num(Pts_HR)
wherein Pts_HRi is the i-th point in the point set Pts_HR, x_i and y_i are its x- and y-axis coordinates, and x_1, x_2, y_1, y_2 are the ROI thresholds. For example, for a lidar installed at the center of a two-way four-lane road, x_1 (or y_1) is set to -7.5 m and x_2 (or y_2) to 7.5 m, the choice depending on the installation direction of the lidar; the threshold along the direction not constrained by the lanes is set to ±100 m, determined by the extent of the road in the scene. Pts_ROI is the point cloud set within the ROI, and num(Pts_HR) is the number of points in the set Pts_HR.
and S1204, performing dimensionality reduction on the point cloud set in the region of interest to obtain processed point cloud data.
Specifically, the point cloud set in the region of interest output in step S1203 may be subjected to rasterization processing to reduce the dimensionality, so as to obtain processed point cloud data.
Referring to fig. 2, a schematic flow diagram illustrating filtering and dimensionality reduction processing performed on radar point cloud data is shown, and as shown in fig. 2, laser radar point cloud data is subjected to point cloud elevation filtering, then point cloud effective range setting and ROI area setting are performed, and finally point cloud rasterization processing is performed to obtain processed point cloud, namely processed point cloud data.
And S130, carrying out target detection and tracking processing on the image data to obtain visual output information.
Specifically, the visual output information includes a detection result of target detection and a tracking result of tracking processing, where the tracking result includes an ID (i.e., a number) corresponding to the detection result.
Wherein, S130 includes the following steps S1301 to S1303.
S1301, preprocessing the image data to obtain preprocessed image data.
The preprocessing may include, for example, an averaging operation.
S1302, inputting the preprocessed image data into a pre-built deep learning model for target detection to obtain a detection result.
The pre-built deep learning model is used to perform target detection on an input image. A data set is collected and labeled in the scene and a deep learning network structure is trained on it to obtain weights; forward inference is then performed with the trained weights on the preprocessed image data acquired in real time to detect the targets in the image.
The detection result may include coordinates of a lower left point of the two-dimensional bounding box, coordinates of an upper right point of the two-dimensional bounding box, a target category, which may be referred to as a category for short, a confidence level, and the like. The object classes may include, for example, pedestrians, automobiles, trucks, and the like.
S1303, judging whether the preprocessed image data is an initial frame or not, and if so, generating an initial frame initialization ID corresponding to a detection result; and if the frame is a non-initial frame, generating a non-initial frame initialization ID corresponding to the detection result, and performing ID matching on the non-initial frame initialization ID and the previous frame ID to obtain a non-initial frame matched ID.
Specifically, the initialization ID, that is, the ID corresponding to the detection result, may be generated according to a rule. If the preprocessed image data is an initial frame, there is no previous frame and no matching can be performed, so the initialization ID generated according to the rule is the initial-frame initialization ID corresponding to the detection result. If the preprocessed image data is a non-initial frame, it can be matched against the IDs of the adjacent (previous) frame, the initialization IDs of the current frame are corrected accordingly, and the matched IDs of the non-initial frame are obtained.
Generating a non-initial frame initialization ID corresponding to the detection result and performing ID matching between the non-initial frame initialization ID and the previous frame ID to obtain the non-initial frame matched ID may include: ID initialization, IOU cost matrix calculation, Hungarian matching, ID matching and ID lifecycle management, specifically:
generating a non-initial frame initialization ID corresponding to the detection result according to the detection result;
determining an IOU cost matrix according to the non-initial frame initialization ID;
performing Hungarian matching on adjacent frame targets in the preprocessed image data according to the IOU cost matrix to obtain an index vector after the Hungarian matching;
according to the index vector after Hungarian matching, performing ID matching on all non-initial frame initialization IDs and the corresponding previous frame IDs respectively, reserving the IDs which are correctly matched with the previous frame in the non-initial frame initialization IDs, and obtaining tracking IDs of all current frames;
and determining the matched ID of the non-initial frame according to the tracking IDs of all the current frames.
Illustratively, all detection results are first subjected to ID initialization, and unique IDs are generated per category. For example, if there are 5 detectable categories and the scene contains 5 cars, the cars are assigned IDs 1-5; if it also contains 10 pedestrians, the pedestrians are assigned IDs 1-10.
And traversing all the same type targets of the current frame and the adjacent frames according to the IDs, calculating an IOU value and generating an IOU cost matrix. Specifically, the formula is shown as follows:
CMs = {CM_i}, i = 1, 2, ..., num(class)
wherein CMs is the cost matrix set formed by the per-class adjacent-frame cost matrices CM_i, class denotes the target classes detectable by the pre-built deep learning model, and num(class) is the number of detectable classes. For example, if the target classes detectable by the pre-built deep learning model are pedestrian, car, truck, fire truck and cyclist, the number of classes is 5.
The cost matrix CM_i is constructed according to the following rule:
CM_i = [cm_kj], k = 1, 2, ..., a, j = 1, 2, ..., b
wherein cm_kj is the intersection-over-union (IOU) of the k-th class-i detection box of the current frame and the j-th class-i detection box of the previous frame, and a and b are the numbers of class-i detection boxes in the current frame and the previous frame respectively. The IOU cm_kj is calculated as follows:
cm_kj = Area_inter / (Area_k + Area_j - Area_inter)
Area_inter = max(0, min(x_k2, x_j2) - max(x_k1, x_j1)) × max(0, min(y_k2, y_j2) - max(y_k1, y_j1))
Area_k = (x_k2 - x_k1) × (y_k2 - y_k1), Area_j = (x_j2 - x_j1) × (y_j2 - y_j1)
wherein x_k2, y_k2, x_k1, y_k1 are the pixel coordinates of the upper-right and lower-left points of the k-th class-i detection box of the current frame, x_j2, y_j2, x_j1, y_j1 are the pixel coordinates of the upper-right and lower-left points of the j-th class-i detection box of the previous frame, and max() and min() are the maximum and minimum functions.
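For illustration, the per-class IOU cost matrix can be computed as in the following sketch; detection boxes are assumed to be given as (x1, y1, x2, y2) pixel corners, and the function names are illustrative rather than taken from the embodiment.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_cost_matrix(cur_boxes, prev_boxes):
    """CM_i: rows index current-frame boxes of one class, columns previous-frame boxes."""
    cm = np.zeros((len(cur_boxes), len(prev_boxes)))
    for k, bk in enumerate(cur_boxes):
        for j, bj in enumerate(prev_boxes):
            cm[k, j] = iou(bk, bj)
    return cm
```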
And then, performing Hungarian matching on the adjacent frame targets according to the IOU cost matrix, wherein the Hungarian matching is specifically shown as the following formula:
Idxs = {Idx_i}, i = 1, 2, ..., num(class)
Idx_i = lsa(CM_i)
wherein Idx_i is the index vector after Hungarian matching, lsa() is the Hungarian matching function, and Idxs is the set formed by the index vectors of all targets matched per class;
then, matching the IDs of the adjacent frames, and keeping the ID of the current frame that is correctly matched with the previous frame, as shown in the following formula:
IDs = {ID_i}, i = 1, 2, ..., num(class)
ID_i = {ID_ik}, k ∈ Idx_i
ID_ik = ID_ij if cm_kj ≥ Th_IOU, where j is the previous-frame index matched to k in Idx_i; otherwise ID_ik = φ
wherein IDs is the set formed by the tracking-result IDs ID_i of all target classes, ID_ik is the tracking ID of the k-th class-i target of the current frame, taken as the most relevant ID in Idx_i, ID_ij is the tracking ID of the j-th class-i target of the previous frame, and Th_IOU is the threshold for judging whether a match is correct, which is determined by analyzing the complexity of the targets in the scene. For example, if the monitored scene is an urban trunk road with congestion at morning and evening peaks, Th_IOU is set to 0.95, which ensures accurate tracking and reduces the number of ID switches.
And finally, deleting the IDs which exist in the previous frame but do not exist in the current frame, regenerating the IDs of the targets which exist in the current frame but do not exist in the previous frame and outputting the final result, namely the ID after the non-initial frame is matched. The ID management policy is specifically shown as follows:
Index = where(ID_ik == φ)
(for each index k in Index, a new unique class-i ID is generated for ID_ik)
wherein Index is the set of indices k for which no matched ID was assigned to ID_ik, where() is an index search function that returns the index vector satisfying the condition, and n is the number of class-i targets detected in the current frame.
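The Hungarian matching and ID lifecycle management described above may be sketched as follows, using scipy's linear_sum_assignment as the matching function lsa(); the per-class ID counter and the data layout are assumptions made for this sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

TH_IOU = 0.95  # example threshold for an urban trunk road scene

def match_ids(cm, prev_ids, next_id, th_iou=TH_IOU):
    """Propagate previous-frame IDs of one class given its IoU cost matrix.

    cm has one row per current-frame box and one column per previous-frame box.
    Unmatched previous-frame IDs are implicitly dropped; unmatched current-frame
    boxes receive newly generated IDs from the per-class counter next_id.
    """
    n_cur = cm.shape[0]
    cur_ids = [None] * n_cur
    if cm.size:
        rows, cols = linear_sum_assignment(-cm)   # Hungarian matching, maximize total IoU
        for k, j in zip(rows, cols):
            if cm[k, j] >= th_iou:                # keep only correct matches
                cur_ids[k] = prev_ids[j]
    for k in range(n_cur):                        # ID lifecycle: new targets get new IDs
        if cur_ids[k] is None:
            cur_ids[k] = next_id
            next_id += 1
    return cur_ids, next_id
```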
Referring to fig. 3, a flowchart illustrating the target detection and tracking process performed on the image data in step S103 is shown. As shown in fig. 3, firstly, performing an average value reduction preprocessing on an image, then performing deep learning detection on the preprocessed image to obtain a detection result, then judging whether the image is an initial frame, if so, generating an initialization ID (initial frame initialization ID) corresponding to the detection result, and outputting a bounding box (belonging to one of the detection results) + ID; if not, generating an initialization ID (non-initial frame initialization ID) corresponding to the detection result, then sequentially performing IOU calculation cost matrix, Hungarian matching, ID matching and ID life cycle management, and outputting a bounding box (belonging to one of the detection results) + the ID (non-initial frame matched ID).
Referring to fig. 4, a schematic workflow diagram of the ID management of fig. 3 is shown. As shown in fig. 4, the ID of the previous frame and the ID of the current frame are input, then the unmatched IDs are sorted, the unmatched ID of the previous frame is discarded, and the unmatched ID of the current frame is retained, so that the ID of the current frame is obtained.
S104, performing fusion processing on the processed point cloud data and the visual output information to obtain target space information, wherein the fusion processing may include:
s1041, projecting the processed point cloud data to obtain a point cloud set projected to a pixel space.
Specifically, the input processed point cloud data is projected to obtain the point cloud set Pts_uv of the points projected into the pixel space, as shown in the following formula:
Pts_uv = C_param × C_ext × Pts_ROI
s1042, filtering the point cloud set projected to the pixel space to obtain a filtered effective point cloud set.
Specifically, the invalid point removing is performed on the point cloud in the point cloud set projected to the pixel space, and the point cloud corresponding to the image pixel space is retained, which is specifically shown as the following formula:
Pts'_uv = {Pts_uvi | width_min ≤ u_i ≤ width_max, height_min ≤ v_i ≤ height_max, d_i ≥ 0}, i = 1, 2, ..., num(Pts_uv)
wherein Pts_uvi is the i-th point in Pts_uv, width_min and width_max are the minimum and maximum of the image width, height_min and height_max are the minimum and maximum of the image height, num(Pts_uv) is the number of points in Pts_uv, d_i is the depth of the projected point, and Pts'_uv is the filtered effective point cloud set.
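A minimal sketch of the projection and effective-point filtering is given below, assuming C_param is a 3x3 intrinsic matrix, C_ext a 3x4 extrinsic matrix from the lidar frame to the camera frame, and the point cloud an (N, 3) array; the perspective division by the depth d_i is the standard pinhole step and is added here only so the sketch yields usable pixel coordinates.

```python
import numpy as np

def project_points(pts_roi, c_param, c_ext, width, height):
    """Project lidar points into the image and keep those inside the frame.

    Returns (u, v, depth) triples for the valid points together with the mask,
    which can be reused to index the original 3-D points.
    """
    pts_h = np.hstack([pts_roi, np.ones((pts_roi.shape[0], 1))])  # homogeneous (N, 4)
    cam = c_ext @ pts_h.T                                         # (3, N) camera frame
    depth = cam[2]                                                # d_i
    uvw = c_param @ cam                                           # (3, N) pixel frame
    safe_depth = np.where(depth != 0, depth, 1.0)
    u = uvw[0] / safe_depth
    v = uvw[1] / safe_depth
    mask = (depth >= 0) & (u >= 0) & (u <= width) & (v >= 0) & (v <= height)
    return np.stack([u[mask], v[mask], depth[mask]], axis=1), mask
```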
And S1043, determining a point cloud set in the detection frame according to the filtered effective point cloud set.
Specifically, a point cloud set in the detection frame is found according to the detection frame and the filtered effective point cloud set, and the following formula is specifically shown:
Inner_uv = {Inner_uvi}
Inner_uvi = {Inner_uvik}
Inner_uvik = {Pts'_uvm | x1_ik ≤ u_m ≤ x2_ik, y1_ik ≤ v_m ≤ y2_ik}, m = 1, 2, ..., num(Pts'_uv)
wherein Pts'_uvm is the m-th point in Pts'_uv, num(Pts'_uv) is the number of points in Pts'_uv, u_m and v_m are its pixel coordinate values, x1_ik, y1_ik, x2_ik, y2_ik are the pixel coordinates of the two-dimensional detection box, Inner_uvik is the interior point set of the k-th class-i target, Inner_uvi is the set of class-i target interior point sets, and Inner_uv is the collection of the interior point sets within the detection boxes of all classes.
And S1044, carrying out density filtering on the point cloud set in the detection frame to obtain a density-corrected inner point set.
Specifically, density filtering is performed on the point cloud set in the detection frame to remove noise point clouds which do not belong to the target, and the following formula is specifically shown:
Inner'_uv = {Inner_uvm | density(Inner_uvm) ≥ Th_density}, m = 1, 2, ..., num(Inner_uv)
wherein density(Inner_uvm) is the density of the m-th point in Inner_uv, and Th_density is the density threshold, determined by the distribution of the point cloud in the scene and approximated in the calculation by the number of points within a fixed radius; under urban trunk road conditions Th_density = 20 is set. num(Inner_uv) is the number of points in Inner_uv, and Inner'_uv is the density-corrected interior point set.
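For illustration, the interior-point search and density filtering can be sketched as follows; the fixed-radius neighbor count approximates the density as described above, with scipy's cKDTree used as one convenient neighbor counter, and the radius value is an assumption of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def points_in_box(uv, box):
    """Indices of projected points (u, v, d) lying inside a 2-D detection box."""
    x1, y1, x2, y2 = box
    u, v = uv[:, 0], uv[:, 1]
    return np.where((u >= x1) & (u <= x2) & (v >= y1) & (v <= y2))[0]

def density_filter(inner_xyz, radius=0.5, th_density=20):
    """Drop interior points with fewer than th_density neighbors within `radius`."""
    tree = cKDTree(inner_xyz)
    counts = np.array([len(tree.query_ball_point(p, radius)) for p in inner_xyz])
    return inner_xyz[counts >= th_density]
```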
And S1045, carrying out three-dimensional detection on the target according to the density corrected inner point set to obtain target space information.
The target space information comprises a central point three-dimensional coordinate, three main direction vectors and a three-dimensional boundary.
The three-dimensional detection of the target comprises center point estimation, course estimation and correction and three-dimensional surrounding frame estimation.
Specifically, step S1045 may include:
weighting the inner points belonging to the same target in the density-corrected inner point set to obtain the three-dimensional coordinates of the center point of each target, which is specifically shown as the following formula:
x_cik = (1 / num(Inner'_uvik)) × Σ_m x_ikm
y_cik = (1 / num(Inner'_uvik)) × Σ_m y_ikm
z_cik = (1 / num(Inner'_uvik)) × Σ_m z_ikm
wherein x_ikm, y_ikm and z_ikm are the x-, y- and z-axis coordinates of the m-th interior point of the k-th class-i target, num(Inner'_uvik) is the number of interior points of that target, and (x_cik, y_cik, z_cik) are the coordinates of the center point of the k-th class-i target.
And (3) carrying out principal component analysis on the interior points belonging to the same target in the density-corrected interior point set to obtain three principal direction vectors of each target, and carrying out course estimation, wherein the following formula is specifically shown:
[v_1ik, v_2ik, v_3ik] = PCA(Inner'_uvik)
wherein PCA() is the principal component analysis function and [v_1ik, v_2ik, v_3ik] are the three principal direction vectors of the k-th class-i target;
and correcting the estimated course by an angle, wherein the angle is limited within a range of 90 degrees, and the angle is specifically represented by the following formula:
[v'1ik,v'2ik,v'3ik]=amend([v1ik,v2ik,v3ik])
wherein, the angle correction function of the amend () angle, the corrected angle is within the range of plus or minus 90 degrees, and the specific correction is as follows: if vector v1ikIs greater than 90 deg. and less than 180 deg.,
Figure BDA0003469453400000114
[v'1ik,v'2ik,v'3ik]three main directions for corrected class i kth targetAnd (5) vector quantity.
Respectively projecting all the interior points in the density-corrected interior point set to three main directions, respectively searching the maximum value and the minimum value of coordinate projection in the three main directions, and defining the three-dimensional boundary of each target according to the maximum value and the minimum value, wherein the three-dimensional boundary is specifically represented by the following formula:
length_ik = max(P_v1ik) - min(P_v1ik)
width_ik = max(P_v2ik) - min(P_v2ik)
height_ik = max(P_v3ik) - min(P_v3ik)
wherein P_v1ik is the projection of the interior points of the k-th class-i target onto its 1st principal direction, P_v2ik is the projection onto its 2nd principal direction, and P_v3ik is the projection onto its 3rd principal direction; length_ik, width_ik and height_ik are the length, width and height of the k-th class-i target.
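The three-dimensional detection step can be sketched as follows: the center is the mean of the interior points, the principal directions are obtained from an eigen-decomposition of the covariance of the centered points (standing in for the PCA() function above), the heading is folded into plus or minus 90 degrees, and the box size is the projection range along each principal direction.

```python
import numpy as np

def detect_3d(inner_xyz):
    """Estimate center, principal directions, heading and 3-D box size of one target."""
    center = inner_xyz.mean(axis=0)                   # (x_c, y_c, z_c)
    centered = inner_xyz - center
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    dirs = eigvecs[:, np.argsort(eigvals)[::-1]].T    # rows v1, v2, v3, largest variance first

    heading = np.degrees(np.arctan2(dirs[0, 1], dirs[0, 0]))
    if heading > 90.0:                                # fold heading into (-90, 90]
        heading -= 180.0
    elif heading <= -90.0:
        heading += 180.0

    proj = centered @ dirs.T                          # projections onto v1, v2, v3
    size = proj.max(axis=0) - proj.min(axis=0)        # length, width, height
    return center, dirs, heading, size
```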
Referring to fig. 5, which shows a schematic flow chart of the fusion processing performed on the processed point cloud data and the visual output information in step S104, as shown in fig. 5, the input data is the processed point cloud data and the visual output information (including the bounding box + ID), the point cloud projection is performed on the processed point cloud data, then invalid points are removed by filtering, then, the internal point search is performed, a point cloud set in the detection box is found, the density filtering is performed on the point cloud set in the detection box, the noise point cloud not belonging to the target is removed, the same target internal point is obtained, and finally, the target three-dimensional detection is performed, so that the target space information is obtained.
Referring to fig. 6, a schematic flow chart of the three-dimensional detection of the target in fig. 5 is shown. As shown in fig. 6, the interior points are input, the outliers in the interior points are removed, and then the center point estimation, the main direction estimation, the heading angle correction, and the 3D bounding box estimation are sequentially performed to obtain the three-dimensional coordinates of the center point, the main direction, the heading angle, and the 3D bounding box (i.e., the three-dimensional boundary) in sequence.
S105, determining the speed information according to the target space information and the visual output information, which may include:
determining the time difference of adjacent frames according to the visual output information;
and determining speed information according to the target space information and the time difference of the adjacent frames, wherein the speed information comprises speed and acceleration.
Specifically, the formula is shown as follows:
v_ik^t = (P_cik^t - P_cik^(t-1)) / Δt
a_ik^t = (v_ik^t - v_ik^(t-1)) / Δt
wherein P_cik^t = (x_cik^t, y_cik^t, z_cik^t) are the coordinates of the center point of the k-th class-i target at time t, P_cik^(t-1) are the coordinates of its center point at time t-1, Δt is the time difference between adjacent data frames, v_ik^t is the velocity of the k-th class-i target at time t, v_ik^(t-1) is its velocity at time t-1, and a_ik^t is the acceleration of the k-th class-i target at time t.
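A sketch of the finite-difference speed estimation is given below; keeping a small per-target history of (time, center, velocity) keyed by class and ID is an implementation choice of this sketch, not part of the embodiment.

```python
import numpy as np

def update_motion(history, key, t, center):
    """Finite-difference velocity and acceleration from consecutive center points.

    history maps (class, ID) -> (t_prev, center_prev, velocity_prev).
    Returns (velocity, acceleration) as 3-D vectors, or None when there is not
    yet enough history for the target.
    """
    velocity = None
    acceleration = None
    center = np.asarray(center, dtype=float)
    prev = history.get(key)
    if prev is not None:
        t_prev, center_prev, v_prev = prev
        dt = t - t_prev
        if dt > 0:
            velocity = (center - center_prev) / dt
            if v_prev is not None:
                acceleration = (velocity - v_prev) / dt
    history[key] = (t, center, velocity)
    return velocity, acceleration
```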
Referring to fig. 7, a flow chart for determining speed information is shown. As shown in fig. 7, target space information and bounding box, ID are input, adjacent frame position information is determined, a velocity is calculated from the adjacent frame position information, and then an acceleration is calculated from the adjacent frame velocity, and the velocity and the acceleration are output.
And S106, carrying out data structuring on the speed information, the target space information and the visual output information, and determining output data.
Referring to fig. 8, a schematic flow chart for determining output data is shown. As shown in fig. 8, target space information, velocity, acceleration, bounding box, and ID are input, a data dictionary is generated, json is generated, and finally, structured data is output: category, confidence, internal point cloud, center point coordinates, 3D bounding box size, heading, ID, speed, acceleration.
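For illustration, the data structuring step amounts to packing the per-target fields into a dictionary and serializing it to JSON; the key names below mirror the structured output listed above, but their exact spellings are assumptions of this sketch.

```python
import json

def structure_output(category, confidence, inner_points, center, box_size,
                     heading, track_id, velocity, acceleration):
    """Build the structured record for one target and serialize it to JSON."""
    record = {
        "category": category,
        "confidence": float(confidence),
        "inner_point_cloud": [list(map(float, p)) for p in inner_points],
        "center": list(map(float, center)),
        "bbox_3d_size": list(map(float, box_size)),   # length, width, height
        "heading": float(heading),
        "id": int(track_id),
        "speed": list(map(float, velocity)),
        "acceleration": list(map(float, acceleration)),
    }
    return json.dumps(record)
```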
Referring to fig. 9, a general technical route diagram of the roadside perspective speed measurement method provided by the present application is shown. As shown in fig. 9, three adjacent data frames are input (data frame 1, data frame 2 and data frame 3) and each is initialized to obtain the corresponding detection result (detection result 1, detection result 2 and detection result 3). A fusion operation is then performed for each of the three detection results to obtain the fusion result (target space information), and the detection result and target space information are output, namely the category, number, confidence, internal point cloud, center point coordinates, 3D bounding box size and heading of the corresponding target. Next, the number in the detection result is tracked to obtain the ID, the target space information and the detection result are processed to obtain the speed and acceleration, and the speed information, the target space information and the visual output information (comprising the detection result and the ID) are data-structured to construct the output data, which is output as: category, confidence, internal point cloud, center point coordinates, 3D bounding box size, heading, ID, speed, acceleration.
According to the embodiment of the application, sensing results of the sensors such as the camera and the laser radar sensor arranged on the road side are integrated, and accurate detection of the speed of the traffic participants is achieved. The target is detected and tracked through the image acquired by the camera, the accuracy of a sensing result can be guaranteed, and the accuracy of spatial information detection can be guaranteed by measuring the distance based on the laser radar sensor. The speed measurement is carried out based on the point cloud data fusion of the image and the laser radar and the target tracking, the speed measurement can be carried out by using the real measurement value of the sensor, the detection and tracking are carried out by using the texture information, and finally, stable and reliable traffic scene multi-dimensional information is provided for the networked automatic driving vehicle.
According to the embodiment of the application, heterogeneous data fusion at the data layer combines the advantage of image texture information in target detection with that of point cloud spatial information in position measurement, and deployment on roadside equipment exploits the advantages of the roadside viewing angle. This avoids the situations in which a single sensor makes target detection and tracking inaccurate or forces the position to be estimated, and ensures that the speed measurement system provides reliable, near-true-value motion information under complex working conditions such as congestion.
According to the embodiment of the application, the characteristics of high accuracy and high recall rate of a deep learning algorithm are inherited, and most targets in a scene can be detected.
According to the embodiment of the application, the target space information is obtained through measurement and the speed is measured, so that the error of speed measurement based on estimation is reduced, and the measurement precision of the speed measurement system is improved.
According to the method and the device, the same target point cloud is corrected, the accuracy of speed measurement is further improved, and the influence of noise points on the measurement result is reduced.
According to the embodiment of the application, the method can directly provide perception information for the internet automatic driving vehicle by constructing the structured data for outputting.
Referring to fig. 10, a schematic structural diagram of a roadside perspective speed measurement system according to an embodiment of the present application is shown.
As shown in fig. 10, the roadside perspective speed measurement system 1000 may include: the system comprises an acquisition module 1010, a radar module 1020, a vision module 1030, a fusion module 1040, a speed measurement module 1050 and an output module 1060, wherein software modules are deployed in the roadside perception hardware module, and the software modules communicate with one another through software to achieve data transmission.
Referring to fig. 11, a flow diagram of data transmission between various software modules is shown. As shown in fig. 11, an obtaining module 1010, configured to obtain image data collected by an image capturing apparatus and radar point cloud data collected by a radar sensor; the camera equipment and the radar sensor are time-synchronized;
the radar module 1020 is configured to perform filtering and dimension reduction processing on the radar point cloud data output by the obtaining module 1010 to obtain processed point cloud data;
the vision module 1030 is configured to perform target detection and tracking processing on the image data output by the obtaining module 1010 to obtain vision output information; the visual output information includes a bounding box and an ID;
the fusion module 1040 is configured to perform fusion processing on the processed point cloud data output by the radar module 1020 and the visual output information output by the visual module 1030 to obtain target space information;
the speed measuring module 1050 is configured to determine speed information according to the target space information output by the fusion module 1040 and the visual output information output by the visual module 1030;
the output module 1060 is configured to perform data structuring on the speed information output by the speed measurement module 1050, the target space information output by the fusion module 1040, and the visual output information output by the visual module 1030, and determine output data.
Optionally, the radar module 1020 is further configured to:
according to the road height limit and the installation height of the radar sensor, performing elevation filtering on the radar point cloud data to obtain an elevation-filtered point cloud set;
according to the sensing range of the effective features of the radar sensor, performing straight-through filtering on the point cloud set after elevation filtering to obtain a straight-through filtered point cloud set;
according to the road interesting area, carrying out direct filtering on the point cloud set subjected to direct filtering to obtain a point cloud set in the interesting area;
and performing dimensionality reduction on the point cloud set in the region of interest to obtain processed point cloud data.
Optionally, the vision module 1030 is further configured to:
preprocessing the image data to obtain preprocessed image data;
inputting the preprocessed image data into a pre-built deep learning model for target detection to obtain a detection result;
judging whether the preprocessed image data is an initial frame or not, and if so, generating an initial frame initialization ID corresponding to a detection result; and if the frame is a non-initial frame, generating a non-initial frame initialization ID corresponding to the detection result, and performing ID matching on the non-initial frame initialization ID and the previous frame ID to obtain a non-initial frame matched ID.
Optionally, the vision module 1030 is further configured to:
generating a non-initial frame initialization ID corresponding to the detection result according to the detection result;
determining an IOU cost matrix according to the non-initial frame initialization ID;
performing Hungarian matching on adjacent frame targets in the preprocessed image data according to the IOU cost matrix to obtain index vectors after Hungarian matching;
according to the index vector after Hungarian matching, performing ID matching on all non-initial frame initialization IDs and the corresponding previous frame IDs respectively, reserving the IDs which are correctly matched with the previous frame in the non-initial frame initialization IDs, and obtaining tracking IDs of all current frames;
and determining the matched ID of the non-initial frame according to the tracking IDs of all the current frames.
Optionally, the fusion module 1040 is further configured to:
projecting the processed point cloud data to obtain a point cloud set projected to a pixel space;
filtering the point cloud set projected to the pixel space to obtain a filtered effective point cloud set;
determining a point cloud set in the detection frame according to the filtered effective point cloud set;
carrying out density filtering on the point cloud set in the detection frame to obtain a density-corrected inner point set;
and carrying out three-dimensional detection on the target according to the density corrected inner point set to obtain target space information.
Optionally, the target space information includes a central point three-dimensional coordinate, three principal direction vectors, and a three-dimensional boundary; the fusion module 1040 is further configured to:
carrying out three-dimensional detection on the target according to the density corrected inner point set to obtain target space information, wherein the three-dimensional detection comprises the following steps:
weighting the inner points belonging to the same target in the density-corrected inner point set to obtain the three-dimensional coordinates of the center point of each target;
performing principal component analysis on the interior points belonging to the same target in the density-corrected interior point set to obtain three principal direction vectors of each target;
and respectively projecting all the interior points in the density-corrected interior point set to three main directions, respectively searching the maximum value and the minimum value of the coordinate projection in the three main directions, and limiting the three-dimensional boundary of each target according to the maximum value and the minimum value.
Optionally, the speed measuring module 1050 is further configured to:
determining the time difference of adjacent frames according to the visual output information;
and determining the speed information according to the target space information and the time difference of the adjacent frames.
The roadside viewing angle speed measurement system provided by this embodiment can execute the embodiments of the above method, and the implementation principle and technical effect thereof are similar, and are not described herein again.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 12, a schematic structural diagram of an electronic device 300 suitable for implementing the embodiments of the present application is shown.
As shown in fig. 12, the electronic apparatus 300 includes a Central Processing Unit (CPU)301 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)302 or a program loaded from a storage section 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the apparatus 300 are also stored. The CPU 301, ROM 302, and RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
The following components are connected to the I/O interface 305: an input portion 306 including a keyboard, a mouse, and the like; an output section 307 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 310 as necessary, so that a computer program read out therefrom is mounted into the storage section 308 as necessary.
In particular, the process described above with reference to fig. 1 may be implemented as a computer software program, according to an embodiment of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the above-described roadside perspective speed measurement method. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 309, and/or installed from the removable medium 311.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor. The names of these units or modules do not in some cases constitute a limitation of the unit or module itself.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a mobile phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
As another aspect, the present application also provides a storage medium, which may be the storage medium included in the system of the foregoing embodiment, or a storage medium that exists separately and is not assembled into the device. The storage medium stores one or more programs, which are executed by one or more processors to perform the roadside perspective speed measurement method described in the present application.
Storage media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal or a carrier wave.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in the present specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the corresponding description of the method embodiment.

Claims (10)

1. A roadside viewing angle speed measurement method, characterized by comprising the following steps:
acquiring image data acquired by a camera device and radar point cloud data acquired by a radar sensor; the camera device and the radar sensor are time-synchronized;
performing filtering and dimensionality reduction processing on the radar point cloud data to obtain processed point cloud data;
carrying out target detection and tracking processing on the image data to obtain visual output information;
fusing the processed point cloud data and the visual output information to obtain target space information;
determining speed information according to the target space information and the visual output information;
and performing data structuring on the speed information, the target space information, and the visual output information to determine output data.
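As a non-limiting illustration of the final data structuring step of claim 1, the sketch below shows one way the fused per-target results could be packed into a structured output record in Python; the record fields and function names are editorial assumptions and are not defined in the claims.

    from dataclasses import dataclass, asdict
    from typing import List, Tuple

    # Hypothetical structured output record for the last step of claim 1;
    # every field name here is an illustrative assumption.
    @dataclass
    class TargetRecord:
        track_id: int                              # tracking ID from the vision branch
        class_name: str                            # detected category, e.g. "car"
        center_xyz: Tuple[float, float, float]     # fused 3D center point (m)
        bbox_2d: Tuple[int, int, int, int]         # image-space detection box
        speed_mps: float                           # estimated speed (m/s)

    def structure_output(records: List[TargetRecord]) -> List[dict]:
        # Flatten each fused target into a plain dictionary, e.g. for JSON output.
        return [asdict(r) for r in records]

    # Example with one dummy target:
    out = structure_output([TargetRecord(1, "car", (12.3, -1.8, 0.9),
                                         (410, 220, 530, 310), 13.9)])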
2. The method of claim 1, wherein the filtering and dimensionality reduction of the radar point cloud data to obtain processed point cloud data comprises:
according to the road height limit and the installation height of the radar sensor, performing elevation filtering on the radar point cloud data to obtain an elevation-filtered point cloud set;
performing pass-through filtering on the point cloud set subjected to the elevation filtering according to the sensing range of the effective features of the radar sensor to obtain a pass-through filtered point cloud set;
according to the road region of interest, performing pass-through filtering on the pass-through filtered point cloud set to obtain a point cloud set in the region of interest;
and performing dimensionality reduction on the point cloud set in the region of interest to obtain the processed point cloud data.
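As a non-limiting illustration of the filtering chain in claim 2 (elevation filter, pass-through filter over the effective sensing range, pass-through filter over the road region of interest, then dimensionality reduction), a minimal NumPy sketch follows; the thresholds, axis conventions, and the choice of dropping the height coordinate as the dimensionality reduction are editorial assumptions.

    import numpy as np

    def filter_and_reduce(points,                 # (N, 3) x, y, z in the radar/lidar frame
                          sensor_height=6.0,      # assumed mounting height above road (m)
                          road_height_limit=4.5,  # assumed road height limit (m)
                          sensing_range=((0.0, 80.0), (-20.0, 20.0)),  # assumed x/y range
                          roi=((5.0, 60.0), (-7.0, 7.0))):             # assumed road ROI
        # Elevation filter: keep returns between the road surface and the height limit.
        z = points[:, 2]
        pts = points[(z > -sensor_height) & (z < road_height_limit - sensor_height)]

        # Pass-through filter over the sensor's effective sensing range.
        (xmin, xmax), (ymin, ymax) = sensing_range
        pts = pts[(pts[:, 0] > xmin) & (pts[:, 0] < xmax) &
                  (pts[:, 1] > ymin) & (pts[:, 1] < ymax)]

        # Second pass-through filter restricted to the road region of interest.
        (xmin, xmax), (ymin, ymax) = roi
        pts = pts[(pts[:, 0] > xmin) & (pts[:, 0] < xmax) &
                  (pts[:, 1] > ymin) & (pts[:, 1] < ymax)]

        # Dimensionality reduction, here illustrated as dropping the height coordinate.
        return pts[:, :2]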
3. The method of claim 1, wherein the performing target detection and tracking on the image data to obtain visual output information comprises:
preprocessing the image data to obtain preprocessed image data;
inputting the preprocessed image data into a pre-built deep learning model for target detection to obtain a detection result;
determining whether the preprocessed image data is an initial frame; if so, generating an initial frame initialization ID corresponding to the detection result; and if the frame is a non-initial frame, generating a non-initial frame initialization ID corresponding to the detection result, and performing ID matching on the non-initial frame initialization ID and a previous frame ID to obtain a non-initial frame matched ID.
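As a non-limiting sketch of the per-frame control flow in claim 3 (run the detector, then either initialise IDs on the initial frame or hand the provisional IDs to the matching step of claim 4), the Python fragment below is an editorial illustration; the detection format and the ID bookkeeping are assumptions, and the matcher is passed in as a callable.

    from itertools import count

    _next_id = count(1)   # simple monotonic ID generator (illustrative assumption)

    def track_frame(frame_index, detections, prev_tracks, match_ids):
        # `detections`: list of detector outputs for the preprocessed frame, e.g.
        # (x1, y1, x2, y2, class_name, score) tuples from a deep-learning model.
        if frame_index == 0 or not prev_tracks:
            # Initial frame: every detection receives a fresh initialization ID.
            return {next(_next_id): det for det in detections}
        # Non-initial frame: assign provisional initialization IDs, then match them
        # against the previous frame's IDs (see the sketch following claim 4).
        provisional = {next(_next_id): det for det in detections}
        return match_ids(provisional, prev_tracks)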
4. The method of claim 3, wherein the generating a non-initial frame initialization ID corresponding to the detection result, and performing ID matching on the non-initial frame initialization ID and a previous frame ID to obtain a non-initial frame matched ID comprises:
generating the non-initial frame initialization ID corresponding to the detection result according to the detection result;
determining an IOU cost matrix according to the non-initial frame initialization ID;
performing Hungarian matching on adjacent frame targets in the preprocessed image data according to the IOU cost matrix to obtain index vectors after Hungarian matching;
according to the index vectors after Hungarian matching, performing ID matching between each non-initial frame initialization ID and the corresponding previous frame ID, and retaining those non-initial frame initialization IDs that are correctly matched with the previous frame to obtain the tracking IDs of the current frame;
and determining the matched ID of the non-initial frame according to the tracking IDs of the current frame.
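As a non-limiting illustration of the IOU cost matrix and Hungarian matching of claim 4, the sketch below uses scipy.optimize.linear_sum_assignment; the box format and the IOU acceptance threshold are editorial assumptions.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def hungarian_id_match(curr_boxes, prev_boxes, prev_ids, iou_gate=0.3):
        # Build the IOU cost matrix (cost = 1 - IOU) between adjacent-frame targets.
        if not curr_boxes or not prev_boxes:
            return {}
        cost = np.array([[1.0 - iou(c, p) for p in prev_boxes] for c in curr_boxes])
        rows, cols = linear_sum_assignment(cost)       # Hungarian matching
        matched = {}
        for r, c in zip(rows, cols):
            if 1.0 - cost[r, c] >= iou_gate:           # keep only well-overlapping pairs
                matched[r] = prev_ids[c]               # current detection inherits prev ID
        return matched   # detections not in `matched` keep their new initialization IDs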
5. The method according to any one of claims 1 to 4, wherein the fusing the processed point cloud data and the visual output information to obtain target space information comprises:
projecting the processed point cloud data to obtain a point cloud set projected to a pixel space;
filtering the point cloud set projected to the pixel space to obtain a filtered effective point cloud set;
determining a point cloud set in a detection frame according to the filtered effective point cloud set;
performing density filtering on the point cloud set in the detection frame to obtain a density-corrected inner point set;
and carrying out three-dimensional detection on the target according to the density-corrected inner point set to obtain the target space information.
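As a non-limiting sketch of the fusion step in claim 5, the fragment below projects the processed point cloud into the image with assumed camera intrinsics and extrinsics, keeps the points that fall inside a 2D detection box, and applies a simple depth-median consistency check in place of the claimed density filtering; all calibration inputs, thresholds, and the exact filtering rule are editorial assumptions.

    import numpy as np

    def fuse_points_with_box(points_lidar, box_2d, K, T_cam_lidar, depth_gate=2.0):
        # points_lidar: (N, 3) points in the lidar frame; K: 3x3 camera intrinsics;
        # T_cam_lidar: 4x4 extrinsic transform from the lidar frame to the camera frame.
        pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
        pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
        uv = (K @ pts_cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]                   # pixel coordinates of each point

        # Keep points whose projection falls inside the 2D detection box.
        x1, y1, x2, y2 = box_2d
        in_box = (uv[:, 0] > x1) & (uv[:, 0] < x2) & (uv[:, 1] > y1) & (uv[:, 1] < y2)
        candidates = pts_cam[in_box]
        if len(candidates) == 0:
            return candidates

        # Stand-in for density filtering: discard points whose depth deviates strongly
        # from the median depth inside the box (e.g. road or background returns).
        med = np.median(candidates[:, 2])
        return candidates[np.abs(candidates[:, 2] - med) < depth_gate]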
6. The method of claim 5, wherein the target space information comprises a center point three-dimensional coordinate, three principal direction vectors, and a three-dimensional boundary;
the three-dimensional detection of the target according to the density-corrected inner point set to obtain the target space information includes:
weighting the inner points belonging to the same target in the density-corrected inner point set to obtain the three-dimensional coordinates of the central point of each target;
performing principal component analysis on the interior points belonging to the same target in the density-corrected interior point set to obtain the three principal direction vectors of each target;
and respectively projecting all the interior points in the density-corrected interior point set to three main directions, respectively searching the maximum value and the minimum value of coordinate projection in the three main directions, and limiting the three-dimensional boundary of each target according to the maximum value and the minimum value.
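As a non-limiting illustration of claim 6, the sketch below takes the (weighted) mean of a target's density-corrected inner points as the center point, obtains the three principal direction vectors by principal component analysis of those points, and bounds the target by the minimum and maximum projections onto those directions; the uniform default weighting is an editorial assumption.

    import numpy as np

    def estimate_target_box(inner_points, weights=None):
        # inner_points: (N, 3) density-corrected points belonging to one target.
        if weights is None:
            weights = np.ones(len(inner_points))      # uniform weights (assumption)
        w = weights / weights.sum()

        # Center point: weighted mean of the inner points.
        center = (inner_points * w[:, None]).sum(axis=0)

        # Principal component analysis: eigenvectors of the weighted covariance matrix
        # give the three principal direction vectors of the target.
        centered = inner_points - center
        cov = np.cov(centered.T, aweights=weights)
        eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
        directions = eigvecs[:, ::-1]                 # order by decreasing variance

        # 3D boundary: min/max of the point projections along each principal direction.
        proj = centered @ directions                  # (N, 3) coordinates in the PCA frame
        bounds = np.stack([proj.min(axis=0), proj.max(axis=0)], axis=1)  # (3, 2) min/max
        return center, directions, bounds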
7. The method of any of claims 1-4, wherein the determining speed information according to the target space information and the visual output information comprises:
determining the time difference of adjacent frames according to the visual output information;
and determining the speed information according to the target space information and the time difference of the adjacent frames.
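As a non-limiting worked example of claim 7, the speed of a tracked target can be taken as the displacement of its fused 3D center point between adjacent frames divided by the time difference of those frames; the timestamp source and argument names below are editorial assumptions.

    import numpy as np

    def estimate_speed(center_curr, center_prev, t_curr, t_prev):
        # center_*: 3D center coordinates (m) of the same tracked target in the
        # current and previous frames; t_*: frame timestamps in seconds.
        dt = t_curr - t_prev
        if dt <= 0:
            return None                      # timestamps must be strictly increasing
        displacement = np.linalg.norm(np.asarray(center_curr) - np.asarray(center_prev))
        return displacement / dt             # speed in metres per second

    # Example: 1.2 m of displacement over 0.05 s gives 24 m/s (86.4 km/h).
    v = estimate_speed((10.0, 0.0, 0.8), (8.8, 0.0, 0.8), 0.100, 0.050)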
8. A roadside viewing angle speed measurement system, comprising:
the acquisition module is used for acquiring image data acquired by the camera device and radar point cloud data acquired by the radar sensor; the camera device and the radar sensor are time-synchronized;
the radar module is used for filtering and reducing the dimension of the radar point cloud data to obtain processed point cloud data;
the vision module is used for carrying out target detection and tracking processing on the image data to obtain vision output information;
the fusion module is used for carrying out fusion processing on the processed point cloud data and the visual output information to obtain target space information;
the speed measuring module is used for determining speed information according to the target space information and the visual output information;
and the output module is used for carrying out data structuring on the speed information, the target space information and the visual output information and determining output data.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the roadside viewing angle speed measurement method according to any one of claims 1-7 when executing the program.
10. A readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the roadside viewing angle speed measurement method according to any one of claims 1-7.
CN202210039036.5A 2022-01-13 2022-01-13 Road side visual angle speed measurement method and system, electronic equipment and storage medium Pending CN114545434A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210039036.5A CN114545434A (en) 2022-01-13 2022-01-13 Road side visual angle speed measurement method and system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210039036.5A CN114545434A (en) 2022-01-13 2022-01-13 Road side visual angle speed measurement method and system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114545434A true CN114545434A (en) 2022-05-27

Family

ID=81671549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210039036.5A Pending CN114545434A (en) 2022-01-13 2022-01-13 Road side visual angle speed measurement method and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114545434A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion
CN110942449A (en) * 2019-10-30 2020-03-31 华南理工大学 Vehicle detection method based on laser and vision fusion
CN113255504A (en) * 2021-05-19 2021-08-13 燕山大学 Road side visual angle beyond visual range global fusion perception system based on deep learning
CN113484875A (en) * 2021-07-30 2021-10-08 燕山大学 Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHENG SHAOWU; LI WEIHUA; HU JIANYAO: "Vehicle Detection in Traffic Environment Based on Fusion of Laser Point Cloud and Image Information", Chinese Journal of Scientific Instrument, no. 12, 15 December 2019 (2019-12-15) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023240805A1 (en) * 2022-06-13 2023-12-21 之江实验室 Connected vehicle overspeed early warning method and system based on filtering correction
CN116736322A (en) * 2023-08-15 2023-09-12 山东科技大学 Speed prediction method integrating camera image and airborne laser radar point cloud data
CN116736322B (en) * 2023-08-15 2023-10-20 山东科技大学 Speed prediction method integrating camera image and airborne laser radar point cloud data

Similar Documents

Publication Publication Date Title
CN110658531B (en) Dynamic target tracking method for port automatic driving vehicle
US11113959B2 (en) Crowdsourced detection, identification and sharing of hazardous road objects in HD maps
CN111656136B (en) Vehicle positioning system using lidar
US10580164B2 (en) Automatic camera calibration
CN113554698B (en) Vehicle pose information generation method and device, electronic equipment and storage medium
Stein et al. A robust method for computing vehicle ego-motion
Siegemund et al. A temporal filter approach for detection and reconstruction of curbs and road surfaces based on conditional random fields
Zhu et al. Reliable detection of overtaking vehicles using robust information fusion
CN113487759B (en) Parking patrol method and device, mobile patrol equipment and patrol system
CN111391823A (en) Multilayer map making method for automatic parking scene
CN114545434A (en) Road side visual angle speed measurement method and system, electronic equipment and storage medium
US9098750B2 (en) Gradient estimation apparatus, gradient estimation method, and gradient estimation program
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
WO2020154990A1 (en) Target object motion state detection method and device, and storage medium
CN112740225B (en) Method and device for determining road surface elements
Pantilie et al. Real-time obstacle detection using dense stereo vision and dense optical flow
CN110794406A (en) Multi-source sensor data fusion system and method
CN113261010A (en) Object trajectory-based multi-modal sensor fusion method for cross-domain correspondence
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
Hussain et al. Multiple objects tracking using radar for autonomous driving
US20230087261A1 (en) Three-dimensional target estimation using keypoints
Kohara et al. Obstacle detection based on occupancy grid maps using stereovision system
WO2022099620A1 (en) Three-dimensional point cloud segmentation method and apparatus, and mobile platform
CN115512542B (en) Track restoration method and system considering shielding based on roadside laser radar
Meuter et al. 3D traffic sign tracking using a particle filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination