CN113850237B - Internet vehicle target detection and evaluation method and system based on video and track data - Google Patents


Info

Publication number
CN113850237B
CN113850237B (application CN202111430712.3A)
Authority
CN
China
Prior art keywords: acquiring, vehicle, detected, data, target
Prior art date
Legal status
Active
Application number
CN202111430712.3A
Other languages
Chinese (zh)
Other versions
CN113850237A (en)
Inventor
施丘岭
何书贤
杨哲
任学锋
Current Assignee
Ismartways Wuhan Technology Co ltd
Original Assignee
Ismartways Wuhan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Ismartways Wuhan Technology Co ltd filed Critical Ismartways Wuhan Technology Co ltd
Priority to CN202111430712.3A
Publication of CN113850237A
Application granted
Publication of CN113850237B
Status: Active

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/017 - Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G1/0175 - Identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Abstract

The invention provides a method and a system for detecting and evaluating networked-vehicle targets based on video and trajectory data. The method comprises the following steps: acquiring video data of multiple vehicles, and processing the video data with a target detection algorithm to acquire detection results of the system to be tested for the multiple target vehicles that meet the required recognition accuracy; acquiring trajectory data of the multiple target vehicles; acquiring the perception range of the system to be tested from the acquired detection results and trajectory data; acquiring the perception delay of the system to be tested; and analyzing the performance of the target detection algorithm of the system to be tested from the acquired perception range and perception delay. By obtaining the perception range of the system to be tested, the method determines the range within which its target detection algorithm is accurate, and by obtaining the perception delay of the system to be tested, it evaluates the performance of that target detection algorithm.

Description

Internet vehicle target detection and evaluation method and system based on video and track data
Technical Field
The invention relates to the technical field of internet vehicle target detection algorithms, in particular to a method and a system for detecting and evaluating internet vehicle targets based on video and track data.
Background
Video multi-target tracking is a research hotspot in the field of computer vision, with promising applications in video surveillance, automatic driving, and other fields. Owing to occlusion, illumination changes, camera movement, and similar factors, the trajectory data produced by a tracking algorithm may contain errors; researchers need to investigate and analyze the erroneous trajectory data in order to improve the tracking algorithm, and ultimately compare the trajectory data with the vehicle's true positioning information to evaluate the algorithm's performance. However, no mature trajectory-analysis technology system yet exists. It is therefore both necessary and urgent to research trajectory-analysis techniques for video multi-target tracking.
When target detection is studied on video data alone, the vehicle type and the number of queued vehicles within the camera's visual range can be identified accurately, and queuing at a signalized intersection can be judged in a simple way. However, the camera's detection range depends on objective factors such as shooting angle, illumination, and camera resolution, so the video detection range is limited. When a vehicle is close to the camera, its category can be identified accurately; when the vehicle is far from the camera, the recognition effect degrades and missed and false recognitions occur, which affects the accuracy and precision of the vehicle tracking algorithm.
Target perception has also been studied on trajectory data. The real-time travel trajectory of a vehicle is a big-data product that has emerged with the widespread deployment of electronic-police and gate (checkpoint) equipment in traffic systems. When a vehicle passes the detection equipment installed at an urban intersection, its passage is recorded. Such vehicle-passage data contains rich spatio-temporal information (such as vehicle ID, camera orientation, shooting time, and driving direction); analyzing it can reveal user travel patterns and support research on microscopic traffic-flow information services, thereby improving urban traffic planning and management, reducing road congestion, and raising the operating efficiency of the traffic system. In some implementations of obtaining the vehicle travel trajectory, the trajectory is collected by a GPS (Global Positioning System) receiver fitted to the vehicle, and a spectral-clustering method then splits the vehicle travel chain automatically from the spatio-temporal characteristics of the checkpoint data, identifying the starting point, intermediate passage points, and end point of a single vehicle's travel trajectory.
With the development of intelligent networked vehicles, the means of acquiring trajectory data have improved: ground-truth vehicle trajectory data can be collected by On-Board Units (OBUs), avoiding the need to extract the travel data of a specific vehicle from complex checkpoint data. Such trajectory data is continuous and high-frequency, and its acquisition range can be adjusted as needed. However, whether trajectories are collected by a GPS system or by OBU equipment on a single vehicle, trajectory loss, drift, and similar problems caused by signal interruption and occlusion cannot be avoided; once the trajectory data suffers from loss or drift, its usefulness is greatly reduced.
Disclosure of Invention
The invention aims to overcome the technical problems that the camera's recognition effect degrades and false recognitions occur when a vehicle is far from the camera, and that the precision of the adopted trajectory data is greatly reduced by the unavoidable trajectory loss and drift caused by signal interruption and occlusion. To that end, it provides a method and a system for detecting and evaluating networked-vehicle targets based on video and trajectory data.
In a first aspect, the invention provides a method for detecting and evaluating networked-vehicle targets based on video and trajectory data, which comprises the following steps:
acquiring video data of multiple vehicles, and processing the video data with a target detection algorithm to acquire detection results of the system to be tested for the multiple target vehicles that meet the recognition accuracy;
acquiring track data of the multi-target vehicle;
acquiring a sensing range of the system to be detected according to the acquired detection result and the acquired track data of the system to be detected;
acquiring the perception time delay of a system to be tested;
and analyzing the performance of a target detection algorithm of the system to be detected according to the acquired sensing range and the sensing time delay.
According to the first aspect, in a first possible implementation manner of the first aspect, the step of "obtaining a sensing range of the system under test according to the obtained detection result and the obtained trajectory data of the system under test" includes the following steps:
acquiring multidirectional farthest sensing distances of the system to be detected according to the acquired detection results and track data of the system to be detected;
and acquiring the sensing range of the system to be tested according to the acquired multidirectional farthest sensing distance of the system to be tested.
According to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the step of "obtaining the farthest sensing distance of the system to be tested in multiple directions according to the obtained detection result and the track data of the system to be tested" specifically includes the following steps:
controlling a plurality of vehicles to sequentially drive in from different directions and different lanes and pass through the intersection of the test area to obtain video data;
performing data processing on the acquired video data through a target identification algorithm to acquire positioning data of the multi-target vehicle and the system to be detected;
acquiring track data of the multi-target vehicle;
comparing the positioning data of the system to be tested with the trajectory data, and acquiring first positioning information from the first frame of a run of consecutive frames in which the system to be tested meets the precision requirement;
controlling the vehicle to approach or leave the camera, and acquiring second positioning information of the vehicle when the vehicle leaves the detection range of the camera;
and acquiring the farthest sensing distance of the system to be tested in the current position according to the first positioning information and the second positioning information.
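The comparison step above, which locates the first frame of a run of consecutive frames meeting the precision requirement, can be sketched as follows. The error threshold, run length, and function name are illustrative assumptions, not values given in the patent:

```python
# Hypothetical sketch: find the first frame of a run of consecutive frames in
# which the system to be tested meets the positioning-accuracy requirement.
# `errors` holds the per-frame distance (metres) between the detection result
# and the ground-truth OBU trajectory.

def first_accurate_frame(errors, threshold=1.0, run_length=5):
    """Return the index of the first frame starting a run of `run_length`
    consecutive frames whose positioning error is below `threshold`,
    or None if no such run exists."""
    run = 0
    for i, err in enumerate(errors):
        run = run + 1 if err < threshold else 0
        if run == run_length:
            return i - run_length + 1  # index of the run's first frame
    return None
```

The first positioning information would then be taken from the frame at the returned index.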
According to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the step of obtaining the farthest sensing distance of the system to be tested at the current position according to the first positioning information and the second positioning information includes the following steps:
converting the first positioning information (J1, W1) and the second positioning information (J2, W2) into coordinates (x1, y1) and (x2, y2) of a plane coordinate system according to formulas (1) and (2):
x1 = r·J1·cos(W1), y1 = r·W1   formula (1);
x2 = r·J2·cos(W2), y2 = r·W2   formula (2);
where (J1, W1) is the GPS longitude and latitude of the first positioning information, (J2, W2) is the GPS longitude and latitude of the second positioning information, r is the radius of the earth, (x1, y1) are the plane coordinates converted from the first positioning information, and (x2, y2) are the plane coordinates converted from the second positioning information;
transforming the plane coordinates converted from the first and second positioning information according to the Euclidean distance formula (3), to obtain the farthest sensing distance D of the system to be tested at the current position:
D = √((x1 - x2)² + (y1 - y2)²)   formula (3).
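The published text renders formulas (1) to (3) as images, so the sketch below uses a standard equirectangular approximation consistent with the surrounding description (GPS longitude/latitude in radians, earth radius r, Euclidean distance in the plane); it is an assumed realization, not the patent's exact projection:

```python
import math

# Sketch of formulas (1)-(3): convert two GPS fixes to plane coordinates via
# an equirectangular approximation (x = r * lon * cos(lat), y = r * lat, with
# angles in radians), then take their Euclidean distance as the farthest
# sensing distance D.

EARTH_RADIUS_M = 6_371_000  # mean earth radius r, in metres

def to_plane(lon_deg, lat_deg, r=EARTH_RADIUS_M):
    """Formulas (1) and (2): GPS fix (degrees) -> plane coordinates (metres)."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return r * lon * math.cos(lat), r * lat

def farthest_sensing_distance(p1, p2):
    """Formula (3): Euclidean distance between two (lon, lat) fixes, metres."""
    x1, y1 = to_plane(*p1)
    x2, y2 = to_plane(*p2)
    return math.hypot(x1 - x2, y1 - y2)
```

For two fixes at latitude 30° separated by 0.001° of longitude, the distance comes out near 96 m, matching the expected shrinking of a degree of longitude away from the equator.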
According to the first aspect, in a fourth possible implementation manner of the first aspect, the step of "acquiring the perception delay of the system to be tested" specifically includes the following steps:
obtaining the output time t1 of the detection result of the system to be tested;
acquiring the watermark time t0 of the video data acquisition equipment;
obtaining the device response time t2 between the video data acquisition equipment and the trajectory data acquisition equipment;
acquiring the data preprocessing time t3 of the system to be tested;
performing parameter transformation on the acquired t0, t1, t2, and t3 according to formula (4) to obtain the perception delay t of the system to be tested:
t = t1 - t0 - t2 - t3   formula (4).
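The delay computation of formula (4) can be sketched as a one-line helper; treating all four times as seconds on a shared clock is an assumption of this sketch:

```python
# Minimal sketch of formula (4): the perception delay of the system to be
# tested is the detection-result output time minus the video watermark time,
# the device response time, and the data preprocessing time.

def perception_delay(t1, t0, t2, t3):
    """t = t1 - t0 - t2 - t3 (formula (4)), all times in seconds."""
    return t1 - t0 - t2 - t3
```

For example, an output time 0.5 s after the watermark, with 0.20 s of device response and 0.15 s of preprocessing, leaves a perception delay of 0.15 s.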
According to a fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the step of obtaining the device response time t2 between the video data acquisition equipment and the trajectory data acquisition equipment specifically includes the following steps:
defining two reference lines in the detection area with the target vehicle as a common starting point, the two reference lines being mutually perpendicular, their other end points lying respectively within a lane and within the intersection range of the detection area;
acquiring the vehicle positioning information in the detection area and the positioning information of the other end points of the two reference lines;
controlling the vehicle to drive across the reference line at a speed meeting the speed-limit regulation, and acquiring, through the trajectory data acquisition equipment, the first time t21 at which the vehicle center point triggers the reference line;
acquiring, through the video data acquisition equipment, the video data of the vehicle driving across the reference line at a speed meeting the speed-limit regulation;
processing the video data with the target detection algorithm to obtain the detection result of the system to be tested, and obtaining from the detection result the second time t22 at which the vehicle triggers the reference line;
performing parameter transformation on the acquired t21 and t22 according to formula (6) to obtain the response time difference t20:
t20 = t22 - t21   formula (6);
obtaining the response time differences t20 of a plurality of vehicles in the test area, and acquiring the traffic flow state of the current vehicle;
acquiring the device response time t2 of the system to be tested according to the traffic flow state of the current vehicle.
According to a fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, the step of acquiring the device response time t2 of the system to be tested according to the traffic flow state of the current vehicle specifically includes the following steps:
comparing the variance of the response time differences with a variance threshold to obtain the traffic flow state of the current vehicle;
according to the traffic flow state of the current vehicle and a mapping table of traffic flow states to device response times, acquiring the device response time t2 of the system to be tested corresponding to the current traffic flow state.
According to a fourth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, the step of acquiring the data preprocessing time t3 of the system to be tested specifically includes the following steps:
acquiring the data acquisition start time t31 at which the system to be tested begins acquiring a new frame of data;
acquiring the data preprocessing completion time t32 at which the system to be tested has acquired the frame data and completed the subsequent processing to obtain structured information;
performing parameter transformation on t31 and t32 according to formula (7) to obtain the data preprocessing time t3 of the system to be tested:
t3 = t32 - t31   formula (7).
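Formula (7) amounts to timestamping a frame's acquisition start and the completion of its structuring; a minimal sketch follows, where `preprocess` is a hypothetical stand-in for the system's actual frame-processing step:

```python
import time

# Sketch of formula (7): t3 = t32 - t31, measured around the preprocessing of
# one frame with a monotonic clock (immune to wall-clock adjustments).

def measure_preprocessing_time(frame, preprocess):
    t31 = time.monotonic()           # data acquisition start time t31
    structured = preprocess(frame)   # stand-in for detection/structuring work
    t32 = time.monotonic()           # preprocessing completion time t32
    return structured, t32 - t31     # structured info, and t3 = t32 - t31
```

A monotonic clock is used so the interval stays valid even if the system clock is stepped during the measurement.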
In a second aspect, the present invention provides a system for detecting and evaluating internet vehicle targets based on video and trajectory data, which is characterized by comprising:
the perception range acquisition module is used for acquiring video data of multiple vehicles, carrying out data processing on the video data of the multiple vehicles through a target detection algorithm and acquiring detection results of the multiple target vehicles and the system to be detected, wherein the detection results meet the identification precision; acquiring track data of the multi-target vehicle; acquiring a sensing range of the system to be detected according to the acquired detection result and the acquired track data of the system to be detected;
the sensing time delay acquisition module is used for acquiring the sensing time delay of the system to be tested;
and the evaluation module is in communication connection with the perception range acquisition module and the perception time delay acquisition module and is used for analyzing the performance of the target detection algorithm of the system to be detected according to the acquired perception range and the acquired perception time delay.
According to the second aspect, in a first possible implementation manner of the second aspect, the sensing range obtaining module includes:
the detection result acquisition module of the system to be tested is used for acquiring video data of multiple vehicles and performing data processing on the video data through a target detection algorithm to acquire detection results of the system to be tested for the multiple vehicles that meet the recognition accuracy;
the track data acquisition module is used for acquiring track data of the multi-target vehicle;
the farthest sensing distance acquisition module is in communication connection with the detection result acquisition module and the track data acquisition module of the system to be detected and is used for acquiring the multidirectional farthest sensing distance of the system to be detected according to the acquired detection result and track data of the system to be detected;
and the perception range acquisition module is in communication connection with the farthest perception distance acquisition module and is used for acquiring the perception range of the system to be detected according to the acquired multidirectional farthest perception distance of the system to be detected.
Compared with the prior art, the invention has the following advantages:
according to the method for detecting and evaluating the internet vehicle target based on the video and the track data, the high-precision sensing range of the target detection algorithm of the system to be detected is obtained by obtaining the sensing range of the system to be detected, and the performance of the target detection algorithm of the system to be detected is evaluated by obtaining the sensing time delay of the system to be detected.
Drawings
Fig. 1 is a schematic flow chart of a method for detecting and evaluating a target of a networked vehicle based on video and track data according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another method of the internet vehicle target detection and evaluation method based on video and track data according to the embodiment of the present invention;
fig. 3 is a scene schematic diagram of a vehicle trigger reference line in the online vehicle target detection and evaluation method based on video and track data according to the embodiment of the present invention;
fig. 4 is a functional block diagram of a system of a method for detecting and evaluating a target of a networked vehicle based on video and track data according to an embodiment of the present invention;
fig. 5 is another functional block diagram of the system of the online vehicle target detection and evaluation method based on video and track data according to the embodiment of the present invention.
100. A perception range acquisition module; 110. a detection result acquisition unit of the system to be detected; 120. a trajectory data acquisition unit; 130. a farthest sensing distance obtaining unit; 140. a sensing range acquisition unit; 200. a sensing time delay obtaining module; 300. and an evaluation module.
Detailed Description
Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the specific embodiments, it will be understood that they are not intended to limit the invention to the embodiments described. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein may be implemented by any functional block or functional arrangement, and that any functional block or functional arrangement may be implemented as a physical entity or a logical entity, or a combination of both.
In order that those skilled in the art will better understand the present invention, the following detailed description of the invention is provided in conjunction with the accompanying drawings and the detailed description of the invention.
Note that: the example to be described next is only a specific example, and does not limit the embodiments of the present invention necessarily to the following specific steps, values, conditions, data, orders, and the like. Those skilled in the art can, upon reading this specification, utilize the concepts of the present invention to construct more embodiments than those specifically described herein.
Referring to fig. 1, the present invention provides a method for detecting and evaluating a target of a networked vehicle based on video and track data, including the following steps:
s110, acquiring video data of multiple vehicles, and performing data processing on the video data of the multiple vehicles through a target detection algorithm to acquire detection results of the multiple target vehicles and a system to be detected, wherein the detection results meet the identification precision;
s120, acquiring track data of the multi-target vehicle;
s130, acquiring a sensing range of the system to be detected according to the acquired detection result and the acquired track data of the system to be detected;
s200, acquiring the perception time delay of the system to be tested;
and S300, analyzing the performance of the target detection algorithm of the system to be detected according to the acquired sensing range and sensing time delay.
In an embodiment, the video data is acquired by a video data acquisition device, and the track data is acquired by a track data acquisition device.
In a more specific embodiment, the video data is acquired by a camera in the detection area, and the trajectory data is acquired by an OBU device on the vehicle.
The invention provides a networked-vehicle target detection evaluation method based on video and trajectory data. By obtaining the perception range of the system to be tested, the high-precision perception range of its target detection algorithm is determined, and by obtaining the perception delay of the system to be tested, the performance of that target detection algorithm is evaluated.
The watermark time of video data collected by a camera has second-level precision, whereas the vehicle trajectory data collected by OBU equipment can reach millisecond-level precision; fusing and analyzing the video and trajectory data therefore enables millisecond-level perception and response and improves the perception and response precision of the system to be tested.
By identifying the tracked vehicles in the video data with a target tracking and detection algorithm and selecting for counting output only the targets that meet the precision requirement, the method avoids targets being lost or counted repeatedly due to the ID Switch phenomenon during target tracking, and thus has general applicability.
The system to be detected is a system for carrying out target identification, target tracking and target detection on video data, the target identification is to identify multi-target vehicles on the video data, and the target tracking and the target detection are to carry out target tracking on the multi-target vehicles on the video data and acquire positioning information of the multi-target vehicles through a target detection algorithm. The system to be tested is an object evaluated through the perception range and the perception time delay.
In an embodiment, the step of acquiring the sensing range of the system to be tested according to the acquired detection result and the acquired trajectory data of the system to be tested specifically includes the following steps:
s131, acquiring multidirectional farthest sensing distances of the system to be detected according to the acquired detection results and track data of the system to be detected;
s132, acquiring the sensing range of the system to be tested according to the acquired multidirectional farthest sensing distance of the system to be tested.
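Step S132 turns the per-direction farthest sensing distances into a perception range. One plausible realization, not spelled out in the patent, is to treat the farthest points as vertices of a polygon around the camera and compute its area with the shoelace formula; the bearing/distance pairs below are illustrative assumptions:

```python
import math

# Hypothetical sketch of step S132: approximate the perception range as the
# polygon spanned by the farthest sensing points measured in several
# directions (bearing in degrees, distance in metres, sorted by bearing),
# and compute its area with the shoelace formula.

def perception_range_area(direction_distances):
    """`direction_distances`: list of (bearing_deg, distance_m) pairs."""
    pts = [(d * math.cos(math.radians(b)), d * math.sin(math.radians(b)))
           for b, d in direction_distances]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1   # shoelace formula term
    return abs(area) / 2.0
```

Four equal 100 m distances at bearings 0°, 90°, 180°, and 270° give a square of area 20 000 m², a quick sanity check on the formula.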
In an embodiment, the step of "S131, obtaining the farthest sensing distance of the system to be tested in multiple directions according to the obtained detection result and the track data of the system to be tested" includes the following steps:
s1311, controlling multiple vehicles to sequentially drive in from different directions and different lanes and pass through intersections of a test area to obtain video data;
s1312, carrying out data processing on the acquired video data through a target identification algorithm to acquire positioning data of the multi-target vehicle and the system to be detected;
s1313, acquiring track data of the multi-target vehicle;
s1314, comparing the positioning data of the system to be tested with the trajectory data, and acquiring first positioning information from the first frame of a run of consecutive frames in which the system to be tested meets the precision requirement;
s1315, controlling the vehicle to approach or leave the camera, and obtaining, through the OBU device, second positioning information of the vehicle at the moment it leaves the detection range of the camera;
s1316, according to the first positioning information and the second positioning information, obtaining the farthest sensing distance of the current position to-be-detected system.
Wherein the detection range of the camera is limited by the performance of the camera.
In an embodiment, the step S1316 of obtaining the farthest sensing distance of the system to be tested at the current position according to the first positioning information and the second positioning information includes the following steps:
converting the first positioning information (J1, W1) and the second positioning information (J2, W2) into coordinates (x1, y1) and (x2, y2) of a plane coordinate system according to formula (1) and formula (2):
x1 = r·J1·cos(W1), y1 = r·W1   formula (1);
x2 = r·J2·cos(W2), y2 = r·W2   formula (2);
where (J1, W1) is the GPS longitude and latitude of the first positioning information, (J2, W2) is the GPS longitude and latitude of the second positioning information, r is the radius of the earth, (x1, y1) are the plane coordinates converted from the first positioning information, and (x2, y2) are the plane coordinates converted from the second positioning information;
transforming the plane coordinates converted from the first and second positioning information according to the Euclidean distance formula (3), to obtain the farthest sensing distance D of the system to be tested at the current position:
D = √((x1 - x2)² + (y1 - y2)²)   formula (3).
Because the detection-result output time of the system to be tested includes the time difference between the video data acquisition equipment and the trajectory data acquisition equipment, the data preprocessing time of the system to be tested, and the watermark time of the video data, the perception delay can be obtained from the difference between the detection-result output time and the video watermark time after eliminating the time difference between the two acquisition devices and the data preprocessing time. The step S200 of obtaining the perception delay of the system to be tested therefore specifically includes the following steps:
s210, obtaining the output time t1 of the detection result of the system to be tested;
s220, obtaining the watermark time t0 of the video data acquisition equipment;
s230, acquiring the device response time t2 between the video data acquisition equipment and the trajectory data acquisition equipment;
s240, acquiring the data preprocessing time t3 of the system to be tested;
s250, performing parameter transformation on the acquired t0, t1, t2, and t3 according to formula (4) to obtain the perception delay t of the system to be tested:
t = t1 - t0 - t2 - t3   formula (4).
In one embodiment, the step S230 of obtaining the device response time t2 between the video data acquisition equipment and the trajectory data acquisition equipment specifically includes the following steps:
s231, defining two reference lines in the detection area with the target vehicle as a common starting point, the two reference lines being mutually perpendicular; one reference line lies within a lane, while the other starts at the target vehicle's position and has its other end point within the intersection range of the detection area;
s232, acquiring the vehicle positioning information in the detection area and the positioning information of the other end points of the two reference lines;
s233, controlling the vehicle to drive across the reference line at a speed meeting the speed-limit regulation, and acquiring, through the trajectory data acquisition equipment, the first time t21 at which the vehicle center point triggers the reference line;
s234, acquiring, through the video data acquisition equipment, the video data of the vehicle driving across the reference line at a speed meeting the speed-limit regulation;
s235, processing the video data with the target detection algorithm to obtain the detection result of the system to be tested, and obtaining from the detection result the second time t22 at which the vehicle triggers the reference line;
s236, performing parameter transformation on the acquired t21 and t22 according to formula (6) to obtain the response time difference t20:
t20 = t22 - t21   formula (6);
s237, obtaining, through multiple tests, the response time differences t20 of a plurality of vehicles in the test area, and acquiring the traffic flow state of the current vehicle from the multiple response time differences t20;
s238, acquiring the device response time t2 of the system to be tested according to the current traffic flow state of the vehicle.
In one embodiment, a vehicle driving along the lane past the reference line is defined as the vehicle triggering the connecting line P1P2.
In a more specific embodiment, two reference lines are defined in the detection area perpendicular to the lane line: one lies in the lane and the other within the intersection range, with the target vehicle position as their common end point. The longitude and latitude of the other end point of each reference line and of the vehicle center point are sampled discretely; the center point of a traffic participant is the geometric center of the smallest cube that can enclose the traffic participant. The vehicle drives across the reference line at a speed meeting the speed limit of the detection-area site, and the trajectory data acquisition device of the target vehicle records the first time t21 at which the vehicle center point triggers the reference line. The video data acquisition device captures video data of the target vehicle driving across the reference line, and this video data is processed by the target detection algorithm to obtain the detection result of the system under test. The second time t22 at which the target vehicle triggers the reference line is then located in the detection result, and the difference between t22 and t21 is counted as the response time of the system under test to the target vehicle in the current traffic state.
In an embodiment, because the acquisition frequency of the video data acquisition device is low, the detection result of the system under test may contain no frame in which the target vehicle exactly triggers the reference line. In this case, the frame of data closest to the reference-line trigger time of the target vehicle may be selected, and t20 adjusted according to the target vehicle position given by that frame of video data.
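A minimal sketch of this frame selection (function name and timestamps are illustrative): pick the video frame whose timestamp is nearest to the trajectory trigger time t21, from which t20 can then be corrected:

```python
def nearest_frame(frame_times, t21):
    """Return the frame timestamp closest to the trajectory trigger time t21."""
    return min(frame_times, key=lambda t: abs(t - t21))

frames = [10.0, 10.5, 11.0, 11.5]   # frame timestamps at a low capture rate, s
print(nearest_frame(frames, 10.7))  # 10.5
```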
In one embodiment, the target vehicle travels once in each driving direction within the detection area, for example from 8 directions, and the response time of the system under test to the target vehicle is measured for each direction.
In one embodiment, as shown in fig. 3, the target vehicle and the other end points of the two reference lines enclose a triangle: the target vehicle is Pv, and the other end points of the two reference lines are the calibration points P1 and P2. The area S of the triangle P1P2Pv is obtained with Heron's formula, and the target vehicle is considered to trigger the reference line when S is minimal.
Here, "S minimal" means that S falls below a preset minimum area threshold; in the ideal case the minimum of S is 0.
The GPS longitude-latitude coordinates are first converted to plane coordinates, and the triangle area is then computed with Heron's formula:

Xi = r·cos(yi)·cos(xi), Yi = r·cos(yi)·sin(xi), Zi = r·sin(yi), for i = 1, 2, v;
a = |P1P2|, b = |P2Pv|, c = |P1Pv|;
p = (a + b + c)/2;
S = sqrt(p(p - a)(p - b)(p - c));

in the formulas, (x1, y1) are the GPS-located longitude-latitude coordinates of the calibration point P1, (x2, y2) are those of the calibration point P2, (xv, yv) are those of the vehicle point Pv, r is the radius of the earth, (X1, Y1, Z1), (X2, Y2, Z2) and (Xv, Yv, Zv) are the corresponding plane-coordinate-system coordinates converted from the GPS longitude-latitude coordinates, a, b and c are the side lengths of the triangle P1P2Pv computed as Euclidean distances between the converted points, p is half the perimeter of the triangle P1P2Pv, and S is the area of the triangle P1P2Pv.
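The trigger test described above can be sketched as follows. The spherical-to-Cartesian conversion and the threshold value are assumptions (the patent's own formula images are not reproduced here, and the threshold is field-calibrated); the coordinates in the demo are illustrative:

```python
import math

def to_plane(lon_deg, lat_deg, r=6_371_000.0):
    """Convert a GPS longitude/latitude fix to plane (Cartesian) coordinates."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def heron_area(p1, p2, pv):
    """Area S of triangle P1P2Pv via Heron's formula on its side lengths."""
    a, b, c = math.dist(p1, p2), math.dist(p2, pv), math.dist(p1, pv)
    p = (a + b + c) / 2.0  # half the perimeter of triangle P1P2Pv
    return math.sqrt(max(p * (p - a) * (p - b) * (p - c), 0.0))

def triggers_line(p1_gps, p2_gps, pv_gps, s_min=50.0):
    """True when the vehicle point lies (nearly) on the line P1P2,
    i.e. the triangle area falls below the preset threshold s_min (m^2)."""
    p1, p2, pv = (to_plane(*g) for g in (p1_gps, p2_gps, pv_gps))
    return heron_area(p1, p2, pv) <= s_min

# Vehicle between the calibration points, on the line -> small S -> trigger.
print(triggers_line((114.300, 30.500), (114.300, 30.510), (114.300, 30.505)))
```

Note that even for a vehicle exactly on the line in longitude/latitude, the converted 3D points lie on a great-circle arc rather than a straight chord, so S is small but nonzero; the nonzero threshold s_min absorbs this, which matches the patent's remark that S is 0 only in the ideal state.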
In an embodiment, the step S238 of obtaining the device response time t2 of the system under test according to the current traffic flow state specifically comprises the following steps:
S2381, acquiring the response time differences of a plurality of vehicles in the test area and computing the response time variance;
S2382, comparing the response time variance with a variance threshold to obtain the current traffic flow state;
S2383, obtaining, according to the current traffic flow state and the mapping table of traffic flow state to device response time, the device response time t2 of the system under test corresponding to the current traffic flow state.
The step "S2382, comparing the response time variance with the variance threshold to obtain the current traffic flow state" specifically comprises the following steps:
S23821, when the response time variance is larger than the variance threshold, judging that the current traffic is in the peak state;
S23822, when the response time variance is not larger than the variance threshold, judging that the current traffic is in the off-peak state.
In an embodiment, the step S2383 of obtaining the device response time t2 of the system under test from the current traffic flow state and the mapping table of traffic flow state to device response time specifically comprises:
S23831, when the current traffic is in the off-peak state, taking t2a from Table 1 as the device response time t2 of the system under test;
S23832, when the current traffic is in the peak state, taking t2b from Table 1 as the device response time t2 of the system under test.
Table 1 Mapping of traffic flow state to device response time

Traffic flow state | Off-peak | Peak
Response time      | t2a      | t2b

In the table, t2b > t2a.
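Steps S2381 through S23832 can be sketched as below. The variance threshold and the t2a/t2b values are illustrative assumptions; the patent only fixes the ordering t2b > t2a:

```python
from statistics import pvariance

# Table 1 lookup with illustrative values satisfying t2b > t2a (seconds).
DEVICE_RESPONSE_TIME = {"off-peak": 0.08, "peak": 0.15}

def traffic_state(response_diffs, variance_threshold):
    """Peak when the variance of the measured t20 samples exceeds the threshold."""
    return "peak" if pvariance(response_diffs) > variance_threshold else "off-peak"

def device_response_time(response_diffs, variance_threshold):
    """Device response time t2 looked up from the traffic state via Table 1."""
    return DEVICE_RESPONSE_TIME[traffic_state(response_diffs, variance_threshold)]

diffs = [0.08, 0.09, 0.08, 0.10]           # measured t20 samples, seconds
print(traffic_state(diffs, 0.001))         # off-peak
print(device_response_time(diffs, 0.001))  # 0.08
```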
In an embodiment, the step S240 of obtaining the data preprocessing time t3 of the system under test specifically comprises the following steps:
S241, acquiring the data-acquisition start time t31 at which the system under test starts acquiring a new frame of data;
S242, acquiring the data-preprocessing completion time t32 at which the system under test has acquired the frame and completed the subsequent processing to obtain the structured information;
S243, transforming t31 and t32 according to formula (7) to obtain the data preprocessing time t3 of the system under test:
t3 = t32 - t31    formula (7).
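A sketch of measuring t3 with a monotonic clock; the `preprocess` callable is a hypothetical stand-in for the system's actual acquisition-plus-structuring pipeline:

```python
import time

def timed_preprocess(frame, preprocess):
    """Run the preprocessing pipeline on one frame and measure
    t3 = t32 - t31 (formula (7))."""
    t31 = time.monotonic()          # data-acquisition start time
    structured = preprocess(frame)  # produce the structured information
    t32 = time.monotonic()          # data-preprocessing completion time
    return structured, t32 - t31

info, t3 = timed_preprocess([0.2, 0.4, 0.6], lambda f: {"n_targets": len(f)})
print(info, t3 >= 0.0)  # {'n_targets': 3} True
```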
Based on the same inventive concept, please refer to fig. 4, the invention provides a system for detecting and evaluating internet connection targets based on video and track data, comprising:
the perception range acquisition module 100, configured to acquire video data of multiple vehicles, process the video data of the multiple vehicles with a target detection algorithm, and obtain the detection results of the system under test for the multi-target vehicles that meet the identification accuracy; acquire trajectory data of the multi-target vehicles; and obtain the perception range of the system under test according to the acquired detection results and trajectory data;
a sensing time delay obtaining module 200, configured to obtain a sensing time delay of a system to be tested;
and the evaluation module 300 is in communication connection with the sensing range acquisition module and the sensing time delay acquisition module, and is used for analyzing the performance of the target detection algorithm of the system to be detected according to the acquired sensing range and sensing time delay.
In an embodiment, please refer to fig. 5, the sensing range obtaining module includes:
the detection result acquisition unit 110 of the system under test, configured to acquire video data of multiple vehicles and process the video data of the multiple vehicles with a target detection algorithm to obtain the detection results of the system under test for the multi-target vehicles that meet the identification precision;
a trajectory data acquisition unit 120 for acquiring trajectory data of the multi-target vehicle;
a farthest sensing distance obtaining unit 130, communicatively connected to the detection result obtaining unit 110 and the trajectory data obtaining unit 120 of the system under test, for obtaining a multidirectional farthest sensing distance of the system under test according to the obtained detection result and trajectory data of the system under test;
a sensing range obtaining unit 140, communicatively connected to the farthest sensing distance obtaining unit 130, configured to obtain a sensing range of the system to be tested according to the obtained farthest sensing distance of the system to be tested in multiple directions.
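The three-module structure of Figs. 4 and 5 can be outlined as below. Class and method names are assumptions; the bodies are minimal stubs, with the delay module wired to formula (4):

```python
class PerceptionRangeModule:
    """Module 100: sensing range from detection results and trajectory data."""
    def perception_range(self, farthest_distances_by_direction):
        # Union of the farthest sensing distances measured per direction.
        return dict(farthest_distances_by_direction)

class PerceptionDelayModule:
    """Module 200: perception delay t = t1 - t0 - t2 - t3 (formula (4))."""
    def perception_delay(self, t0, t1, t2, t3):
        return t1 - t0 - t2 - t3

class EvaluationModule:
    """Module 300: combines range and delay to judge the detection algorithm."""
    def __init__(self, range_mod, delay_mod):
        self.range_mod, self.delay_mod = range_mod, delay_mod

    def evaluate(self, farthest_distances, t0, t1, t2, t3):
        return {"range_m": self.range_mod.perception_range(farthest_distances),
                "delay_s": self.delay_mod.perception_delay(t0, t1, t2, t3)}

report = EvaluationModule(PerceptionRangeModule(),
                          PerceptionDelayModule()).evaluate(
    {"N": 120.0, "NE": 95.0}, t0=12.50, t1=12.80, t2=0.10, t3=0.05)
print(report["range_m"]["N"])  # 120.0
```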
Based on the same inventive concept, the embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements all or part of the method steps of the above method.
The present invention can implement all or part of the processes of the above methods, and can also instruct related hardware through a computer program; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer readable medium may be suitably increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
Based on the same inventive concept, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program running on the processor, and the processor executes the computer program to implement all or part of the method steps in the method.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the computer device, connecting the various parts of the whole computer device through various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and by invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (for example, a sound playing function or an image playing function); the data storage area may store data created according to use of the device (for example, audio data or video data). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, server, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), servers and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (4)

1. The method for detecting and evaluating the internet vehicle target based on the video and the track data is characterized by comprising the following steps of:
acquiring video data of multiple vehicles, and processing the video data of the multiple vehicles through a target detection algorithm to obtain the detection results of the system to be detected for the multi-target vehicles, wherein the detection results meet the identification precision;
acquiring track data of the multi-target vehicle;
acquiring a sensing range of the system to be detected according to the acquired detection result and the acquired track data of the system to be detected;
acquiring the perception time delay of a system to be tested;
analyzing the performance of a target detection algorithm of the system to be detected according to the acquired sensing range and sensing time delay;
the step of acquiring the perception range of the system to be detected according to the acquired detection result and the acquired track data of the system to be detected specifically comprises the following steps:
acquiring multidirectional farthest sensing distances of the system to be detected according to the acquired detection results and track data of the system to be detected;
acquiring a sensing range of the system to be tested according to the acquired farthest sensing distance of the system to be tested in multiple directions;
the step of obtaining the multidirectional farthest sensing distance of the system to be detected according to the obtained detection result and the track data of the system to be detected specifically comprises the following steps:
controlling a plurality of vehicles to sequentially drive in from different directions and different lanes and pass through the intersection of the test area to obtain video data;
performing data processing on the acquired video data through a target identification algorithm to acquire positioning data of the multi-target vehicle and the system to be detected;
acquiring track data of the multi-target vehicle;
comparing the positioning data and the track data of the system to be tested to obtain first positioning information of a first frame of continuous multi-frame pictures of which the system to be tested meets the precision requirement;
controlling the vehicle to approach or leave the camera, and acquiring second positioning information of the vehicle when the vehicle leaves the detection range of the camera;
acquiring the farthest sensing distance of the current position to-be-detected system according to the first positioning information and the second positioning information;
the step of acquiring the farthest sensing distance of the system to be tested in the current position according to the first positioning information and the second positioning information comprises the following steps:
converting the first positioning information and the second positioning information into coordinates of a plane coordinate system according to formulas (1) and (2):
X1 = r·cos(y1)·cos(x1), Y1 = r·cos(y1)·sin(x1), Z1 = r·sin(y1)    formula (1)
X2 = r·cos(y2)·cos(x2), Y2 = r·cos(y2)·sin(x2), Z2 = r·sin(y2)    formula (2)
in the formulas, (x1, y1) is the GPS-located longitude-latitude information of the first positioning information, (x2, y2) is the GPS-located longitude-latitude information of the second positioning information, r is the radius of the earth, (X1, Y1, Z1) are the coordinates of the first positioning information converted into the plane coordinate system, and (X2, Y2, Z2) are the coordinates of the second positioning information converted into the plane coordinate system;
transforming the plane-coordinate-system coordinates converted from the first positioning information and the second positioning information according to the Euclidean distance formula (3) to obtain the farthest sensing distance D of the system to be tested at the current position:
D = sqrt((X1 - X2)^2 + (Y1 - Y2)^2 + (Z1 - Z2)^2)    formula (3)
the step of obtaining the perception time delay of the system to be tested specifically comprises the following steps:
obtaining the detection-result output time t1 of the system to be detected;
obtaining the watermark time t0 of the video data acquisition device;
obtaining the device response time t2 between the video data acquisition device and the trajectory data acquisition device;
obtaining the data preprocessing time t3 of the system to be tested;
transforming the acquired t0, t1, t2 and t3 according to formula (4) to obtain the perception delay t of the system to be measured:
t = t1 - t0 - t2 - t3    formula (4).
2. The method for detecting and evaluating the internet vehicle target based on the video and track data as claimed in claim 1, wherein the step of obtaining the device response time t2 between the video data acquisition device and the track data acquisition device specifically comprises the following steps:
two reference lines are defined in the detection area by taking a target vehicle as a starting point, the two reference lines are mutually vertical, and the other end points of the two reference lines are respectively positioned in a lane and an intersection range of the detection area;
acquiring vehicle positioning information in the detection area and positioning information of the other end point of the two reference lines;
controlling the vehicle to drive across the reference line at a speed meeting the speed limit, and acquiring, through the trajectory data acquisition device, the first time t21 at which the vehicle center point triggers the reference line;
acquiring, through the video data acquisition device, video data of the vehicle driving across the reference line at a speed meeting the speed limit;
processing the video data with the target detection algorithm to obtain the detection result of the system to be detected, and obtaining from the detection result the second time t22 corresponding to the vehicle triggering the reference line;
transforming the acquired t21 and t22 according to formula (6) to obtain the response time difference t20:
t20 = t22 - t21    formula (6);
obtaining the response time differences t20 of a plurality of vehicles in the test area, and obtaining the current traffic flow state from the obtained response time differences t20;
acquiring the device response time t2 of the system to be tested according to the current traffic flow state.
3. The method for detecting and evaluating the internet vehicle target based on the video and the track data as claimed in claim 2, wherein the step of obtaining the device response time t2 of the system to be tested according to the current traffic flow state specifically comprises the following steps:
obtaining response time differences of a plurality of vehicles in a test area, and obtaining response time variances;
comparing the response time variance with a variance threshold value to obtain the traffic flow state of the current vehicle;
acquiring, according to the current traffic flow state and the mapping table of traffic flow state to device response time, the device response time t2 of the system to be tested corresponding to the current traffic flow state.
4. The method for detecting and evaluating the internet vehicle target based on the video and trajectory data as claimed in claim 1, wherein the step of acquiring the data preprocessing time t3 of the system to be detected specifically comprises the following steps:
acquiring the data-acquisition start time t31 at which the system to be tested starts acquiring a new frame of data;
acquiring the data-preprocessing completion time t32 at which the system to be tested has acquired the frame and completed the subsequent processing to obtain the structured information;
transforming t31 and t32 according to formula (7) to obtain the data preprocessing time t3 of the system to be tested:
t3 = t32 - t31    formula (7).
CN202111430712.3A 2021-11-29 2021-11-29 Internet vehicle target detection and evaluation method and system based on video and track data Active CN113850237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111430712.3A CN113850237B (en) 2021-11-29 2021-11-29 Internet vehicle target detection and evaluation method and system based on video and track data


Publications (2)

Publication Number Publication Date
CN113850237A CN113850237A (en) 2021-12-28
CN113850237B true CN113850237B (en) 2022-02-22

Family

ID=78982221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111430712.3A Active CN113850237B (en) 2021-11-29 2021-11-29 Internet vehicle target detection and evaluation method and system based on video and track data

Country Status (1)

Country Link
CN (1) CN113850237B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114550450A (en) * 2022-02-15 2022-05-27 云控智行科技有限公司 Method and device for verifying perception accuracy of roadside sensing equipment and electronic equipment
CN116824869B (en) * 2023-08-31 2023-11-24 国汽(北京)智能网联汽车研究院有限公司 Vehicle-road cloud integrated traffic fusion perception testing method, device, system and medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8996234B1 (en) * 2011-10-11 2015-03-31 Lytx, Inc. Driver performance determination based on geolocation
CN105336207B (en) * 2015-12-04 2018-12-25 黄左宁 Vegicle recorder and public security comprehensive monitoring system
US10685244B2 (en) * 2018-02-27 2020-06-16 Tusimple, Inc. System and method for online real-time multi-object tracking
CN109617967A (en) * 2018-12-14 2019-04-12 深圳市曹操货的科技有限公司 One kind being based on big data Intelligent internet of things control platform
US20200211394A1 (en) * 2018-12-26 2020-07-02 Zoox, Inc. Collision avoidance system
CN110264783B (en) * 2019-06-19 2022-02-15 华设设计集团股份有限公司 Vehicle anti-collision early warning system and method based on vehicle-road cooperation
WO2021108434A1 (en) * 2019-11-27 2021-06-03 B&H Licensing Inc. Method and system for pedestrian-to-vehicle collision avoidance based on amplified and reflected wavelength
CN113139410B (en) * 2020-01-19 2024-02-13 杭州海康威视系统技术有限公司 Pavement detection method, device, equipment and storage medium
CN111723672B (en) * 2020-05-25 2021-03-30 华南理工大学 Method and device for acquiring video recognition driving track and storage medium
CN112347993B (en) * 2020-11-30 2023-03-17 吉林大学 Expressway vehicle behavior and track prediction method based on vehicle-unmanned aerial vehicle cooperation
CN113313154A (en) * 2021-05-20 2021-08-27 四川天奥空天信息技术有限公司 Integrated multi-sensor integrated automatic driving intelligent sensing device
CN113259900B (en) * 2021-05-27 2021-10-15 华砺智行(武汉)科技有限公司 Distributed multi-source heterogeneous traffic data fusion method and device
CN113347254B (en) * 2021-06-02 2022-05-31 安徽工程大学 Intelligent traffic control car networking system based on V2X and control method thereof
CN113362606A (en) * 2021-07-23 2021-09-07 重庆智和慧创科技有限公司 Car road is anticollision integrated control system in coordination based on car networking

Also Published As

Publication number Publication date
CN113850237A (en) 2021-12-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Method and System for Target Detection and Evaluation of Connected Vehicles Based on Video and Trajectory Data

Effective date of registration: 20231010

Granted publication date: 20220222

Pledgee: Bank of China Limited Wuhan Economic and Technological Development Zone sub branch

Pledgor: ISMARTWAYS (WUHAN) TECHNOLOGY Co.,Ltd.

Registration number: Y2023980060478
