CN114596706A - Detection method and device of roadside sensing system, electronic equipment and roadside equipment

Info

Publication number: CN114596706A (application granted and published as CN114596706B)
Application number: CN202210255646.9A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 郑义川
Assignee: Apollo Zhilian Beijing Technology Co Ltd; Apollo Zhixing Technology Guangzhou Co Ltd
Prior art keywords: data, obstacle, vehicle, roadside, sensing system
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0112 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
    • G08G1/0116 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • G08G1/0125 Traffic data processing
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/097 Supervising of traffic control systems, e.g. by giving an alarm if two crossing streets have green light simultaneously

Abstract

The present disclosure provides a detection method and apparatus for a roadside sensing system, an electronic device, and roadside equipment, relating to the technical field of artificial intelligence, in particular to the fields of autonomous driving, intelligent transportation, vehicle-road cooperation, and computer vision. The implementation scheme is as follows: acquiring vehicle-end sensing data output by a vehicle-end sensing system; extracting, from the vehicle-end sensing data, truth data meeting a preset credibility requirement; obtaining roadside sensing data that is output by the roadside sensing system and matches the truth data; and determining the accuracy of the roadside sensing data based on the truth data.

Description

Detection method and device of roadside sensing system, electronic equipment and roadside equipment
Technical Field
The present disclosure relates to the technical field of artificial intelligence, in particular to the fields of autonomous driving, intelligent transportation, vehicle-road cooperation, and computer vision, and specifically to a method and an apparatus for detecting a roadside sensing system, an electronic device, a computer-readable storage medium, a computer program product, a roadside device, and a cloud control platform.
Background
Autonomous driving and driver-assistance technology involves environmental perception, behavior decision-making, path planning, motion control, and other aspects. Relying on the coordination of sensors, the vision computing system, and the positioning system, a vehicle with an autonomous or assisted driving function can operate automatically with little or no driver intervention.
Vehicle-road cooperation refers to a safe, efficient, and environment-friendly road traffic system that uses technologies such as advanced wireless communication and the new-generation internet to implement comprehensive, dynamic, real-time vehicle-to-vehicle and vehicle-to-road information exchange. On the basis of full-time-space dynamic traffic information acquisition and fusion, it carries out active vehicle safety control and cooperative road management, fully realizing effective cooperation among people, vehicles, and roads, ensuring traffic safety, and improving traffic efficiency.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The disclosure provides a detection method and device of a roadside sensing system, electronic equipment, a computer readable storage medium, a computer program product, roadside equipment and a cloud control platform.
According to an aspect of the present disclosure, there is provided a method for detecting a roadside sensing system, including: acquiring vehicle-end sensing data output by a vehicle-end sensing system; extracting, from the vehicle-end sensing data, truth data meeting a preset credibility requirement; obtaining roadside sensing data that is output by the roadside sensing system and matches the truth data; and determining the accuracy of the roadside sensing data based on the truth data.
According to an aspect of the present disclosure, there is provided a detection apparatus of a roadside sensing system, including: an acquisition module configured to acquire vehicle-end sensing data output by a vehicle-end sensing system; an extraction module configured to extract, from the vehicle-end sensing data, truth data meeting a preset credibility requirement; a matching module configured to obtain roadside sensing data that is output by the roadside sensing system and matches the truth data; and a determination module configured to determine the accuracy of the roadside sensing data based on the truth data.
According to an aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described method.
According to an aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described method.
According to an aspect of the disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described method.
According to an aspect of the present disclosure, there is provided a roadside apparatus including the above-described electronic apparatus.
According to an aspect of the present disclosure, a cloud control platform is provided, which includes the above electronic device.
According to one or more embodiments of the present disclosure, accurate detection of a roadside perception system can be achieved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of a detection method of a roadside perception system according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of obstacle matching according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of an anchor point according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a detection process of a roadside sensing system according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of a detection apparatus of a roadside sensing system according to an embodiment of the present disclosure; and
FIG. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order or good customs.
A vehicle-end sensing system is provided in an autonomous or assisted-driving vehicle and includes, for example, a high-precision positioning device, a lidar, and a control device (e.g., a processor in communication with various types of computer-readable storage devices or media). To ensure the safety of the people in the vehicle and of the pedestrians and vehicles in the surrounding environment, an autonomous or assisted-driving vehicle needs to sense the surrounding environment through the vehicle-end sensing system, detect obstacles on the driving path, and avoid them.
Because the sensing range of the vehicle-end sensing system is limited, obstacles may go unsensed or be sensed too late, which affects the safety and driving efficiency of the vehicle. To improve safety and driving efficiency, a roadside sensing system is usually deployed on one or more sides of a road; it senses obstacles in the surrounding environment and broadcasts the sensed obstacle information. Vehicles near the roadside sensing system can receive the broadcast obstacle information and fuse it with the obstacle information they sense themselves, thereby obtaining comprehensive obstacle information about the surrounding environment and improving safety and driving efficiency.
In a vehicle-road cooperative obstacle-information fusion scheme, accurate obstacle sensing data from the roadside sensing system is a precondition for safe and efficient driving. The roadside sensing system therefore needs to be tested to determine the accuracy of the sensing data it outputs.
In the related art, the data sensed by a vehicle-end sensing system installed on a sample vehicle is taken as the true value, and the accuracy of the sensing data output by the roadside sensing system is determined by comparing this true value with the roadside output. However, actual road conditions are usually complex: within the sensing range of the vehicle-end sensing system, obstacles are often occluded, truncated, and so on, so part of the sensing data output by the vehicle-end sensing system is inaccurate. The true value is then inaccurate, and so is the detection result for the roadside sensing system.
To solve the above problem, the present disclosure provides a detection method for a roadside sensing system that screens the sensing data output by the vehicle-end sensing system and uses only data of high credibility as truth data, improving the accuracy of the truth data and thereby the accuracy of the detection result for the roadside sensing system.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to FIG. 1, the system 100 includes a vehicle 110 and a roadside sensing system 120.
The vehicle 110 may be any type of motor vehicle, such as a sedan, a Sport Utility Vehicle (SUV), a passenger car, a truck, a bus, etc., or a hybrid vehicle, an electric vehicle, a plug-in hybrid electric vehicle, a hydrogen powered vehicle, and other alternative fuel vehicles (e.g., fuels derived from sources other than petroleum), etc.
Vehicle 110 has an end-of-vehicle sensing system 112 disposed therein. The vehicle-end sensing system 112 is used for sensing the surroundings of the vehicle 110, for example, sensing obstacles around the vehicle 110.
An obstacle may be any object in the road that may affect the travel of the vehicle 110. As shown in fig. 1, the obstacles may include the motor vehicle 141, the vehicle 142, and the pedestrian 143 in the road. In some embodiments, the obstacles may also include non-motorized vehicles, barricades (e.g., fences, cones, etc.), airborne floating objects, and the like.
The vehicle-end sensing system 112 may sense the state of obstacles around the vehicle 110 and output vehicle-end sensing data. The vehicle-end perception data includes, for example, information of the position (coordinates of the center point), size (length, width, height), heading (orientation, such as the orientation of the head of the vehicle, the orientation of the pedestrian, etc.), speed direction, and the like of each obstacle around the vehicle 110.
According to some embodiments, the end-of-vehicle sensing system 112 includes high precision positioning devices, high precision sensors such as lidar, and control devices.
The high-precision positioning device may collect information such as the position, speed, and direction of the vehicle 110 itself. The lidar can detect the edge and shape information of surrounding obstacles, so that the obstacles can be identified and tracked. Using the Doppler effect, the lidar can also measure the speed of obstacles as well as changes in that speed.
The control device may include a processor, such as a Central Processing Unit (CPU) or Graphics Processing Unit (GPU), or other special purpose processor, etc., in communication with various types of computer-readable storage devices or media. The control device may be configured to process various data collected by the sensor to further obtain information about the obstacle. For example, based on the information on the position, speed, direction, and the like of the vehicle 110 itself acquired by the high-precision positioning device and the information on the edge shape of the obstacle acquired by the laser radar, the information on the position, speed, direction, and the like of each obstacle is determined, and the position where each obstacle is likely to appear at the next time is predicted.
According to some embodiments, the vehicle-end sensing system 112 may also include other sensors for sensing the surrounding environment, such as a visual camera, an infrared camera, an ultrasonic sensor, and the like.
The roadside sensing system 120 is used to sense the surrounding environment, for example, to sense obstacles in the surrounding environment. The roadside sensing system 120 may be located anywhere along the road. For example, as shown in fig. 1, it may be disposed on the signal-light post of each approach of the intersection, adjacent to the signal light 130.
The roadside sensing system 120 may sense a state of an obstacle in the surrounding environment and output roadside sensing data. The roadside awareness data includes information of the position, size, heading, speed direction, and the like of each obstacle in the surrounding environment.
According to some embodiments, the roadside perception system 120 may include an image capture device (e.g., a visual camera, an infrared camera, etc.) and a computing device. The image acquisition equipment is used for acquiring road images. The computing device may include a processor in communication with various types of computer-readable storage devices or media. The computing device may be configured to process the road images captured by the image capture device to derive information about various obstacles in the surrounding environment, including the vehicle 110. For example, the computing device may identify relevant information, such as location, size, heading, etc., of various obstacles in the road image based on a trained perceptual model (e.g., a neural network model). Further, the computing device may determine information of a speed, a speed direction, and the like of each obstacle by processing a plurality of road images acquired in succession, and predict a position where each obstacle may appear at the next time, and the like.
According to some embodiments, the roadside sensing system may also include other sensors for sensing the surrounding environment, such as ultrasonic sensors, millimeter wave radar, lidar, and the like.
According to some embodiments, during the driving of the vehicle 110, the vehicle-end sensing system 112 may sense the surrounding environment at a preset first frequency (e.g., 50Hz, 100Hz, etc.), and each sensing outputs a set of vehicle-end sensing data at a certain timestamp. The roadside sensing system 120 may sense the surrounding environment according to a preset second frequency (e.g., 30Hz, 50Hz, etc.), and output a set of roadside sensing data under a certain timestamp each time. A group of vehicle-end sensing data output by the vehicle-end sensing system 112 and a group of road-side sensing data output by the road-side sensing system 120 at each time may be recorded as a "data frame".
It should be noted that the timestamp refers to a time when the sensor in the vehicle-end sensing system 112 or the roadside sensing system 120 collects the environmental data, and not a time when the control device or the computing device outputs the sensing data.
According to some embodiments, before the vehicle-end sensing system 112 and the roadside sensing system 120 sense the surrounding environment, the two may be clock-calibrated to synchronize their time. In this way, vehicle-end sensing data and roadside sensing data with the same acquisition time can be matched based on the time stamp, so that the accuracy of the corresponding roadside sensing data can be detected based on the vehicle-end sensing data.
According to some embodiments, the vehicle 110 may traverse the different lanes and driving directions of each road segment of the same intersection, so that the truth data covers the entire intersection comprehensively and uniformly, enabling comprehensive and accurate detection and evaluation of the roadside sensing system.
Based on the vehicle-end sensing data output by the vehicle-end sensing system 112 and the roadside sensing data output by the roadside sensing system 120, the detection method 200 of the roadside sensing system of the embodiment of the disclosure may be executed to determine the accuracy of the roadside sensing data.
The detection method 200 of the roadside sensing system of the embodiment of the disclosure may be executed by any device. For example, the method 200 may be performed at a vehicle (e.g., the vehicle 110 in fig. 1), a roadside sensing system (e.g., the roadside sensing system 120 in fig. 1), a roadside device (e.g., an edge computing device disposed on one side of a road), or a cloud control platform.
Fig. 2 shows a flow chart of a detection method 200 of a roadside sensing system according to an embodiment of the present disclosure. As shown in FIG. 2, the method 200 includes steps 210-240.
In step 210, vehicle-end sensing data output by the vehicle-end sensing system is obtained.
In step 220, truth data meeting the preset credibility requirement is extracted from the vehicle-end sensing data.
In step 230, roadside sensing data that is output by the roadside sensing system and matches the truth data is obtained.
In step 240, based on the truth data, the accuracy of the roadside perception data is determined.
According to the embodiment of the disclosure, the data with low reliability in the vehicle-end sensing data can be filtered, and only the data with high reliability (meeting the preset reliability requirement) is used as the true value data, so that the accuracy of the true value data is improved, the detection accuracy of the road-side sensing system is improved, and the accurate detection of the road-side sensing system is realized.
The various steps of method 200 are described in detail below.
In step 210, vehicle-end sensing data output by the vehicle-end sensing system is obtained.
According to some embodiments, the vehicle-end sensing data includes respective first state data of at least one first obstacle. The first obstacle may be, for example, a motor vehicle, a non-motor vehicle, a pedestrian, or a barricade in the road. The first state data includes, for example, the position, position change, speed direction, heading, truncation label, occlusion label, and the like of the first obstacle.
The speed direction is the direction of the obstacle's velocity, which may be expressed, for example, as the angle between the velocity direction and a baseline direction. The heading is the orientation of the obstacle, which may likewise be expressed as the angle between the orientation and the baseline direction. The baseline may, for example, point due east, with due east being 0° and the angle increasing counterclockwise up to a maximum of 360°. Typically, the heading of an obstacle coincides with its speed direction; in some cases, the two may differ. For example, a vehicle traveling north has a heading of 90°, but if the vehicle is drifting east at the same time, its speed direction is 0°.
The truncation label indicates whether part of the corresponding first obstacle lies outside the sensing area of the vehicle-end sensing system, and the occlusion label indicates whether the corresponding first obstacle is occluded by other obstacles. Both labels take the value yes or no.
According to some embodiments, first state data of a vehicle where the vehicle-end sensing system is located may be acquired, and the first state data of the vehicle may be merged into the vehicle-end sensing data. That is, the vehicle in which the vehicle-end sensing system is located may be a first obstacle, and the vehicle-end sensing data may include first state data of the vehicle.
It can be understood that actual road conditions are complex and varied. Especially under complex conditions such as intersections and overpasses, part of the vehicle-end sensing data may be insufficiently accurate and of low credibility, and thus unfit to serve as truth data for detecting the roadside sensing system. Therefore, in step 220, data of high credibility, i.e., data meeting the preset credibility requirement, is extracted from the vehicle-end sensing data to serve as truth data, and data of low credibility is filtered out, improving the accuracy of the truth data and hence the accuracy of the detection result of the roadside sensing system.
The preset credibility requirement is a criterion for judging the credibility of the data. According to some embodiments, the predetermined confidence requirement is determined based on the perception capability of the vehicle-end perception system. It can be understood that the perception capabilities of different vehicle-end perception systems are different. The preset reliability requirement is determined based on the sensing capability of the vehicle-end sensing system, and the judgment standard of the data reliability can be flexibly set for different vehicle-end sensing systems, so that the flexibility and the accuracy of extracting true value data are improved.
According to some embodiments, the preset confidence requirement may be a threshold set for the value of the one or more first state data. For example, the preset reliability requirement may be a distance threshold set for the position information, or a position change threshold set for the position change information, or the like.
Specifically, according to some embodiments, in the case where the first state data includes position information and the preset credibility requirement is implemented as a distance threshold, step 220 may include: calculating, based on the corresponding position information, the distance from each first obstacle of the at least one first obstacle to the vehicle-end sensing system; and determining, as truth data, the first state data of the first obstacles whose distance to the vehicle-end sensing system is smaller than the distance threshold.
In the above embodiment, the distance threshold is determined based on the sensing capability of the vehicle-end sensing system. Generally, the sensing results of the vehicle-end sensing system for the remote obstacles are inaccurate, and the sensing results of the obstacles are low in reliability and are not enough to be used as truth data for detecting the road-side sensing system. Based on the embodiment, the first state data of the first obstacle with lower reliability far away from the vehicle-end sensing system (namely, the distance is greater than or equal to the distance threshold) can be filtered, and the first state data of the first obstacle with higher reliability near the vehicle-end sensing system (namely, the distance is less than the distance threshold) is used as truth value data, so that the accuracy of the truth value data is improved, and the accuracy of the detection result of the roadside sensing system is improved.
According to some embodiments, in the case where the first state data includes position change information and the preset credibility requirement is implemented as a position change threshold, step 220 may include: determining, as truth data, the first state data of the first obstacles whose position change information is smaller than the position change threshold.
In the above embodiment, the position change threshold is determined based on the sensing capability of the vehicle-end sensing system. Generally, the sensing results of the vehicle-end sensing system for the obstacles with large position changes are inaccurate, and the sensing results of the obstacles have low credibility and are not enough to be used as truth value data for detecting the road-side sensing system. Based on the above embodiment, the first state data of the first obstacle with a large position change (i.e., greater than or equal to the position change threshold) and a low reliability can be filtered, and the first state data of the first obstacle with a small position change (i.e., smaller than the position change threshold) and a high reliability is used as the truth value data, so that the accuracy of the truth value data is improved, and the accuracy of the detection result of the roadside sensing system is improved.
According to some embodiments, the position change information of the first obstacle may be determined as follows. Based on an algorithm deployed in the vehicle-end sensing system, a plurality of positions where the first obstacle may appear at the next time are predicted, each position being a two-dimensional coordinate vector of the form (x, y). The covariance matrix of these coordinate vectors is then computed. The eigenvalues of the covariance matrix indicate the degree of variation in the first obstacle's position, so an eigenvalue of the covariance matrix can be used as the position change information of the first obstacle.
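For illustration, the following is a minimal Python sketch of this covariance-based measure, assuming the predicted positions are available as a list of (x, y) pairs; the function name and the choice of the largest eigenvalue as the scalar summary are assumptions, since the disclosure does not fix them:

```python
import numpy as np

def position_change_info(predicted_positions):
    """Largest eigenvalue of the covariance matrix of the predicted
    (x, y) positions of an obstacle at the next time; a larger value
    means the predicted position scatters more, i.e., lower stability.
    The input format and the use of the largest eigenvalue are
    assumptions made for this sketch."""
    pts = np.asarray(predicted_positions, dtype=float)  # shape (n, 2)
    cov = np.cov(pts, rowvar=False)  # 2x2 covariance of the x and y coordinates
    return float(np.linalg.eigvalsh(cov)[-1])  # eigenvalues in ascending order
```

The returned value can then be compared against the position change threshold of the preceding embodiment.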
According to further embodiments, the preset credibility requirement may also be a target value set for one or more items of first state data. For example, it may be the target value "no" set for the truncation label.
Specifically, according to some embodiments, in the case where the first state data includes a truncation label and the preset credibility requirement is implemented as the target value "no" for the truncation label, step 220 may include: determining, as truth data, the first state data of the first obstacles whose truncation label is "no".
In the above embodiment, the truncation label indicates whether part of the corresponding first obstacle lies outside the sensing area of the vehicle-end sensing system. Generally, the vehicle-end sensing system's results for truncated obstacles are inaccurate; such results have low credibility and are insufficient as truth data for detecting the roadside sensing system. Based on the above embodiment, the first state data of truncated first obstacles (truncation label "yes"), which has low credibility, can be filtered out, and the first state data of non-truncated first obstacles (truncation label "no"), which has high credibility, is used as truth data, improving the accuracy of the truth data and hence the accuracy of the detection result of the roadside sensing system.
Three implementations of the preset credibility requirement are given above: a distance threshold, a position change threshold, and a specific truncation label value ("no").
These requirements may be used alone or in combination. When a single requirement is used, the first state data of a first obstacle is taken as truth data as long as that obstacle satisfies the requirement. When several requirements are used in combination, the first state data of a first obstacle is taken as truth data only if the obstacle satisfies all of them simultaneously.
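As a sketch only, the three requirements could be combined as follows; each first obstacle is assumed to be a dict whose field names (distance_to_sensor, position_change, truncated) are hypothetical placeholders, not names from the disclosure:

```python
def extract_truth_data(first_obstacles, distance_threshold, change_threshold):
    """Keep the first state data of the first obstacles that satisfy all
    three preset credibility requirements simultaneously (step 220)."""
    truth_data = []
    for obstacle in first_obstacles:
        if obstacle["distance_to_sensor"] >= distance_threshold:
            continue  # too far from the vehicle-end sensing system
        if obstacle["position_change"] >= change_threshold:
            continue  # predicted position fluctuates too much
        if obstacle["truncated"]:
            continue  # truncation label is "yes"
        truth_data.append(obstacle)
    return truth_data
```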
Furthermore, it should be understood that other preset confidence requirements may be set in addition to the three approaches described above. The present disclosure does not limit the number and specific types of pre-set confidence requirements.
In step 230, roadside sensing data output by the roadside sensing system and matching with the true value data is obtained.
According to some embodiments, the first acquisition time corresponding to the truth data matches the second acquisition time corresponding to the roadside sensing data, i.e., their timestamps match. It should be noted that the first acquisition time matching the second acquisition time means that the difference between the two is small (e.g., below a threshold such as 50 ms), so that they can be considered approximately the same.
Specifically, the multiple frames of roadside sensing data output by the roadside sensing system may be searched based on the first acquisition time of the truth data, and the frame of roadside sensing data whose acquisition time matches it is obtained from among them.
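A possible sketch of this timestamp-based frame matching, assuming each frame is represented as a (timestamp, data) pair and using an assumed 50 ms tolerance:

```python
def match_frames(truth_frames, roadside_frames, max_dt=0.05):
    """Pair each frame of truth data with the roadside frame whose
    acquisition timestamp is closest, keeping only pairs whose time
    difference is below max_dt seconds (0.05 s is an assumed tolerance)."""
    pairs = []
    for t_truth, truth_data in truth_frames:
        t_road, roadside_data = min(roadside_frames,
                                    key=lambda frame: abs(frame[0] - t_truth))
        if abs(t_road - t_truth) < max_dt:
            pairs.append((truth_data, roadside_data))
    return pairs
```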
In step 240, based on the truth data, the accuracy of the roadside perception data is determined.
According to some embodiments, the truth data includes second state data for each of the at least one second obstacle, the roadside perception data includes third state data for each of the at least one third obstacle, and step 240 includes: matching the at least one second obstacle with the at least one third obstacle to obtain at least one obstacle pair, wherein each obstacle pair comprises the matched second obstacle and the matched third obstacle; for any obstacle pair of the at least one obstacle pair, determining an accuracy of the respective third state data based on the second state data corresponding to the obstacle pair.
Based on the above embodiment, each truth obstacle (i.e., second obstacle) can be accurately paired with the corresponding measured obstacle (i.e., third obstacle), so that the sensing accuracy of the roadside sensing system can be accurately evaluated.
According to some embodiments, the second status data and the third status data each comprise a location area in which the corresponding obstacle is located, i.e. the second status data comprises a location area in which the corresponding second obstacle is located, and the third status data comprises a location area in which the corresponding third obstacle is located. Accordingly, the location area of the at least one second obstacle may be matched with the location area of the at least one third obstacle to obtain at least one obstacle pair. For example, the hungarian algorithm may be employed to achieve matching of the second barrier with the third barrier.
The position area where the obstacle is located is determined based on the position information (center point coordinates), size information (length, width), and heading information of the obstacle. The location area may be represented by a Bounding Box (BBox for short).
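The disclosure names the Hungarian algorithm for this matching. The sketch below uses scipy's implementation with center-point distance as the assignment cost, which is a simplification of matching full position areas; the 2 m gating distance is likewise an assumed value:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def match_obstacles(second_centers, third_centers, max_dist=2.0):
    """Return index pairs (i, j) matching second obstacles (truth) to
    third obstacles (roadside) with minimum total center distance,
    discarding pairs farther apart than max_dist meters (assumed gate)."""
    a = np.asarray(second_centers, dtype=float)  # (m, 2) truth center points
    b = np.asarray(third_centers, dtype=float)   # (n, 2) roadside center points
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (m, n)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_dist]
```

Unmatched indices on either side correspond to the unpaired obstacles discussed below (e.g., false detections or obstacles outside one system's sensing range).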
Fig. 3 shows a schematic diagram of obstacle matching according to an embodiment of the present disclosure. In fig. 3, the solid rectangles show the BBoxes of the obstacles 321-325 (i.e., second obstacles) sensed by the vehicle-end sensing system 310, which serve as true values, and the solid arrows indicate the headings of the obstacles 321-325. The dashed rectangles show the BBoxes of the obstacles 331-336 (i.e., third obstacles) sensed by the roadside sensing system (not shown in fig. 3), and the dashed arrows indicate the headings of the obstacles 331-336.
As shown in fig. 3, using the hungarian algorithm, barrier 321 may be matched to barrier 331, barrier 322 may be matched to barrier 332, and barrier 323 may be matched to barrier 333, resulting in barrier pairs (321,331), (322,332), and (323, 333).
One of the obstacles 334 and 335 is a roadside false detection. Based on the maximum-IoU (Intersection over Union, also called overlap ratio) matching principle, the obstacle 324 is matched with the obstacle 335 to form the obstacle pair (324, 335), and the obstacle 334 is determined to be false-detection data.
The obstacle 325 lies outside the sensing range of the roadside sensing system and is sensed only by the vehicle-end sensing system 310, so no matching third obstacle exists and no obstacle pair is formed.
The obstacle 336 is occluded by the obstacle 322 and lies in a sensing blind area of the vehicle-end sensing system 310; it is sensed only by the roadside sensing system, so no matching second obstacle exists and no obstacle pair is formed.
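For reference, the overlap ratio (IoU) used in the matching above can be computed as in the following sketch. It uses axis-aligned boxes given as (x_min, y_min, x_max, y_max), a simplification, since the position areas in the disclosure are oriented by heading:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes, each given as
    an (x_min, y_min, x_max, y_max) tuple."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```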
After obtaining at least one obstacle pair through the above matching step, the accuracy of the corresponding third state data may be determined for each obstacle pair based on the pair's second state data. For example, if the second state data of the second obstacle a is data1, the third state data of the third obstacle b is data2, and a and b are matched to form the obstacle pair (a, b), then the accuracy of data2 may be determined based on data1.
The second status data and the third status data may specifically be position information (center point coordinates) of the obstacle, a position area (BBox), size information, speed direction information, heading information, and the like. By comparing the second state data with the corresponding third state data, the accuracy of the third state data can be determined.
It will be appreciated that the above embodiment is equivalent to taking, for each obstacle pair, the second state data as the true value and the third state data as the measured value, and determining the error of the third state data.
According to some embodiments, in the case where the second state data and the third state data are both position areas of the obstacles, the anchor point coordinates of the second obstacle and of the third obstacle in an obstacle pair may be determined respectively, the anchor point coordinates being the coordinates of the vertex of the corresponding obstacle's position area that is closest to the vehicle-end sensing system; the accuracy of the position area in the third state data is then determined based on the anchor point coordinates of the second obstacle and of the third obstacle. Specifically, the distance between the two sets of anchor point coordinates may be computed; the smaller the distance, the higher the accuracy of the position area in the third state data.
Fig. 4 shows a schematic diagram of an anchor point according to an embodiment of the present disclosure. The position area (BBox) of an obstacle is shown as the rectangular box 410. The coordinates of the four vertices A, B, C, D of the position area 410 may be computed with trigonometric functions from the center coordinates, length, width, and heading of the obstacle (indicated by the arrow in fig. 4). The vertex B is closest to the vehicle-end sensing system 420, so B is the anchor point of the obstacle and its coordinates are the anchor point coordinates.
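A sketch of this anchor point computation, assuming the heading is given in degrees counterclockwise from due east (the baseline convention described earlier) and the center, length, and width are in a common metric frame:

```python
import math

def anchor_point(center, length, width, heading_deg, sensor_position):
    """Return the vertex of the obstacle's position area (BBox) closest
    to the vehicle-end sensing system (e.g., vertex B in fig. 4)."""
    cx, cy = center
    h = math.radians(heading_deg)
    cos_h, sin_h = math.cos(h), math.sin(h)
    corners = []
    for dx, dy in ((length / 2, width / 2), (length / 2, -width / 2),
                   (-length / 2, -width / 2), (-length / 2, width / 2)):
        # rotate the local corner offset by the heading, then translate
        corners.append((cx + dx * cos_h - dy * sin_h,
                        cy + dx * sin_h + dy * cos_h))
    return min(corners,
               key=lambda p: math.hypot(p[0] - sensor_position[0],
                                        p[1] - sensor_position[1]))
```

The distance between the anchor points of a matched second and third obstacle then serves as the position-area error measure.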
It should be noted that, within the sensing range of the vehicle-end sensing system, obstacles often appear occluded, truncated, and so on, so the vehicle-end sensing system may not sense obstacle size information (the length and width of the BBox) accurately. If the sensing accuracy of the roadside sensing system for the obstacle position area were determined by directly computing the IoU between the BBox in the truth data and the BBox in the roadside sensing data, the reliability of the resulting accuracy figure would be low.
Based on the above embodiment, by calculating anchor coordinates and determining the accuracy of the location area based on the anchor coordinates, the accuracy and reliability of the accuracy calculation result can be improved.
According to some embodiments, the method 200 may further include: for any obstacle pair, correcting the third state data based on the corresponding second state data before determining the accuracy of the third state data.
Based on the above embodiment, roadside sensing data (third state data) that matters less in an actual traffic scene may first be corrected based on the truth data (second state data), and its accuracy then evaluated on the corrected data.
For example, in a vehicle-road cooperation scenario, the roadside sensing system is generally required to output accurate position, speed, and speed direction information for obstacles, while the accuracy requirement on heading information is less strict. In practice, the heading output by the roadside sensing system is often opposite to the obstacle's true speed direction, i.e., off by 180°. Since the heading and speed direction of an obstacle are normally consistent, a large difference between them indicates that the roadside heading is likely reversed, and the heading information output by the roadside sensing system can then be corrected based on the true speed direction information.
Thus, according to some embodiments, the second state data may include speed direction information and the third state data includes heading information, and accordingly, the heading information may be inverted in response to determining that the speed direction information differs from the heading information by greater than or equal to an angle threshold. The angle threshold may be set, for example, to 135 °.
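A minimal sketch of this correction, with all angles in degrees counterclockwise from due east as in the text above:

```python
def correct_heading(speed_direction, heading, angle_threshold=135.0):
    """Flip the roadside heading by 180 degrees when it differs from the
    truth speed direction by at least angle_threshold degrees (135 here,
    as in the example above)."""
    diff = abs(speed_direction - heading) % 360.0
    diff = min(diff, 360.0 - diff)  # smallest angular difference, in [0, 180]
    if diff >= angle_threshold:
        heading = (heading + 180.0) % 360.0
    return heading
```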
It should be noted that the method 200 describes the detection process of the roadside sensing system from the perspective of a single data frame, that is, the accuracy of the corresponding single-frame roadside sensing data output by the roadside sensing system is detected based on the single-frame vehicle-end sensing data output by the vehicle-end sensing system.
It should be understood that, further, the accuracy of the multiple frames of roadside sensing data output by the roadside sensing system may be detected based on the method 200, and then the detection results of the multiple frames of roadside sensing data are summarized and analyzed to obtain the sensing accuracy of the roadside sensing system for different state data (e.g., position, size, speed direction, heading, etc.). Further, the sensing accuracy of the roadside sensing system for different state data of different types of obstacles (e.g., motor vehicles, non-motor vehicles, pedestrians, etc.) can be obtained.
FIG. 5 shows a schematic diagram of a detection process 500 of a roadside sensing system according to an embodiment of the disclosure. As shown in fig. 5, the process 500 includes steps 502-516.
In step 502, the vehicle-end sensing system collects vehicle-end sensing data, including information such as position, speed direction, heading direction, etc. of various types of obstacles (e.g., motor vehicles, non-motor vehicles, pedestrians, etc.).
According to some embodiments, the vehicle-end sensing system may be disposed on a vehicle. The driver drives the vehicle to traverse different lanes and driving directions (such as straight running, left turning and right turning) of the same intersection, and meanwhile, the vehicle end sensing system senses obstacles in the surrounding environment and outputs multi-frame vehicle end sensing data at the intersection.
In step 504, each frame of vehicle-side perception data is filtered to obtain true value data with high reliability.
In step 506, the roadside sensor collects raw data; for example, an image acquisition device collects multiple frames of road image data.
In step 508, the roadside computing unit runs a roadside sensing algorithm to process the multiple frames of image data collected by the sensor, so as to obtain multiple frames of roadside sensing data.
In step 510, based on the unique identifier (e.g., timestamp) of the data, multiple frames of true value data and multiple frames of roadside sensing data are matched to obtain multiple data pairs. Each data pair includes a matched frame of true data and a frame of roadside sensing data.
In step 512, a hungarian algorithm is adopted to perform obstacle matching on the truth value data and the roadside perception data in each data pair to obtain at least one obstacle pair.
In step 514, an accuracy index of the single-frame roadside sensing data is calculated based on each obstacle pair. The accuracy indicator may be, for example, a perceived error of information such as the position, speed direction, heading, etc. of the obstacle.
In step 516, the calculation results of the accuracy indexes of the multiple frames of roadside sensing data are summarized and analyzed, and the sensing accuracy of the roadside sensing system is evaluated.
According to some embodiments, the index calculation results of multiple frames of roadside sensing data can be collected and analyzed, so as to obtain the sensing accuracy of the roadside sensing system for different information (such as position, size, speed direction, heading and the like). Further, the sensing accuracy of the roadside sensing system for different information of different types of obstacles (e.g., motor vehicles, non-motor vehicles, pedestrians, etc.) can also be obtained.
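As an illustration of this aggregation step, the following sketch averages per-pair position errors by obstacle type; the record keys used here are hypothetical:

```python
from collections import defaultdict
from statistics import mean

def summarize_position_errors(records):
    """Aggregate single-frame accuracy results into a per-type mean
    position error. Each record is assumed to be a dict with keys
    'obstacle_type' (e.g., 'motor_vehicle', 'pedestrian') and
    'position_error' (meters); other state data (size, speed direction,
    heading) could be aggregated the same way."""
    errors_by_type = defaultdict(list)
    for record in records:
        errors_by_type[record["obstacle_type"]].append(record["position_error"])
    return {t: mean(errs) for t, errs in errors_by_type.items()}
```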
According to the embodiment of the disclosure, a detection device of the roadside sensing system is further provided. Fig. 6 shows a block diagram of a detection apparatus 600 of the roadside sensing system according to the embodiment of the present disclosure. As shown in fig. 6, the apparatus 600 includes:
the obtaining module 610 is configured to obtain vehicle-end sensing data output by a vehicle-end sensing system;
an extracting module 620 configured to extract truth value data meeting a preset reliability requirement from the vehicle-end sensing data;
a matching module 630, configured to obtain the roadside sensing data output by the roadside sensing system and matched with the true value data; and
a determination module 640 configured to determine an accuracy of the roadside perception data based on the truth data.
According to the embodiment of the disclosure, the data with lower credibility in the vehicle-end sensing data can be filtered, and only the data with higher credibility (meeting the preset credibility requirement) is used as the true value data, so that the accuracy of the true value data is improved, the detection accuracy of the road-side sensing system is improved, and the accurate detection of the road-side sensing system is realized.
According to some embodiments, the predetermined confidence requirement is determined based on the sensing capabilities of the vehicle-end sensing system.
According to some embodiments, the vehicle-end awareness data includes respective first state data of at least one first obstacle.
It should be understood that the various modules or units of the apparatus 600 shown in fig. 6 may correspond to the various steps in the method 200 described with reference to fig. 2. Thus, the operations, features and advantages described above with respect to the method 200 are equally applicable to the apparatus 600 and the modules and units comprised thereby. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
Although specific functionality is discussed above with reference to particular modules, it should be noted that the functionality of the various modules discussed herein may be divided into multiple modules and/or at least some of the functionality of multiple modules may be combined into a single module. For example, the matching module 630 and the determining module 640 described above may be combined into a single module in some embodiments.
It should also be appreciated that various techniques may be described herein in the general context of software and hardware elements or program modules. The various modules described above with respect to fig. 6 may be implemented in hardware or in hardware combined with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, the modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the modules 610-640 may be implemented together in a system on a chip (SoC). The SoC may include an integrated circuit chip comprising one or more components of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform its functions.
According to an embodiment of the present disclosure, there is also provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of roadside perception system detection described above.
There is also provided, in accordance with an embodiment of the present disclosure, a non-transitory computer-readable storage medium having stored thereon computer instructions for causing the computer to execute the detection method of the roadside perception system described above.
There is also provided, according to an embodiment of the present disclosure, a computer program product, including a computer program, wherein the computer program, when executed by a processor, implements the detection method of the roadside sensing system described above.
According to an embodiment of the present disclosure, there is also provided a roadside apparatus including the above electronic apparatus.
According to some embodiments, the roadside device may include a communication component and the like in addition to the electronic device, and the electronic device may be integrated with the communication component or provided separately. The electronic device can acquire data, such as pictures and videos, from a roadside sensing device (e.g., a roadside camera), perform image/video processing and data computation, and transmit the processing and computation results to the cloud control platform through the communication component. It is understood that a roadside device is a type of edge computing device.
According to an embodiment of the present disclosure, a cloud control platform is further provided, which includes the above electronic device.
According to some embodiments, the cloud control platform performs image/video processing and data computation in the cloud. It may also be referred to as a vehicle-road cooperative management platform, a V2X (Vehicle-to-Everything) platform, a cloud computing platform, a central system, a cloud server, and the like.
Referring to fig. 7, a block diagram of an electronic device 700, which may serve as a server or a client of the present disclosure and is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. The electronic device is intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. It may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701, which may perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 702 or a computer program loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the electronic device 700. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
A number of components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the electronic device 700; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 708 may include, but is not limited to, magnetic or optical disks. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as Bluetooth™ devices, 802.11 devices, Wi-Fi devices, WiMAX devices, cellular communication devices, and/or the like.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into RAM703 and executed by the computing unit 701, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the method 200 in any other suitable manner (e.g., by way of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, which is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
While embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely illustrative embodiments or examples, and that the scope of the invention is not limited by these embodiments or examples but only by the claims as granted and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, the various elements in the embodiments or examples may be combined in various ways. It is to be understood that, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (20)

1. A detection method of a roadside sensing system comprises the following steps:
acquiring vehicle-end sensing data output by a vehicle-end sensing system;
extracting truth value data meeting a preset credibility requirement from the vehicle-end sensing data;
obtaining roadside sensing data which is output by the roadside sensing system and matched with the truth value data; and
determining an accuracy of the roadside sensing data based on the truth value data.
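By way of non-limiting illustration only, the overall flow of claim 1 may be sketched as follows. This is a minimal Python sketch; the ObstacleState structure, the confidence-based credibility filter, the 0.5 m tolerance, and all function names are hypothetical assumptions and form no part of the claimed method:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ObstacleState:
    obstacle_id: int
    x: float            # position in a shared world frame, in metres
    y: float
    confidence: float   # per-obstacle confidence reported by the sensing system

def extract_truth_data(vehicle_data: List[ObstacleState],
                       min_confidence: float = 0.9) -> List[ObstacleState]:
    """Keep only vehicle-end states that meet the preset credibility requirement."""
    return [s for s in vehicle_data if s.confidence >= min_confidence]

def roadside_accuracy(truth: List[ObstacleState],
                      roadside: List[ObstacleState],
                      tolerance_m: float = 0.5) -> float:
    """Fraction of truth obstacles that the roadside output localises within tolerance."""
    by_id = {s.obstacle_id: s for s in roadside}
    hits = 0
    for t in truth:
        r = by_id.get(t.obstacle_id)
        if r is not None and ((r.x - t.x) ** 2 + (r.y - t.y) ** 2) ** 0.5 <= tolerance_m:
            hits += 1
    return hits / len(truth) if truth else 0.0

# Example: one credible truth obstacle, localised by the roadside system within 0.5 m.
truth = extract_truth_data([ObstacleState(1, 10.0, 2.0, 0.95),
                            ObstacleState(2, 80.0, 1.0, 0.40)])
print(roadside_accuracy(truth, [ObstacleState(1, 10.2, 2.1, 0.90)]))  # 1.0
```

Claims 4-6 below set out concrete credibility filters that could replace the simple confidence check assumed here.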
2. The method of claim 1, wherein the preset credibility requirement is determined based on a perception capability of the vehicle-end sensing system.
3. The method of claim 1 or 2, wherein the vehicle-end sensing data comprises respective first state data of at least one first obstacle.
4. The method of claim 3, wherein the first state data includes position information, and wherein the extracting truth value data meeting a preset credibility requirement from the vehicle-end sensing data comprises:
calculating a distance from any first obstacle of the at least one first obstacle to the vehicle-end sensing system based on the corresponding position information; and
determining, as the truth value data, first state data of a first obstacle whose distance to the vehicle-end sensing system is smaller than a distance threshold.
5. The method of claim 3, wherein the first state data includes position change information, and wherein the extracting truth value data meeting a preset credibility requirement from the vehicle-end sensing data comprises:
determining, as the truth value data, first state data of a first obstacle whose position change information is smaller than a position change threshold.
6. The method of claim 3, wherein the first state data includes a truncation flag indicating whether a portion of the corresponding first obstacle is outside a sensing zone of the vehicle-end sensing system, and wherein the extracting truth value data meeting a preset credibility requirement from the vehicle-end sensing data comprises:
determining, as the truth value data, first state data of a first obstacle whose truncation flag indicates no truncation.
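The three filters of claims 4-6 may, purely for illustration, be combined as follows; the field names, the 50 m distance threshold, and the 0.3 m position change threshold are assumptions, not values given by the disclosure:

```python
import math
from dataclasses import dataclass

@dataclass
class FirstStateData:
    x: float                 # position relative to the vehicle-end sensing system, metres
    y: float
    position_change: float   # magnitude of the reported position change, metres
    truncated: bool          # True if part of the obstacle lies outside the sensing zone

def meets_credibility(state: FirstStateData,
                      distance_threshold: float = 50.0,
                      change_threshold: float = 0.3) -> bool:
    """Accept a first obstacle as truth value data only if it passes all three filters."""
    distance = math.hypot(state.x, state.y)               # claim 4: nearby obstacles only
    return (distance < distance_threshold
            and state.position_change < change_threshold  # claim 5: stable positions only
            and not state.truncated)                      # claim 6: fully observed only

print(meets_credibility(FirstStateData(30.0, 4.0, 0.1, False)))  # True
print(meets_credibility(FirstStateData(30.0, 4.0, 0.1, True)))   # False: truncated
```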
7. The method of any one of claims 1-6, wherein a first acquisition time corresponding to the truth value data matches a second acquisition time corresponding to the roadside sensing data.
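As a sketch of how the time matching of claim 7 might be realised (the 0.05 s skew bound and the nearest-neighbour pairing are assumptions):

```python
def match_frames(truth_times, roadside_times, max_skew_s=0.05):
    """Pair each truth acquisition time with the nearest roadside acquisition time
    that lies within max_skew_s seconds of it."""
    pairs = []
    for t in truth_times:
        nearest = min(roadside_times, key=lambda r: abs(r - t), default=None)
        if nearest is not None and abs(nearest - t) <= max_skew_s:
            pairs.append((t, nearest))
    return pairs

print(match_frames([0.00, 0.10], [0.02, 0.13]))  # [(0.0, 0.02), (0.1, 0.13)]
```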
8. The method according to any one of claims 1-7, wherein the truth value data comprises respective second state data of at least one second obstacle and the roadside sensing data comprises respective third state data of at least one third obstacle, and wherein the determining an accuracy of the roadside sensing data based on the truth value data comprises:
matching the at least one second obstacle with the at least one third obstacle to obtain at least one obstacle pair, each obstacle pair comprising a second obstacle and a third obstacle that match each other; and
for any obstacle pair of the at least one obstacle pair, determining an accuracy of the corresponding third state data based on the second state data corresponding to that obstacle pair.
9. The method of claim 8, wherein the second state data and the third state data each include a position area in which the corresponding obstacle is located, and wherein the matching the at least one second obstacle with the at least one third obstacle to obtain at least one obstacle pair comprises:
matching the position area of the at least one second obstacle with the position area of the at least one third obstacle to obtain the at least one obstacle pair.
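One way the position-area matching of claims 8-9 could work, sketched under the assumption that each position area is an axis-aligned box (x_min, y_min, x_max, y_max) and that boxes are paired greedily by intersection-over-union; the 0.3 IoU threshold is likewise an assumption:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_obstacles(second_boxes, third_boxes, min_iou=0.3):
    """Greedily pair second (truth) and third (roadside) obstacles by box overlap."""
    pairs, used = [], set()
    for i, sb in enumerate(second_boxes):
        candidates = [(iou(sb, tb), j) for j, tb in enumerate(third_boxes) if j not in used]
        if candidates:
            best, j = max(candidates)
            if best >= min_iou:
                pairs.append((i, j))
                used.add(j)
    return pairs

print(match_obstacles([(0, 0, 2, 2)], [(0.5, 0.5, 2.5, 2.5), (5, 5, 6, 6)]))  # [(0, 0)]
```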
10. The method of claim 9, wherein the determining an accuracy of the corresponding third state data based on the second state data corresponding to that obstacle pair comprises:
determining anchor point coordinates of the second obstacle and the third obstacle in the obstacle pair, respectively, the anchor point coordinates being the coordinates of the vertex, closest to the vehicle-end sensing system, of the position area in which the corresponding obstacle is located; and
determining an accuracy of the position area in the third state data based on the anchor point coordinates of the second obstacle and the third obstacle, respectively.
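A sketch of the anchor point comparison of claim 10, assuming the position area is given as a list of (x, y) vertices in the coordinate frame of the vehicle-end sensing system (taken here to be at the origin, an assumption):

```python
import math

def anchor_point(position_area, sensor_xy=(0.0, 0.0)):
    """Vertex of the position area closest to the vehicle-end sensing system."""
    sx, sy = sensor_xy
    return min(position_area, key=lambda v: math.hypot(v[0] - sx, v[1] - sy))

def anchor_error(second_area, third_area, sensor_xy=(0.0, 0.0)):
    """Distance between the two anchor points; smaller means a more accurate
    position area in the third state data."""
    ax, ay = anchor_point(second_area, sensor_xy)
    bx, by = anchor_point(third_area, sensor_xy)
    return math.hypot(bx - ax, by - ay)

truth_box = [(10, 2), (12, 2), (12, 4), (10, 4)]
roadside_box = [(10.4, 2.1), (12.4, 2.1), (12.4, 4.1), (10.4, 4.1)]
print(round(anchor_error(truth_box, roadside_box), 3))  # 0.412
```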
11. The method of claim 8, further comprising:
correcting the corresponding third state data based on the corresponding second state data before determining the accuracy of the corresponding third state data.
12. The method of claim 11, wherein the second state data includes speed direction information and the third state data includes heading information, and wherein the correcting the corresponding third state data based on the corresponding second state data comprises:
reversing the heading information in response to determining that a difference between the speed direction information and the heading information is greater than or equal to an angle threshold.
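A sketch of the heading correction of claim 12, with angles in degrees and a 90° angle threshold assumed for illustration:

```python
def correct_heading(speed_direction_deg, heading_deg, angle_threshold_deg=90.0):
    """Reverse the roadside heading when it disagrees with the truth speed direction."""
    diff = abs(speed_direction_deg - heading_deg) % 360.0
    diff = min(diff, 360.0 - diff)  # smallest angle between the two directions
    if diff >= angle_threshold_deg:
        heading_deg = (heading_deg + 180.0) % 360.0
    return heading_deg

print(correct_heading(90.0, 275.0))  # 95.0: the reported heading was flipped by ~180 degrees
```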
13. A detection apparatus of a roadside sensing system, comprising:
an acquisition module configured to acquire vehicle-end sensing data output by a vehicle-end sensing system;
an extraction module configured to extract truth value data meeting a preset credibility requirement from the vehicle-end sensing data;
a matching module configured to acquire roadside sensing data which is output by the roadside sensing system and matched with the truth value data; and
a determination module configured to determine an accuracy of the roadside sensing data based on the truth value data.
14. The apparatus of claim 13, wherein the preset credibility requirement is determined based on a perception capability of the vehicle-end sensing system.
15. The apparatus of claim 13 or 14, wherein the vehicle-end sensing data comprises respective first state data of at least one first obstacle.
16. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-12.
17. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-12.
18. A computer program product comprising a computer program, wherein the computer program realizes the method of any one of claims 1-12 when executed by a processor.
19. A roadside device comprising the electronic device of claim 16.
20. A cloud control platform comprising the electronic device of claim 16.
CN202210255646.9A 2022-03-15 Detection method and device of road side perception system, electronic equipment and road side equipment Active CN114596706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210255646.9A CN114596706B (en) 2022-03-15 Detection method and device of road side perception system, electronic equipment and road side equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210255646.9A CN114596706B (en) 2022-03-15 Detection method and device of road side perception system, electronic equipment and road side equipment

Publications (2)

Publication Number Publication Date
CN114596706A (en) 2022-06-07
CN114596706B (en) 2024-05-03

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115633085A (en) * 2022-08-31 2023-01-20 东风汽车集团股份有限公司 Driving scene image display method, device, equipment and storage medium

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004272462A (en) * 2003-03-06 2004-09-30 Toyota Motor Corp Preceding vehicle deciding device and inter-vehicle distance controller
CN107330925A (en) * 2017-05-11 2017-11-07 北京交通大学 A kind of multi-obstacle avoidance detect and track method based on laser radar depth image
CN108961798A (en) * 2018-08-10 2018-12-07 长安大学 Unmanned vehicle traffic lights independently perceive capacity test system and test method
JP2019016341A (en) * 2017-07-10 2019-01-31 アンリツ株式会社 Test system and testing method for on-vehicle application
CN109886208A (en) * 2019-02-25 2019-06-14 北京达佳互联信息技术有限公司 Method, apparatus, computer equipment and the storage medium of object detection
KR101970239B1 (en) * 2017-12-18 2019-08-27 한국과학기술원 Method and System of Optimal Protection Level for Local-Area Differential GNSS to Support UAV Navigation
CN110909711A (en) * 2019-12-03 2020-03-24 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for detecting lane line position change
CN111273316A (en) * 2020-02-18 2020-06-12 中国科学院合肥物质科学研究院 Multi-laser radar multi-view object detection method based on profile expansion fusion
CN111469832A (en) * 2018-12-28 2020-07-31 现代自动车株式会社 System, method, infrastructure and vehicle for autonomous valet parking
CN111753765A (en) * 2020-06-29 2020-10-09 北京百度网讯科技有限公司 Detection method, device and equipment of sensing equipment and storage medium
CN112150462A (en) * 2020-10-22 2020-12-29 北京百度网讯科技有限公司 Method, device, equipment and storage medium for determining target anchor point
CN112418092A (en) * 2020-11-23 2021-02-26 中国第一汽车股份有限公司 Fusion method, device, equipment and storage medium for obstacle perception
CN112816954A (en) * 2021-02-09 2021-05-18 中国信息通信研究院 Road side perception system evaluation method and system based on truth value
CN112861833A (en) * 2021-04-26 2021-05-28 禾多科技(北京)有限公司 Vehicle lane level positioning method and device, electronic equipment and computer readable medium
CN113221750A (en) * 2021-05-13 2021-08-06 杭州飞步科技有限公司 Vehicle tracking method, device, equipment and storage medium
CN113419233A (en) * 2021-06-18 2021-09-21 阿波罗智能技术(北京)有限公司 Method, device and equipment for testing perception effect
CN113505687A (en) * 2021-07-08 2021-10-15 北京星云互联科技有限公司 Equipment test method, device, electronic equipment, system and storage medium
CN113823087A (en) * 2021-09-09 2021-12-21 中国信息通信研究院 Method and device for analyzing RSS performance of roadside sensing system and test system
CN113848921A (en) * 2021-09-29 2021-12-28 中国第一汽车股份有限公司 Vehicle road cloud collaborative perception method and system
US20210406564A1 (en) * 2020-12-24 2021-12-30 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Perception data detection method and apparatus
CN113920729A (en) * 2021-10-11 2022-01-11 华录易云科技有限公司 Method for evaluating perception capability of traffic participants based on roadside perception system
CN114037972A (en) * 2021-10-08 2022-02-11 岚图汽车科技有限公司 Target detection method, device, equipment and readable storage medium
CN114120650A (en) * 2021-12-15 2022-03-01 阿波罗智联(北京)科技有限公司 Method and device for generating test result
CN114173307A (en) * 2021-12-17 2022-03-11 浙江海康智联科技有限公司 Roadside perception fusion system based on vehicle-road cooperation and optimization method
CN114238790A (en) * 2021-12-15 2022-03-25 阿波罗智联(北京)科技有限公司 Method, apparatus, device and storage medium for determining maximum perception range

Similar Documents

Publication Publication Date Title
CN109927719B (en) Auxiliary driving method and system based on obstacle trajectory prediction
EP3581890B1 (en) Method and device for positioning
US11840239B2 (en) Multiple exposure event determination
CN106503653B (en) Region labeling method and device and electronic equipment
CN113240909B (en) Vehicle monitoring method, equipment, cloud control platform and vehicle road cooperative system
CN109345829B (en) Unmanned vehicle monitoring method, device, equipment and storage medium
KR101446546B1 (en) Display system of vehicle information based on the position
CN111739344A (en) Early warning method and device and electronic equipment
EP4089659A1 (en) Map updating method, apparatus and device
JP2013225295A (en) Collision warning system taking azimuth information into consideration
CN112562406B (en) Method and device for identifying off-line driving
EP4105908A1 (en) Method, apparatus, server, and computer program for collision accident prevention
CN111160132B (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
CN110648538B (en) Traffic information sensing system and method based on laser radar network
CN113469115A (en) Method and apparatus for outputting information
CN114298908A (en) Obstacle display method and device, electronic equipment and storage medium
Fakhfakh et al. Weighted v-disparity approach for obstacles localization in highway environments
CN116358584A (en) Automatic driving vehicle path planning method, device, equipment and medium
CN116295496A (en) Automatic driving vehicle path planning method, device, equipment and medium
CN114596706B (en) Detection method and device of road side perception system, electronic equipment and road side equipment
CN114596706A (en) Detection method and device of roadside sensing system, electronic equipment and roadside equipment
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal
CN113762030A (en) Data processing method and device, computer equipment and storage medium
CN112866636A (en) Group fog recognition early warning method and system based on farthest visible distance and electronic equipment
Wang et al. A system of automated training sample generation for visual-based car detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant