CN112147635B - Detection system, method and device - Google Patents

Detection system, method and device

Info

Publication number
CN112147635B
CN112147635B (application CN202011020999.8A)
Authority
CN
China
Prior art keywords
vehicle
laser
detection
scene map
laser point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011020999.8A
Other languages
Chinese (zh)
Other versions
CN112147635A (en)
Inventor
雷绳光
宋翠杰
周鹏
张卫涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Liangdao Intelligent Vehicle Technology Co ltd
Original Assignee
Beijing Liangdao Intelligent Vehicle Technology Co ltd
Filing date
Publication date
Application filed by Beijing Liangdao Intelligent Vehicle Technology Co ltd filed Critical Beijing Liangdao Intelligent Vehicle Technology Co ltd
Priority to CN202011020999.8A priority Critical patent/CN112147635B/en
Publication of CN112147635A publication Critical patent/CN112147635A/en
Application granted granted Critical
Publication of CN112147635B publication Critical patent/CN112147635B/en
Legal status: Active


Abstract

Embodiments of the present invention provide a detection system, method and device, relating to the technical field of automatic driving. The detection system comprises a plurality of laser radars and a cloud server. The laser radars are used for collecting laser point clouds in real time and sending the collected laser point clouds to the cloud server. The cloud server is used for receiving the laser point clouds sent by the laser radars and constructing a scene map of a detection area in real time according to the received laser point clouds; receiving the position of a vehicle, sent after the vehicle enters the detection area, together with collected information of objects around the vehicle; determining the vehicle's position in the scene map according to the received vehicle position; and detecting detection indexes of the collected information according to the vehicle position and the object attributes. The scheme provided by the embodiments of the invention can detect and evaluate detection indexes of the collected information of objects acquired by the vehicle.

Description

Detection system, method and device
Technical Field
The present invention relates to the field of automatic driving technologies, and in particular, to a detection system, method, and apparatus.
Background
While travelling on a road, a vehicle needs to avoid other vehicles, pedestrians, roadblocks and other objects in order to travel safely. During automatic driving, therefore, the vehicle needs to identify the objects around it and collect information such as their size, movement speed, category and position relative to the vehicle, so that the driving route, driving speed and the like can be determined from the collected information and traffic accidents avoided.
Safe driving can only be ensured if the collected information about objects around the vehicle is accurate. Detection indexes of the collected information, such as its accuracy, detection rate and false alarm rate, therefore need to be detected and evaluated, so as to verify the information (object size, movement speed, category, position relative to the vehicle and so on) that the vehicle collects during automatic driving.
Disclosure of Invention
Embodiments of the present invention aim to provide a detection system, method and device for detecting detection indexes of the collected information of objects acquired by a vehicle. The specific technical scheme is as follows:
In a first aspect, an embodiment of the present invention provides a detection system comprising a plurality of laser radars and a cloud server, wherein the times at which the laser radars acquire laser point clouds are synchronized.
The laser radars are used for collecting laser point clouds in real time and sending the collected laser point clouds to the cloud server.
The cloud server is used for receiving the laser point clouds sent by the laser radars and constructing a scene map of the detection area in real time according to the received laser point clouds, wherein the scene map comprises objects described by object attributes; receiving the position of the vehicle, sent after the vehicle enters the detection area, together with collected information of objects around the vehicle; determining the vehicle position of the vehicle in the scene map according to the received vehicle position; and detecting the detection indexes of the collected information according to the vehicle position and the object attributes. Here the detection area is an area covered by the projection range of each laser radar; the object attributes comprise at least one of the size, movement speed, position and category of an object; the collected information comprises at least one of the size, movement speed, position relative to the vehicle and category of objects around the vehicle, collected by the vehicle and corresponding to the object attributes; and the detection indexes comprise at least one of the accuracy, detection rate and false alarm rate of the collected information.
In one embodiment of the present invention, where the object attributes include the position of an object and the collected information includes a relative position, the cloud server is specifically configured to determine, in the scene map, the map relative position of the object with respect to the vehicle according to the position of the object in the scene map and the vehicle position of the vehicle in the scene map, and to detect the detection index of the relative position according to the map relative position.
In one embodiment of the present invention, the cloud server is specifically configured to cluster, for the laser point cloud sent by each laser radar, the point cloud data representing the same object, so as to obtain a clustering result corresponding to that laser radar; to fuse point cloud data representing the same object across the clustering results of different laser radars; and to construct a scene map of the detection area in real time according to the fused laser point cloud.
In one embodiment of the invention, the time for each laser radar to acquire the laser point cloud is synchronous with the working clock of a preset positioning system, and the time for a vehicle to acquire the acquired information is synchronous with the working clock of the preset positioning system.
In a second aspect, an embodiment of the present invention provides a detection method, applied to a cloud server, where the method includes:
Receiving the laser point clouds sent by the laser radars, and constructing a scene map of a detection area in real time according to the received laser point clouds, wherein the times at which the laser radars acquire laser point clouds are synchronized, the detection area is an area covered by the projection range of each laser radar, the scene map comprises objects described by object attributes, and the object attributes comprise at least one of the size, movement speed, position and category of an object;
Receiving the position of a vehicle, sent after the vehicle enters the detection area, together with collected information of objects around the vehicle, wherein the collected information comprises at least one of the size, movement speed, position relative to the vehicle and category of objects around the vehicle, collected by the vehicle and corresponding to the object attributes;
Determining a vehicle position of the vehicle in the scene map according to the received vehicle position;
Detecting detection indexes of the acquired information according to the vehicle position and the object attribute, wherein the detection indexes comprise: at least one of accuracy, detection rate and false alarm rate of the acquired information.
In one embodiment of the present invention, when the object attribute includes a position of an object and the acquired information includes a relative position, the detecting the detection index of the acquired information according to the vehicle position and the object attribute includes:
Determining a map relative position of an object relative to the vehicle in the scene map according to the position of the object in the scene map in the detection area and the vehicle position of the vehicle in the scene map;
And detecting the detection index of the relative position according to the relative position of the map.
In one embodiment of the present invention, the receiving the laser point clouds sent by each laser radar, and constructing a scene map of the detection area in real time according to the received laser point clouds includes:
Receiving laser point clouds sent by each laser radar, and clustering point cloud data representing the same object in the laser point clouds aiming at the laser point clouds sent by each laser radar to obtain a clustering result corresponding to the laser radar;
Fusing point cloud data representing the same object in clustering results corresponding to different laser radars;
and constructing a scene map of the detection area in real time according to the fused laser point cloud.
In a third aspect, an embodiment of the present invention provides a detection apparatus, applied to a cloud server, where the apparatus includes:
A map construction module, used for receiving the laser point clouds sent by the laser radars and constructing a scene map of a detection area in real time according to the received laser point clouds, wherein the times at which the laser radars acquire laser point clouds are synchronized, the detection area is an area covered by the projection range of each laser radar, the scene map comprises objects described by object attributes, and the object attributes comprise at least one of the size, movement speed, position and category of an object;
A position receiving module, used for receiving the position of a vehicle, sent after the vehicle enters the detection area, together with collected information of objects around the vehicle, wherein the collected information comprises at least one of the size, movement speed, position relative to the vehicle and category of objects around the vehicle, collected by the vehicle and corresponding to the object attributes;
The position determining module is used for determining the vehicle position of the vehicle in the scene map according to the received position of the vehicle;
The index detection module is used for detecting the detection index of the acquired information according to the vehicle position and the object attribute, wherein the detection index comprises: at least one of accuracy, detection rate and false alarm rate of the acquired information.
In one embodiment of the present invention, when the object attribute includes a position of the object and the acquired information includes a relative position, the index detection module is specifically configured to:
Determining a map relative position of an object relative to the vehicle in the scene map according to the position of the object in the scene map in the detection area and the vehicle position of the vehicle in the scene map;
And detecting the detection index of the relative position according to the relative position of the map.
In one embodiment of the present invention, the map construction module is specifically configured to:
Receiving laser point clouds sent by each laser radar, and clustering point cloud data representing the same object in the laser point clouds aiming at the laser point clouds sent by each laser radar to obtain a clustering result corresponding to the laser radar;
Fusing point cloud data representing the same object in clustering results corresponding to different laser radars;
and constructing a scene map of the detection area in real time according to the fused laser point cloud.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor configured to implement the method steps of any of the second aspects when executing a program stored on a memory.
In a fifth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method steps of any of the second aspects.
In a sixth aspect, embodiments of the present invention also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method steps of any of the second aspects described above.
The embodiment of the invention has the beneficial effects that:
In the scheme provided by the embodiments of the invention, the laser radars collect laser point clouds in real time; the cloud server receives the laser point clouds sent by each laser radar and constructs a scene map of a detection area from them, receives the collected information of objects sent after a vehicle enters the detection area, and detects the detection indexes of the collected information according to the object attributes, represented in the scene map, of the objects around the vehicle in the detection area.
Because a laser point cloud accurately reflects the objects in a scene, a high-precision scene map can be constructed from it. On this basis, the object attributes of objects represented in the scene map constructed from the laser point clouds are taken as true values of the object information, and the collected information of objects acquired by the vehicle is compared with those true-value object attributes, so that the detection indexes of the collected information can be determined.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a detection system according to an embodiment of the present invention;
fig. 2 is a signaling flow chart of a detection system according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an application scenario of a detection system according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a detection method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a detection device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Because the detection indexes of the collected information of objects need to be detected in order to ensure the safety of a vehicle's automatic driving, embodiments of the invention provide a detection system, method and device for detecting and evaluating the detection indexes of the collected information of objects acquired by a vehicle.
An embodiment of the invention provides a detection system comprising a plurality of laser radars and a cloud server, wherein the times at which the laser radars acquire laser point clouds are synchronized.
The laser radars are used for collecting laser point clouds in real time and sending the collected laser point clouds to the cloud server.
The cloud server is used for receiving the laser point clouds sent by the laser radars and constructing a scene map of the detection area in real time according to the received laser point clouds, wherein the scene map comprises objects described by object attributes; receiving the position of the vehicle, sent after the vehicle enters the detection area, together with collected information of objects around the vehicle; determining the vehicle position of the vehicle in the scene map according to the received vehicle position; and detecting the detection indexes of the collected information according to the vehicle position and the object attributes. Here the detection area is an area covered by the projection range of each laser radar; the object attributes comprise at least one of the size, movement speed, position and category of an object; the collected information comprises at least one of the size, movement speed, position relative to the vehicle and category of objects around the vehicle, collected by the vehicle and corresponding to the object attributes; and the detection indexes comprise at least one of the accuracy, detection rate and false alarm rate of the collected information.
Because a laser point cloud accurately reflects the objects in a scene, a high-precision scene map can be constructed from it. On this basis, the object attributes of objects represented in the scene map constructed from the laser point clouds are taken as true values of the object information, and the collected information of objects acquired by the vehicle is compared with those true-value object attributes, so that the detection indexes of the collected information of objects acquired by the vehicle can be determined.
The detection system, the detection method and the detection device provided by the embodiment of the invention are described below through specific embodiments.
Referring to fig. 1, an embodiment of the present invention provides a schematic structural diagram of a detection system, where the system includes a plurality of lidars 101 and a cloud server 102.
The times at which the lidars 101 acquire laser point clouds are synchronized.
Specifically, time synchronization of the laser point clouds acquired by the lidars 101 can be achieved by running PTP (Precision Time Protocol) between the individual lidars 101.
Referring to fig. 2, an embodiment of the present invention provides a signaling flow diagram of a detection system.
Referring to fig. 3, an embodiment of the present invention provides an application scenario schematic diagram of a detection system.
In the figure, the gray part indicates the detection area, element A indicates the cloud server 102, element B indicates a laser radar 101, element C indicates the vehicle, and elements D, E and F indicate objects around the vehicle. The arrows indicate the lidars 101 and the vehicle sending data to the cloud server 102.
The operation of the detection system shown in fig. 1 will be described with reference to fig. 2 and 3.
S201: the laser radar 101 collects the laser point cloud in real time.
Specifically, a laser radar emits laser light; when the light encounters an object's surface during transmission it is reflected, forming a projection point. The laser radar receives the reflected light and, from the emission time, the reception time and the laser reflectivity, determines the distance between the projection point and the laser radar, and hence the position of the projection point. The set of positions of the projection points formed by the laser light emitted by the lidar is referred to as a laser point cloud.
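The time-of-flight geometry described above can be sketched as follows. This is an illustrative toy, not taken from the patent: the function name, the beam-direction input and the assumption that the sensor reports paired emission/reception timestamps are all assumptions for the example.

```python
# Illustrative sketch (assumed interface, not from the patent):
# recovering one projection point from a single laser return.
C = 299_792_458.0  # speed of light in m/s

def projection_point(emit_t, recv_t, origin, direction):
    """Position of the reflecting surface point for a single return.

    emit_t, recv_t: emission / reception timestamps in seconds
    origin:         (x, y, z) of the lidar
    direction:      unit vector of the emitted beam
    """
    distance = C * (recv_t - emit_t) / 2.0  # round trip -> one way
    return tuple(o + distance * d for o, d in zip(origin, direction))

# A return received 200 ns after emission lies about 29.98 m away.
pt = projection_point(0.0, 200e-9, (0.0, 0.0, 2.0), (1.0, 0.0, 0.0))
```

Repeating this for every return in a scan yields the set of projection-point positions that the text calls a laser point cloud.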
The laser radar 101 may collect the laser point cloud at a preset frequency, for example, 10 times per second, 20 times per second, etc. Since the laser radar 101 continuously collects the laser point cloud according to the preset frequency, the process of collecting the laser point cloud by the laser radar 101 can be considered to be real-time.
Because the times at which the lidars 101 collect laser point clouds are synchronized as described above, the different lidars 101 collect laser point clouds at the same frequency.
S202: the laser radar 101 transmits the collected laser point cloud to the cloud server 102.
Specifically, the laser radar 101 may send the collected laser point cloud to the cloud server 102 through ethernet.
S203: the cloud server 102 receives the laser point clouds sent by the laser radars 101, and constructs a scene map of the detection area in real time according to the received laser point clouds.
The detection area is an area including the projection range of each laser radar 101. The scene map comprises objects described based on object attributes; the object attributes include: at least one of the size, the movement speed, the position and the belonging category of the object.
In particular, the size of an object may be expressed in terms of the smallest cube that can contain the object.
The position of the object may be represented as a position of the object in the scene map, for example, as coordinate values of coordinate points in a detection area coordinate system covering the detection area. The origin of the detection region coordinate system may be any point in the detection region.
In addition, the position of the object may be represented in the form of the longitude and latitude of the position of the object, for example, the longitude and latitude of each point in the scene map may be determined according to the longitude and latitude of the position of each laser radar, so as to determine the longitude and latitude of the position of the object.
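Deriving the longitude and latitude of a map point from the surveyed longitude and latitude of a lidar can be sketched with a local flat-earth approximation. The helper below is hypothetical (the patent does not specify a conversion); it assumes a small detection area and uses a simple equirectangular model around the origin.

```python
import math

# Hypothetical helper (assumed, not specified by the patent): convert an
# object's local (east, north) offset in the detection-area frame into
# latitude/longitude, given the surveyed lat/lon of the frame origin
# (e.g. one lidar). Adequate only over a small area.
EARTH_R = 6_378_137.0  # WGS-84 equatorial radius, m

def local_to_latlon(east_m, north_m, origin_lat, origin_lon):
    dlat = math.degrees(north_m / EARTH_R)
    dlon = math.degrees(east_m / (EARTH_R * math.cos(math.radians(origin_lat))))
    return origin_lat + dlat, origin_lon + dlon
```

The inverse mapping (lat/lon to local metres) follows by swapping the multiplications for divisions, which is how a vehicle's GPS/BeiDou fix can be placed into the scene map frame.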
The categories to which the object belongs may include vehicles, pedestrians, bicycles, motorcycles, trees, traffic lights, and the like.
In addition, when the detection area coincides with, or is a sub-area of, the combined projection range of the lidars 101, the lidars 101 can project laser light onto every position in the detection area; that is, they completely cover it. A scene map of the whole detection area can therefore be constructed from the laser point clouds sent by the lidars 101, and since this map covers every object in the detection area, it is more accurate.
Specifically, the laser point cloud collected by a lidar 101 consists of the positions of the projection points of its emitted laser light, and those projection points lie on object surfaces. The position of an object can therefore be determined from the laser point cloud, its category and size can be determined from the surface shape the point cloud represents, and its movement speed can be determined from its positions at adjacent times in the point clouds collected in real time. Because each lidar 101 is limited in projection area and direction, determining the object attributes of all objects in the detection area requires several lidars 101 collecting laser point clouds synchronously from different directions. The object attributes of the individual objects in the detection area can thus be determined from the laser point clouds of the different lidars 101, and a scene map of the detection area constructed from those attributes.
Because the laser radars 101 collect the laser point clouds in time synchronously, a scene map of the detection area at the moment can be constructed according to the laser point clouds collected by the laser radars 101 at the same moment. Since the laser point cloud is collected in real time, a scene map of the detection area can be constructed in real time according to the laser point cloud collected in real time by the laser radar 101.
Since objects in the detection area may be moving, the scene maps constructed by the cloud server 102 at different times may differ. When constructing the scene map in real time, the map constructed at the previous moment can therefore be updated with the laser point cloud obtained at the current moment to produce the current map, which keeps the map accurate. In addition, moving objects in the detection area can be identified from the scene maps constructed at adjacent times, and the trajectory of a moving object can be determined from its positions in those maps.
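The speed estimate from positions at adjacent times mentioned above can be sketched minimally. The fixed acquisition frequency and the use of cluster centroids as object positions are assumptions for the example, not details given by the patent.

```python
import math

# Sketch (assumed details): estimating an object's speed from its
# centroid positions in scene maps built at adjacent acquisition
# times, given the point-cloud acquisition frequency.
def estimate_speed(pos_prev, pos_curr, freq_hz):
    """Speed in m/s from two successive map positions."""
    dt = 1.0 / freq_hz               # time between adjacent frames
    dist = math.dist(pos_prev, pos_curr)
    return dist / dt

# At 10 Hz, an object that moved 1.5 m between frames travels 15 m/s.
v = estimate_speed((0.0, 0.0), (0.9, 1.2), 10.0)
```

Chaining such positions over many frames gives the motion trajectory of the object.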
In one embodiment of the present invention, the scene map of the detection area can be constructed in real time through the following steps A and B.
Step A: for the laser point cloud sent by each laser radar 101, cluster the point cloud data representing the same object, obtaining a clustering result corresponding to that laser radar 101.
Specifically, each item of point cloud data in a laser point cloud represents the position of one projection point of the laser light emitted by the lidar 101. Since projection points on the surface of the same object are close together, the position values of point cloud data representing the same object are also close. The point cloud data can therefore be clustered by their position values, and the point cloud data falling into the same class in the resulting clustering result represent the same object.
In one embodiment of the invention, the point cloud data can be clustered with a Euclidean clustering algorithm, a breadth-first search algorithm, a KD-Tree neighbour search algorithm or similar; object detection by clustering can also be realized with a deep-learning method.
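Of the algorithms just named, Euclidean clustering via breadth-first search is the simplest to sketch. The toy implementation below is illustrative only: the `eps` distance threshold is an assumed parameter, and a real system would use a KD-tree for the neighbour queries rather than the O(n²) scan here.

```python
import math

# Minimal Euclidean clustering sketch (assumed parameters): points
# closer than `eps` end up in the same cluster, grown breadth-first.
def euclidean_cluster(points, eps=0.5):
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            # gather still-unvisited neighbours within eps of point i
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= eps]
            for j in near:
                unvisited.discard(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(sorted(cluster))
    return clusters
```

Each returned cluster is a list of point indices assumed to belong to one object, i.e. one entry in the clustering result for that lidar.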
Step B: fuse the point cloud data representing the same object across the clustering results corresponding to different laser radars 101, and construct a scene map of the detection area in real time according to the fused laser point clouds.
Specifically, the projection ranges of different lidars 101 may overlap, so different lidars 101 may each acquire point cloud data representing the same object. The point cloud data representing the same object acquired by different lidars 101 are fused, and the object's attributes are determined jointly from the point cloud data representing it in the clustering results of the different lidars 101. Because the determined attributes draw on the clustering results of multiple lidars 101, they are more accurate, and the scene map of the detection area constructed from them is likewise more accurate.
For example, lidar a collects first point cloud data representing the top and door of a vehicle and lidar B collects second point cloud data representing the door and tire of the same vehicle. Because the first point cloud data and the second point cloud data both represent the same vehicle, the first point cloud data and the second point cloud data can be fused.
In one embodiment of the present invention, the point cloud data acquired by different lidars and representing the same object may be determined according to the positions of the objects reflected by the point cloud data in the clustering results corresponding to different lidars 101.
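A minimal sketch of such position-based matching follows. The centroid tolerance and the greedy one-to-one matching strategy are illustrative assumptions, not details specified by the patent.

```python
import math

# Hypothetical fusion step: clusters from two lidars are treated as
# the same physical object when their centroids lie within `tol`
# metres of each other, and the matched clusters' points are merged.
def centroid(cluster):
    n = len(cluster)
    return tuple(sum(p[k] for p in cluster) / n
                 for k in range(len(cluster[0])))

def fuse(clusters_a, clusters_b, tol=1.0):
    fused, unmatched_b = [], list(clusters_b)
    for ca in clusters_a:
        for cb in unmatched_b:
            if math.dist(centroid(ca), centroid(cb)) <= tol:
                fused.append(ca + cb)   # merge points of one object
                unmatched_b.remove(cb)
                break
        else:
            fused.append(ca)            # seen by lidar A only
    return fused + unmatched_b          # plus lidar-B-only clusters
```

Clusters seen by only one lidar pass through unchanged, so the fused result still covers the whole detection area.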
S204: the cloud server 102 receives the position of the vehicle transmitted after the vehicle enters the detection area and the collected information of objects around the vehicle collected by the vehicle.
The collected information comprises at least one of the size, movement speed, position relative to the vehicle and category of objects around the vehicle, collected by the vehicle and corresponding to the object attributes.
The position of the vehicle can be acquired by positioning equipment installed on the vehicle and based on GPS, the BeiDou system or the like, and is expressed as longitude and latitude.
In addition, the position of the vehicle can be acquired through the high-precision positioning equipment, so that the acquired position of the vehicle is more accurate. The high-precision positioning device can be a centimeter-level positioning device.
Specifically, the size of the object in the acquired information corresponds to the size of the object in the object attribute, the movement speed of the object in the acquired information corresponds to the movement speed of the object in the object attribute, the relative position of the object in the acquired information corresponds to the position of the object in the object attribute, and the category of the object in the acquired information corresponds to the category of the object in the object attribute.
The relative position may be represented by a coordinate value of a coordinate point in a coordinate system having the vehicle as an origin.
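Comparing a vehicle-reported relative position with the map relative position requires expressing the object's map position in that vehicle-centred coordinate system. The sketch below assumes the vehicle's heading is known; the frame convention (x forward, heading measured counter-clockwise from the map x-axis) is chosen for illustration, not taken from the patent.

```python
import math

# Sketch (assumed frame conventions): express an object's map position
# in a vehicle-centred frame, given the vehicle's map position and
# heading in radians. The result is the "map relative position" that
# is compared against the relative position reported by the vehicle.
def map_to_vehicle_frame(obj_xy, veh_xy, veh_heading):
    dx, dy = obj_xy[0] - veh_xy[0], obj_xy[1] - veh_xy[1]
    cos_h, sin_h = math.cos(veh_heading), math.sin(veh_heading)
    # rotate the offset by -heading into the vehicle frame
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)
```

With heading zero the transform reduces to a simple subtraction of the vehicle's coordinates.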
In one embodiment of the invention, the relative positions of objects surrounding the vehicle may be acquired by an environmental awareness module onboard the vehicle. The environment sensing module can be a laser radar, and the relative positions of objects around the vehicle are determined by collecting laser point clouds around the vehicle. The above-mentioned environmental perception module may also be an image acquisition device, which acquires images around the vehicle to identify objects in the images, thereby determining the relative positions of the objects around the vehicle. The environment sensing module can also be an ultrasonic radar, and the relative position of objects around the vehicle is determined through ultrasonic waves. The above-described environment sensing module may also be a millimeter wave radar by which the relative positions of objects surrounding the vehicle are determined. Other means are also possible by which the relative position of objects surrounding the vehicle is determined. The relative position acquired by the environment sensing module mounted on the vehicle may be referred to as a vehicle-acquired relative position.
Limited by the sensing capability of the environment sensing module mounted on the vehicle, the objects whose relative positions the vehicle can acquire are those within the sensing range of the module, that is, the objects whose distance from the vehicle is less than or equal to the sensing distance and that lie within the view-angle range of the module; these are the objects around the vehicle. For example, the sensing distance may be 200 m, 300 m, or the like, and the view-angle range may be a sector of 100 degrees, 150 degrees, or the like.
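The range-and-angle filtering described above can be sketched as follows; the function name, the 2-D coordinates, and the default thresholds are illustrative assumptions, not taken from the patent:

```python
import math

def objects_in_perception_range(vehicle_pos, heading_deg, objects,
                                sensing_dist=200.0, fov_deg=100.0):
    """Keep only the objects within the sensing distance and inside the
    sector-shaped view-angle range centered on the vehicle heading."""
    visible = {}
    for name, (ox, oy) in objects.items():
        dx, dy = ox - vehicle_pos[0], oy - vehicle_pos[1]
        dist = math.hypot(dx, dy)
        if dist > sensing_dist:
            continue  # beyond the sensing distance
        bearing = math.degrees(math.atan2(dy, dx))
        # angular offset from the heading, wrapped into [-180, 180)
        offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(offset) <= fov_deg / 2.0:
            visible[name] = dist
    return visible
```

For a vehicle at the origin heading along the x-axis, an object 100 m ahead falls inside the sector, while an object 300 m ahead or 100 m directly to the side does not.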
For example, referring to fig. 3, the above object may be a building D around a vehicle, a traffic light E, other vehicles F, or the like.
The vehicle may transmit the relative position to the cloud server 102 through V2X (Vehicle to Everything, vehicle wireless communication technology).
Referring to fig. 3, a vehicle C may collect the relative positions of an object D, an object E, and an object F, and send the collected relative positions to the cloud server A.
Specifically, since the scene map constructed by the cloud server 102 is a map of the detection area, the object attributes of the objects in the detection area can be determined from the scene map and used as true values for detecting the detection indexes of the collected information. If the vehicle has not entered the detection area, the information it collects does not describe objects in the detection area, so the detection indexes cannot be detected against the object attributes determined from the scene map. Therefore, the cloud server receives the collected information sent by the vehicle, and detects its detection indexes, only after the vehicle enters the detection area; this reduces the amount of data to be processed when detecting the detection indexes and improves the efficiency of the detection.
S205: the cloud server 102 determines a vehicle position of the vehicle in the scene map according to the received vehicle position.
Specifically, since the vehicle is located in the detection area, its position can be determined from the scene map; however, the detection area may contain other vehicles, so it is not possible to tell directly from the scene map which vehicle in the map is this vehicle. The vehicle position of the vehicle in the scene map is therefore determined from the position transmitted by the vehicle.
In the case that the positions of the objects represented by the scene map and the position transmitted by the vehicle are both expressed as longitude and latitude determined by GPS or the Beidou system, the transmitted vehicle position can be matched against the positions of the objects represented by the scene map to determine the object corresponding to the vehicle in the scene map.
In addition, when the positions of the objects represented by the scene map are expressed as coordinate values of coordinate points in a detection area coordinate system covering the detection area, the position transmitted by the vehicle can be converted into a position in that detection area coordinate system and matched against the positions of the objects represented by the scene map, to determine the object corresponding to the vehicle in the scene map.
The position, represented by the scene map, of the object corresponding to the vehicle may be taken as the vehicle position. Alternatively, the vehicle position may be the average of the position of the corresponding object represented by the scene map and the position transmitted by the vehicle.
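A minimal sketch of this matching-and-averaging step, assuming 2-D positions; the function name, object identifiers, and the gating threshold `max_match_dist` are hypothetical:

```python
import math

def locate_vehicle_in_map(reported_pos, map_vehicles, max_match_dist=3.0):
    """Match the position reported by the vehicle to the nearest vehicle
    object in the scene map, then average the two positions."""
    best_id, best_d = None, float("inf")
    for obj_id, (mx, my) in map_vehicles.items():
        d = math.hypot(mx - reported_pos[0], my - reported_pos[1])
        if d < best_d:
            best_id, best_d = obj_id, d
    if best_id is None or best_d > max_match_dist:
        return None  # no scene-map vehicle close enough to the report
    mx, my = map_vehicles[best_id]
    fused = ((mx + reported_pos[0]) / 2.0, (my + reported_pos[1]) / 2.0)
    return best_id, fused
```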
Because the vehicle position determined in this way is more accurate than the position determined by GPS or the Beidou system alone, the object attributes of the objects around the vehicle are determined more accurately, and using these more accurate object attributes as the reference information for detecting the detection indexes of the collected information makes the detection results more accurate.
S206: the cloud server 102 detects the detection index of the collected information based on the vehicle position and the object attribute.
Wherein, the detection index comprises: at least one of accuracy, detection rate and false alarm rate of the acquired information.
Specifically, the accuracy of the acquired information can be determined by comparing the acquired information of each object with the object attribute. The ratio of the number of objects contained in both the collected information and the object attribute to the number of objects around the vehicle contained in the object attribute may be calculated as the above-described detection rate. The ratio of the number of objects contained in the acquired information but not in the object attribute to the number of objects contained in the object attribute may be calculated as the false alarm rate.
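The detection-rate and false-alarm-rate definitions in the preceding paragraph can be expressed directly in code; the object identifiers and function name are illustrative:

```python
def detection_metrics(detected_ids, truth_ids):
    """Detection rate: objects in both the collected information and the
    object attributes, over all truth objects around the vehicle.
    False-alarm rate: objects in the collected information but not in
    the object attributes, over all truth objects."""
    detected, truth = set(detected_ids), set(truth_ids)
    if not truth:
        return 0.0, 0.0
    detection_rate = len(detected & truth) / len(truth)
    false_alarm_rate = len(detected - truth) / len(truth)
    return detection_rate, false_alarm_rate
```

For example, if the scene map contains four objects around the vehicle and the vehicle reports two of them plus one spurious object, the detection rate is 0.5 and the false-alarm rate is 0.25.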
For the size of an object in the collected information, the size deviations between the length, width, and height of the object in the collected information and the length, width, and height of the object in the scene map can be calculated, and the average, maximum, minimum, etc. of the size deviations determined, thereby determining the accuracy of the size of the object in the collected information.
For the movement speed of the object in the acquired information, the speed deviation between the movement speed of the object in the acquired information and the movement speed of the object in the scene map can be calculated, and the average value, the maximum value, the minimum value and the like of the speed deviation are determined, so that the accuracy of the movement speed of the object in the acquired information is determined.
For the position of the object in the acquired information, a positional deviation between the relative position of the object and the vehicle in the scene map and the relative position of the object contained in the acquired information with respect to the vehicle may be calculated, and an average value, a maximum value, a minimum value, etc. of the positional deviation may be determined, thereby determining the accuracy of the relative position of the object in the acquired information with respect to the vehicle.
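Whether the compared values are sizes, movement speeds, or relative positions, the deviation statistics described in the three preceding paragraphs reduce to the same computation; a minimal sketch with an illustrative function name:

```python
def deviation_stats(measured, truth):
    """Absolute deviations between values in the collected information
    and the corresponding scene-map truth values, reduced to
    (average, maximum, minimum)."""
    devs = [abs(m - t) for m, t in zip(measured, truth)]
    return sum(devs) / len(devs), max(devs), min(devs)
```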
Specifically, the accuracy of the relative position may also be determined through steps C to D described below, which will not be repeated here.
For the category of the object in the acquired information, the accuracy of the category of the object can be determined by determining the number of objects in the acquired information, the category of which is the same as the category of the object in the scene map, and dividing the number by the total number of objects in the scene map.
Since the scene map may represent the object attributes of every object in the detection area, it includes both the object attributes of the objects around the vehicle and those of other objects. The vehicle, however, only collects information about the objects around it, so when detecting the detection indexes of the collected information, the object attributes of the objects around the vehicle are selected as the basis for detection.
The object attribute of the object within the preset range with the vehicle position as the reference position may be determined as the object attribute of the object around the vehicle, and the preset range may be matched with the sensing range of the environment sensing module mounted on the vehicle. For example, a circular range having a vehicle position as a center and a radius of 5m may be used as the preset range, or a sector range having a vehicle position as a center, a radius of 3m, and an angle of 100 degrees may be used as the preset range.
Specifically, the time detection index of the acquired information at each acquisition time can be detected separately, and the average value, the maximum value, the minimum value, or the like of the time detection index can be counted as the detection index of the acquired information. The collection time is the time when the vehicle collects the collection information of the object.
In addition, for the relative positions in the collected information, the motion trajectory of an object in the detection area, as reflected by the point cloud data collected by the laser radars, can be determined from the scene maps at the successive acquisition times while the vehicle travels through the detection area, and taken as the true motion trajectory. The motion trajectory of the object as collected by the vehicle is then determined from the relative positions in the collected information and compared with the true motion trajectory to determine the detection index of the relative positions. Since the true motion trajectory is only available after the vehicle has travelled out of the detection area, the detection index of the relative positions is determined after the vehicle collects them rather than in real time; both the true motion trajectory and the vehicle-collected motion trajectory can therefore be regarded as offline motion trajectories.
Since the frequency at which the laser radars 101 collect laser point clouds may differ from the frequency at which the vehicle collects the relative positions of objects, a laser radar 101 may not have collected a laser point cloud at a given acquisition time. In this case, the laser point cloud collected by the laser radar 101 at the time closest to the acquisition time can be selected, a scene map constructed from it, and the object attributes of the objects in that scene map used as the reference information for detecting the detection indexes of the collected information. Alternatively, the laser point cloud at the acquisition time can be obtained by simulation through a spherical interpolation algorithm, a scene map constructed from the interpolated laser point cloud, and the object attributes of the objects at the acquisition time determined from it.
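The nearest-frame selection can be sketched as follows, assuming the lidar frame timestamps are sorted in ascending order; the spherical-interpolation alternative is omitted, and all names are illustrative:

```python
import bisect

def nearest_frame_time(frame_times, t):
    """Return the lidar frame timestamp closest to acquisition time t;
    frame_times must be sorted in ascending order."""
    i = bisect.bisect_left(frame_times, t)
    # the closest frame is either the one just before or just after t
    candidates = frame_times[max(0, i - 1): i + 1]
    return min(candidates, key=lambda ft: abs(ft - t))
```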
In addition, the times at which the laser radars 101 collect laser point clouds and the times at which the vehicle collects the above collected information may both be synchronized with the working clock of a preset positioning system, so that the laser-radar collection times are synchronized with the times at which the vehicle collects the relative positions. The preset positioning system may be the GPS system, the Beidou system, or another system. In this way, the laser point cloud at the moment the vehicle collects the relative position of an object, and the moment the vehicle collects the collected information of the object, can be obtained accurately; the object attributes represented by the corresponding scene map can then be used as the reference information for detecting the detection indexes of the collected information, making the detection results more accurate.
In another embodiment of the present invention, the object attribute includes a position of the object, and the acquired information includes a relative position, and the detection index of the relative position may be detected by the following steps C to D.
Step C: and determining a map relative position of the object with respect to the vehicle in the scene map based on a position of the object in the detection area in the scene map and a vehicle position of the vehicle in the scene map.
The position of the object represented by the scene map can be compared with the vehicle position to determine the map relative position, which serves as the true value when detecting the detection index of the relative position. For example, the map relative position may be: the object is 100 m to the right of the vehicle, 500 m ahead of it, and so on.
Step D: and detecting the detection index of the relative position according to the relative position of the map.
Since the map relative position is the position of the object relative to the vehicle, which is represented by the scene map, and the relative position transmitted by the vehicle is the position of the object relative to the vehicle, the map relative position of the object can be directly compared with the relative position, and the detection index of the relative position can be detected.
Specifically, for the accuracy in the detection index of the relative position, for each acquisition time the distance difference between the relative position of each object around the vehicle and its map relative position can be calculated in each direction of three-dimensional space; the sum, average, minimum, or maximum of these per-object distance differences is taken as the time distance difference at that acquisition time, and the sum, average, minimum, or maximum of the time distance differences over all acquisition times is taken as the total distance difference. The greater the total distance difference, the lower the accuracy of the relative position.
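A sketch of this two-level aggregation, using the average at both levels (the text equally allows the sum, minimum, or maximum); the input shape and function name are illustrative:

```python
def total_distance_difference(frames):
    """frames: one list per acquisition time, each containing the
    per-object absolute distance differences at that time. Reduce each
    time's objects to a time distance difference, then reduce over all
    acquisition times to the total distance difference."""
    time_diffs = [sum(objs) / len(objs) for objs in frames]
    return sum(time_diffs) / len(time_diffs)
```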
Because the laser point cloud can accurately reflect the objects in a scene, a high-precision scene map can be constructed based on the laser point cloud. On this basis, the object attributes of the objects represented by the scene map constructed from the laser point cloud are used as the true values of the object information, and the collected information of the objects collected by the vehicle is compared with these true-value object attributes, so that the detection indexes of the collected information of the objects collected by the vehicle can be determined.
Corresponding to the foregoing detection system, referring to fig. 4, an embodiment of the present invention provides a flow chart of a detection method, which is applied to the cloud server, and the method may be implemented by the following steps S401 to S403.
S401: and receiving laser point clouds sent by each laser radar, and constructing a scene map of the detection area in real time according to the received laser point clouds.
The times at which the laser radars collect the laser point clouds are synchronized, the detection area is an area covering the projection ranges of the laser radars, the scene map includes objects described based on object attributes, and the object attributes include: at least one of the size, movement speed, position, and category of an object.
S402: and receiving the position of the vehicle sent after the vehicle enters the detection area and the collected information, collected by the vehicle, of objects located around the vehicle.
Wherein the collected information includes: at least one of the size, movement speed, relative position with respect to the vehicle, and category of the objects around the vehicle collected by the vehicle, corresponding to the object attributes.
S403: and determining the vehicle position of the vehicle in the scene map according to the received vehicle position.
S404: and detecting the detection index of the acquired information according to the vehicle position and the object attribute.
Wherein, the detection index comprises: at least one of accuracy, detection rate and false alarm rate of the acquired information.
Because the laser point cloud can accurately reflect objects in the scene, a high-precision scene map can be constructed based on the laser point cloud. On the basis, object attributes of objects represented by a scene map constructed by the laser point cloud are taken as true values of object information, and the acquired information of the objects acquired by the vehicle is compared with the object attributes of the objects taken as the true values, so that detection indexes of the acquired information of the objects acquired by the vehicle can be determined.
In one embodiment of the present invention, when the object attribute includes a position of the object and the acquired information includes a relative position, the step S404 may be implemented through steps S404A-S404B.
S404A: and determining a map relative position of the object with respect to the vehicle in the scene map based on a position of the object in the detection area in the scene map and a vehicle position of the vehicle in the scene map.
S404B: and detecting the detection index of the relative position according to the relative position of the map.
As described above, since the map relative position is the position of the object with respect to the vehicle, which is represented by the scene map, and the relative position transmitted by the vehicle is the position of the object with respect to the vehicle, the map relative position of the object can be directly compared with the relative position, and the detection index of the relative position can be detected.
In one embodiment of the present invention, the above step S401 may be implemented by the following steps S401A to S401C.
S401A: and receiving laser point clouds sent by each laser radar, and clustering point cloud data representing the same object in the laser point clouds aiming at the laser point clouds sent by each laser radar to obtain a clustering result corresponding to the laser radar.
S401B: and fusing the point cloud data representing the same object in the clustering results corresponding to different laser radars.
S401C: and constructing a scene map of the detection area in real time according to the fused laser point cloud.
As can be seen from the above, since the projection range of each laser radar is limited, the point cloud data in the laser point cloud collected by one laser radar may represent only one part of an object, while another laser radar captures another part of the same object. Therefore, by fusing the point cloud data representing the same object in the clustering results corresponding to different laser radars, all the point cloud data representing that object can be determined.
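A simplified sketch of this fusion step, merging clusters whose 2-D centroids fall within a hypothetical distance threshold; the patent does not fix the matching rule, so the centroid criterion and all names here are assumptions:

```python
import math

def fuse_clusters(clusters_per_lidar, merge_dist=1.0):
    """Merge clusters from different lidars whose centroids lie within
    merge_dist of each other, so that partial views of the same object
    combine into a single point set."""
    fused = []  # entries: [centroid, points]
    for clusters in clusters_per_lidar:
        for pts in clusters:
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            for entry in fused:
                fx, fy = entry[0]
                if math.hypot(fx - cx, fy - cy) <= merge_dist:
                    entry[1].extend(pts)  # same object: merge points
                    n = len(entry[1])
                    entry[0] = (sum(p[0] for p in entry[1]) / n,
                                sum(p[1] for p in entry[1]) / n)
                    break
            else:
                fused.append([(cx, cy), list(pts)])  # new object
    return [pts for _, pts in fused]
```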
Specifically, the detection method applied to the cloud server is the same as the operation flow of the cloud server in the detection system, and will not be described herein.
Corresponding to the foregoing detection system, referring to fig. 5, an embodiment of the present invention provides a schematic structural diagram of a detection device, which is applied to a cloud server, where the device includes:
the map construction module 501 is configured to receive laser point clouds sent by each laser radar, and construct a scene map of a detection area according to the received laser point clouds in real time, where each laser radar collects time synchronization of the laser point clouds, the detection area is an area including a projection range of each laser radar, the scene map includes an object described based on object attributes, and the object attributes include: at least one of the size, the movement speed, the position and the category of the object;
The position receiving module 502 is configured to receive the position of the vehicle sent after the vehicle enters the detection area and the collected information, collected by the vehicle, of objects around the vehicle, where the collected information includes: at least one of the size, movement speed, relative position with respect to the vehicle, and category of the objects around the vehicle collected by the vehicle, corresponding to the object attributes;
a position determining module 503, configured to determine a vehicle position of the vehicle in the scene map according to the received position of the vehicle;
an index detection module 504, configured to detect a detection index of the collected information according to the vehicle position and the object attribute, where the detection index includes: at least one of accuracy, detection rate and false alarm rate of the acquired information.
Because the laser point cloud can accurately reflect objects in the scene, a high-precision scene map can be constructed based on the laser point cloud. On the basis, object attributes of objects represented by a scene map constructed by the laser point cloud are taken as true values of object information, and the acquired information of the objects acquired by the vehicle is compared with the object attributes of the objects taken as the true values, so that detection indexes of the acquired information of the objects acquired by the vehicle can be determined.
In one embodiment of the present invention, when the object attribute includes a position of the object and the acquired information includes a relative position, the index detection module 504 is specifically configured to:
Determining a map relative position of an object relative to the vehicle in the scene map according to the position of the object in the scene map in the detection area and the vehicle position of the vehicle in the scene map;
And detecting the detection index of the relative position according to the relative position of the map.
As described above, since the map relative position is the position of the object with respect to the vehicle, which is represented by the scene map, and the relative position transmitted by the vehicle is the position of the object with respect to the vehicle, the map relative position of the object can be directly compared with the relative position, and the detection index of the relative position can be detected.
In one embodiment of the present invention, the map construction module 501 is specifically configured to:
Receiving laser point clouds sent by each laser radar, and clustering point cloud data representing the same object in the laser point clouds aiming at the laser point clouds sent by each laser radar to obtain a clustering result corresponding to the laser radar;
Fusing point cloud data representing the same object in clustering results corresponding to different laser radars;
and constructing a scene map of the detection area in real time according to the fused laser point cloud.
As can be seen from the above, since the projection range of each laser radar is limited, the point cloud data in the laser point cloud collected by one laser radar may represent only one part of an object, while another laser radar captures another part of the same object. Therefore, by fusing the point cloud data representing the same object in the clustering results corresponding to different laser radars, all the point cloud data representing that object can be determined.
Specifically, the detection device applied to the cloud server is the same as the operation flow of the cloud server in the detection system, and will not be described herein.
The embodiment of the invention also provides an electronic device, as a cloud server, as shown in fig. 6, comprising a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 complete communication with each other through the communication bus 604,
A memory 603 for storing a computer program;
the processor 601 is configured to implement any of the method steps described above in the detection method when executing the program stored in the memory 603.
When the electronic equipment provided by the embodiment of the invention is used as the cloud server for detection, the laser point cloud can accurately reflect objects in a scene, so that a high-precision scene map can be constructed based on the laser point cloud. On the basis, object attributes of objects represented by a scene map constructed by the laser point cloud are taken as true values of object information, and the acquired information of the objects acquired by the vehicle is compared with the object attributes of the objects taken as the true values, so that detection indexes of the acquired information of the objects acquired by the vehicle can be determined.
The communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, a computer readable storage medium is provided, in which a computer program is stored, the computer program implementing any of the above method steps applied to a cloud server detection method when executed by a processor.
In the case of executing the computer program stored in the computer readable storage medium applied to the cloud server provided by the embodiment of the invention for detection, the laser point cloud can accurately reflect objects in the scene, so that a high-precision scene map can be constructed based on the laser point cloud. On the basis, object attributes of objects represented by a scene map constructed by the laser point cloud are taken as true values of object information, and the acquired information of the objects acquired by the vehicle is compared with the object attributes of the objects taken as the true values, so that detection indexes of the acquired information of the objects acquired by the vehicle can be determined.
In yet another embodiment of the present invention, a computer program product comprising instructions, which when run on a computer, causes the computer to perform the method steps of any of the above detection methods applied to a cloud server is also provided.
Under the condition that the computer program applied to the cloud server provided by the embodiment of the invention is executed for detection, the laser point cloud can accurately reflect objects in a scene, so that a high-precision scene map can be constructed based on the laser point cloud. On the basis, object attributes of objects represented by a scene map constructed by the laser point cloud are taken as true values of object information, and the acquired information of the objects acquired by the vehicle is compared with the object attributes of the objects taken as the true values, so that detection indexes of the acquired information of the objects acquired by the vehicle can be determined.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., Solid State Disk (SSD)), or the like.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the method, apparatus, electronic device, computer readable storage medium and computer program product, the description is relatively simple as it is substantially similar to the system embodiments, as relevant points are found in the partial description of the system embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (9)

1. A detection system, the system comprising: a plurality of physical laser radars and a cloud server, wherein the times at which the laser radars collect laser point clouds are synchronized;
the laser radars are configured to collect laser point clouds of a physical detection area in real time and send the collected laser point clouds to the cloud server, different laser radars emitting laser to collect the laser point clouds of the objects in the detection area from different directions;
the cloud server is configured to receive the laser point clouds sent by the laser radars and construct a scene map of the physical detection area in real time according to the received laser point clouds, the scene map including objects described based on object attributes; receive the position of a physical vehicle sent after the vehicle enters the detection area and the collected information, collected by the vehicle, of objects around the vehicle; determine the vehicle position of the vehicle in the scene map according to the received vehicle position; and compare the collected information of the objects collected by the vehicle with the object attributes of the objects serving as true values, to detect detection indexes of the collected information, wherein the detection area is an area covering the projection ranges of the laser radars, and the object attributes include: at least one of the size, movement speed, position, and category of an object; the collected information includes: at least one of the size, movement speed, relative position with respect to the vehicle, and category of the objects around the vehicle collected by the vehicle, corresponding to the object attributes; and the detection indexes include: at least one of the accuracy, detection rate, and false alarm rate of the information collected by the vehicle, as compared with the object attributes in the scene map;
And when the object attribute includes the position of the object and the acquired information includes the relative position, the cloud server is specifically configured to determine, in the scene map, the map relative position of the object with respect to the vehicle according to the position of the object in the detection area in the scene map and the vehicle position of the vehicle in the scene map, and detect the detection index of the relative position according to the map relative position.
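The relative-position check described in claim 1 can be illustrated with a short sketch (function names and the use of 2D Euclidean error are illustrative assumptions; the patent does not specify formulas): the map relative position is the object's scene-map position minus the vehicle's scene-map position, and the vehicle-reported relative position is then scored against it.

```python
import math

def map_relative_position(object_pos, vehicle_pos):
    """Relative position of an object w.r.t. the vehicle in scene-map coordinates.

    Positions are (x, y) tuples in the scene map; the scene map serves as
    ground truth, so this difference is the reference value."""
    return (object_pos[0] - vehicle_pos[0], object_pos[1] - vehicle_pos[1])

def relative_position_error(reported_rel, object_pos, vehicle_pos):
    """Euclidean distance between the relative position reported by the vehicle
    and the map relative position derived from the scene map."""
    truth = map_relative_position(object_pos, vehicle_pos)
    return math.hypot(reported_rel[0] - truth[0], reported_rel[1] - truth[1])
```

A small error indicates the vehicle's perception of the object's relative position agrees with the scene-map ground truth.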
2. The system of claim 1, wherein
the cloud server is specifically configured to: for the laser point cloud sent by each laser radar, cluster point cloud data representing a same object in that laser point cloud to obtain a clustering result corresponding to that laser radar; fuse point cloud data representing the same object across the clustering results of different laser radars; and construct the scene map of the detection area in real time from the fused laser point cloud.
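The cluster-then-fuse step of claim 2 can be sketched as follows. This is a minimal stand-in, assuming a greedy centroid-distance clustering and centroid-matching fusion; the patent does not name a specific clustering algorithm, and the `eps` thresholds are hypothetical.

```python
import numpy as np

def cluster_points(points, eps=0.5):
    """Greedy distance-based clustering: a point joins the first cluster whose
    centroid is closer than eps, otherwise it starts a new cluster.
    Stands in for the per-lidar step of grouping points of the same object."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < eps:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.array(c) for c in clusters]

def fuse_clusters(clusters_a, clusters_b, eps=1.0):
    """Merge clusters from two lidars whose centroids lie within eps, i.e. that
    are assumed to represent the same physical object seen from two directions."""
    fused = list(clusters_a)
    for cb in clusters_b:
        for i, ca in enumerate(fused):
            if np.linalg.norm(ca.mean(axis=0) - cb.mean(axis=0)) < eps:
                fused[i] = np.vstack([ca, cb])
                break
        else:
            fused.append(cb)
    return fused
```

The fused clusters would then be used to place objects, with attributes such as size and position, into the real-time scene map.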
3. The system of claim 1 or 2, wherein the times at which the laser radars collect the laser point clouds are synchronized to a preset operating clock of a positioning system, and the time at which the vehicle acquires the collected information is synchronized to the same preset operating clock of the positioning system.
4. A detection method, applied to a cloud server, the method comprising:
receiving laser point clouds sent by physical laser radars, and constructing, in real time, a scene map of a physical detection area from the received laser point clouds, wherein the times at which the laser radars acquire the laser point clouds are synchronized, the detection area is an area covered by the projection ranges of the laser radars, the scene map contains objects described by object attributes, the object attributes comprise at least one of a size, a movement speed, a position and a category of an object, and different laser radars emit laser from different directions to collect laser point clouds of objects in the detection area;
receiving a vehicle position sent by a physical vehicle after the vehicle enters the detection area, together with collected information about objects around the vehicle acquired by the vehicle, wherein the collected information comprises at least one of a size, a movement speed, a position relative to the vehicle and a category of an object around the vehicle, corresponding to the object attributes;
determining a position of the vehicle in the scene map according to the received vehicle position; and
taking the object attributes of the objects in the scene map as ground truth, comparing the collected information acquired by the vehicle against those attributes and evaluating detection indexes of the collected information, wherein the detection indexes comprise at least one of an accuracy, a detection rate and a false-alarm rate of the collected information relative to the object attributes in the scene map;
wherein, when the object attributes include the position of an object and the collected information includes the relative position, evaluating the detection indexes of the collected information comprises:
determining, in the scene map, a map relative position of the object with respect to the vehicle from the position of the object in the scene map and the position of the vehicle in the scene map; and
evaluating the detection index of the relative position against the map relative position.
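The detection rate and false-alarm rate named in claim 4 can be sketched with simple counts once vehicle detections have been matched against scene-map objects. The formulas below are a plausible reading (detection rate over ground-truth objects, false-alarm rate over reported detections); the patent does not give explicit definitions.

```python
def detection_indexes(matched, missed, false_alarms):
    """Compute (detection_rate, false_alarm_rate) from matching counts.

    matched      -- ground-truth objects the vehicle detected
    missed       -- ground-truth objects the vehicle failed to detect
    false_alarms -- vehicle detections with no corresponding ground-truth object
    """
    total_truth = matched + missed          # objects present in the scene map
    total_reported = matched + false_alarms # detections reported by the vehicle
    detection_rate = matched / total_truth if total_truth else 0.0
    false_alarm_rate = false_alarms / total_reported if total_reported else 0.0
    return detection_rate, false_alarm_rate
```

For example, if the scene map contains 10 objects and the vehicle reports 10 detections of which 8 match, the detection rate is 0.8 and the false-alarm rate is 0.2.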
5. The method of claim 4, wherein receiving the laser point clouds sent by the laser radars and constructing the scene map of the detection area in real time comprises:
receiving the laser point clouds sent by the laser radars and, for the laser point cloud sent by each laser radar, clustering point cloud data representing a same object in that laser point cloud to obtain a clustering result corresponding to that laser radar;
fusing point cloud data representing the same object across the clustering results of different laser radars; and
constructing the scene map of the detection area in real time from the fused laser point cloud.
6. A detection apparatus, applied to a cloud server, the apparatus comprising:
a map construction module, configured to receive laser point clouds sent by physical laser radars and construct, in real time, a scene map of a physical detection area from the received laser point clouds, wherein the times at which the laser radars acquire the laser point clouds are synchronized, the detection area is an area covered by the projection ranges of the laser radars, the scene map contains objects described by object attributes, the object attributes comprise at least one of a size, a movement speed, a position and a category of an object, and different laser radars emit laser from different directions to collect laser point clouds of objects in the detection area;
a position receiving module, configured to receive a vehicle position sent by a physical vehicle after the vehicle enters the detection area, together with collected information about objects around the vehicle acquired by the vehicle, wherein the collected information comprises at least one of a size, a movement speed, a position relative to the vehicle and a category of an object around the vehicle, corresponding to the object attributes;
a position determining module, configured to determine a position of the vehicle in the scene map according to the received vehicle position; and
an index detection module, configured to take the object attributes of the objects in the scene map constructed from the laser point clouds as ground truth, compare the collected information acquired by the vehicle against those attributes, and evaluate detection indexes of the collected information, wherein the detection indexes comprise at least one of an accuracy, a detection rate and a false-alarm rate of the collected information relative to the object attributes in the scene map;
wherein the index detection module is specifically configured to:
determine, in the scene map, a map relative position of an object with respect to the vehicle from the position of the object in the scene map and the position of the vehicle in the scene map; and
evaluate the detection index of the relative position against the map relative position.
7. The apparatus of claim 6, wherein the map construction module is specifically configured to:
receive the laser point clouds sent by the laser radars and, for the laser point cloud sent by each laser radar, cluster point cloud data representing a same object in that laser point cloud to obtain a clustering result corresponding to that laser radar;
fuse point cloud data representing the same object across the clustering results of different laser radars; and
construct the scene map of the detection area in real time from the fused laser point cloud.
8. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the detection method of any one of claims 4-5 when executing the program stored in the memory.
9. A computer-readable storage medium, having stored therein a computer program which, when executed by a processor, implements the detection method of any one of claims 4-5.
CN202011020999.8A 2020-09-25 Detection system, method and device Active CN112147635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011020999.8A CN112147635B (en) 2020-09-25 Detection system, method and device

Publications (2)

Publication Number Publication Date
CN112147635A CN112147635A (en) 2020-12-29
CN112147635B true CN112147635B (en) 2024-05-31

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016118672A2 (en) * 2015-01-20 2016-07-28 Solfice Research, Inc. Real time machine vision and point-cloud analysis for remote sensing and vehicle control
CN108279428A (en) * 2017-01-05 2018-07-13 武汉四维图新科技有限公司 Map datum evaluating apparatus and system, data collecting system and collecting vehicle and acquisition base station
CN108958266A (en) * 2018-08-09 2018-12-07 北京智行者科技有限公司 A kind of map datum acquisition methods
CN110400363A (en) * 2018-04-24 2019-11-01 北京京东尚科信息技术有限公司 Map constructing method and device based on laser point cloud
CN110609290A (en) * 2019-09-19 2019-12-24 北京智行者科技有限公司 Laser radar matching positioning method and device
CN110779730A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 L3-level automatic driving system testing method based on virtual driving scene vehicle on-ring
WO2020154967A1 (en) * 2019-01-30 2020-08-06 Baidu.Com Times Technology (Beijing) Co., Ltd. Map partition system for autonomous vehicles
CN111580116A (en) * 2020-05-20 2020-08-25 湖北亿咖通科技有限公司 Method for evaluating target detection performance of vehicle-mounted system and electronic equipment

Similar Documents

Publication Publication Date Title
CN107015559B (en) Probabilistic inference of target tracking using hash weighted integration and summation
CN113012187B (en) Method and foreground extraction system for a vehicle and storage medium
US9079587B1 (en) Autonomous control in a dense vehicle environment
US11814039B2 (en) Vehicle operation using a dynamic occupancy grid
CN112382131B (en) Airport scene safety collision avoidance early warning system and method
CN111724616B (en) Method and device for acquiring and sharing data based on artificial intelligence
CN105787502A (en) Target Grouping Techniques For Object Fusion
EP4089659A1 (en) Map updating method, apparatus and device
CN113743171A (en) Target detection method and device
US11887324B2 (en) Cross-modality active learning for object detection
US20230109909A1 (en) Object detection using radar and lidar fusion
CN115683134A (en) Method and system for vehicle positioning and storage medium
WO2021217669A1 (en) Target detection method and apparatus
CN112147635B (en) Detection system, method and device
CN110411499B (en) Evaluation method and evaluation system for detection and identification capability of sensor
US20230077863A1 (en) Search algorithms and safety verification for compliant domain volumes
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN112147635A (en) Detection system, method and device
CN117178309A (en) Method for creating a map with collision probability
US20230041031A1 (en) Systems and methods for efficient vehicle extent estimation
CN113056715A (en) Autonomous vehicle field theory-based awareness
US20230089897A1 (en) Spatially and temporally consistent ground modelling with information fusion
CN114363308B (en) Map data transmission method and device
US20230387976A1 (en) Antenna monitoring and selection
US11675362B1 (en) Methods and systems for agent prioritization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant