CN114842445A - Target detection method, device, equipment and medium based on multi-path fusion - Google Patents

Target detection method, device, equipment and medium based on multi-path fusion

Info

Publication number
CN114842445A
CN114842445A (application CN202210379065.6A)
Authority
CN
China
Prior art keywords
information
path
sensing
perception
millimeter wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210379065.6A
Other languages
Chinese (zh)
Inventor
魏源伯
王祎男
吕颖
关瀛洲
刘汉旭
付仁涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202210379065.6A priority Critical patent/CN114842445A/en
Publication of CN114842445A publication Critical patent/CN114842445A/en
Pending legal-status Critical Current

Classifications

    • G01S13/865 Combination of radar systems with lidar systems
    • G01S13/867 Combination of radar systems with cameras
    • G01S13/931 Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G06F18/22 Pattern recognition; matching criteria, e.g. proximity measures
    • G06F18/25 Pattern recognition; fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The embodiment of the invention discloses a target detection method, device, equipment and medium based on multi-path fusion. The method is performed by target detection electronic equipment configured in a vehicle; the vehicle is also configured with at least two cameras, at least two millimeter wave radars, and at least one lidar. The method comprises the following steps: if perception information acquired for an obstacle by at least two of the camera, millimeter wave radar and lidar paths is detected, and the obstacle is in the camera perception overlap region or the millimeter wave radar perception overlap region, performing same-path matching on the overlapping perception data; taking the same-path matching result and performing different-path matching with the perception information of the other paths; and determining the attribute information of the obstacle in the different-path matching result according to the preset weight of each path. According to this technical scheme, multi-path information fusion can improve the accuracy and information richness of target detection and improve driving safety.

Description

Target detection method, device, equipment and medium based on multi-path fusion
Technical Field
The invention relates to the technical field of automatic driving, in particular to a target detection method, a device, equipment and a medium based on multi-path fusion.
Background
With the rapid development of automatic driving technology, people put higher demands on the safety of vehicle driving. In the automatic driving process, obstacles such as pedestrians, motor vehicles or non-motor vehicles and the like may appear in front of the vehicle, and the safe driving of the vehicle is seriously influenced. Therefore, how to accurately detect the obstacle so as to better avoid the driving risk is an urgent problem to be solved in the driving process of the vehicle.
In the related art, a convolutional neural network is used to detect and identify obstacle targets. However, this scheme depends heavily on the performance of the convolutional neural network, so the detection accuracy for obstacles easily deteriorates; moreover, it relies on a single detection path, which limits the richness of the detection information.
Disclosure of Invention
The invention provides a target detection method, device, equipment and medium based on multi-path fusion, which can improve the accuracy and information richness of target detection through multi-path information fusion, thereby contributing to the safe driving of vehicles.
According to an aspect of the present invention, there is provided a multi-path fusion-based target detection method, the method being performed by target detection electronic equipment configured in a vehicle; the vehicle is also provided with at least two cameras, at least two millimeter wave radars and at least one lidar; the method comprises the following steps:
if perception information acquired for an obstacle by at least two of the camera, millimeter wave radar and lidar paths is detected, and the obstacle is in the camera perception overlap region or the millimeter wave radar perception overlap region, performing same-path matching on the overlapping perception data;
taking the same-path matching result and performing different-path matching with the perception information of the other paths;
and determining the attribute information of the obstacle in the different-path matching result according to the preset weight of each path.
Optionally, if the at least two cameras acquire perception information, acquiring the camera perception overlap region;
correspondingly, performing same-path matching on the overlapping perception data comprises:
calculating the similarity between each pair of overlapping perception data from the at least two cameras;
and associating the two overlapping perception data with the highest similarity, and determining the matching result of the camera overlapping perception data according to the association result.
Optionally, if the at least two millimeter wave radars acquire perception information, acquiring the millimeter wave radar perception overlap region;
correspondingly, performing same-path matching on the overlapping perception data comprises:
performing cluster analysis on the overlapping perception data to obtain at least two cluster groups, wherein each cluster group contains at least one overlapping perception datum;
calculating the similarity between each pair of overlapping perception data across the at least two cluster groups;
and associating the cluster groups in which the two overlapping perception data with the highest similarity are located, and determining the matching result of the millimeter wave radar overlapping perception data according to the association result.
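As an editorial sketch of the cluster-analysis step (not part of the patent text: the single-link grouping rule and the 1.5 m gating threshold are invented assumptions), grouping one radar's detection points into per-obstacle clusters might look like:

```python
import numpy as np

def cluster_detections(points: np.ndarray, eps: float = 1.5) -> list:
    """Group radar detection points: a point joins an existing cluster when
    it lies within eps metres of any member; otherwise it starts a new
    cluster. The single-link rule and eps value are assumptions."""
    clusters = []                                      # lists of point indices
    for i, p in enumerate(points):
        for cluster in clusters:
            if any(np.linalg.norm(p - points[j]) <= eps for j in cluster):
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Five detection points from one radar, belonging to two physical obstacles
pts = np.array([[0.0, 0.0], [0.4, 0.2], [0.1, 0.5],   # obstacle A
                [8.0, 1.0], [8.3, 1.2]])              # obstacle B
groups = cluster_detections(pts)
```

Each resulting cluster group then stands in for one obstacle when computing pairwise similarities across groups.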
Optionally, the similarity is calculated using the Mahalanobis distance, computed as:

D_ij = sqrt( (M_j − X_i)^T · W^(−1) · (M_j − X_i) )

where D_ij is the covariance distance between the two perception data, M_j and X_i are the two measurement matrices corresponding to the perception information, M_j − X_i is the dimension-difference matrix corresponding to the two sets of perception information, and W is the sum of the covariance matrices corresponding to the two sets of perception information and the system covariance matrix.
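As an editorial illustration (not part of the patent), the Mahalanobis distance between two perception measurements can be computed with NumPy; the measurement vectors and the combined covariance W below are invented example values:

```python
import numpy as np

def mahalanobis_distance(m_j: np.ndarray, x_i: np.ndarray, w: np.ndarray) -> float:
    """Covariance-weighted distance D_ij between two perception measurements.
    m_j, x_i: measurement vectors of the two perception sources;
    w: sum of the two measurement covariance matrices and the system
    covariance matrix."""
    diff = m_j - x_i                                   # dimension-difference vector
    return float(np.sqrt(diff @ np.linalg.inv(w) @ diff))

# Two 2-D position measurements of the same obstacle (invented values)
m = np.array([10.2, 3.1])
x = np.array([10.0, 3.0])
w = np.eye(2) * 0.25                                   # assumed combined covariance W
d = mahalanobis_distance(m, x, w)
```

Unlike a plain Euclidean distance, this weights each dimension by the combined measurement uncertainty, so noisier dimensions contribute less to the similarity score.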
Optionally, taking the same-path matching result and performing different-path matching with the perception information of the other paths comprises:
if perception information acquired by two paths is detected, calculating the similarity between each pair of perception data from the same-path matching result and the perception information of the other path, and matching the two perception data with the highest similarity;
and if perception information acquired by all three paths is detected, fusing the camera matching result with the perception information acquired by the lidar, and fusing again according to that fusion result and the millimeter wave radar matching result.
Optionally, the attribute information includes position information and speed information;
correspondingly, determining the attribute information of the obstacle in the different-path matching result according to the preset weight of each path comprises:
determining the position information of the obstacle in the different-path matching result according to the first preset weight of each path;
and determining the speed information of the obstacle in the different-path matching result according to the second preset weight of each path.
Optionally, the boundary of the perception overlap region is expanded outwards based on a preset expansion parameter;
accordingly, acquiring the perception overlap region comprises:
acquiring the region obtained after the perception overlap region is expanded outwards based on the preset expansion parameter, and taking that region as the perception overlap region.
According to another aspect of the present invention, there is provided a multi-path fusion-based target detection apparatus, the apparatus being configured in target detection electronic equipment configured in a vehicle; the vehicle is also provided with at least two cameras, at least two millimeter wave radars and at least one lidar; the apparatus comprises:
a same-path matching module, configured to perform same-path matching on the overlapping perception data if perception information acquired for an obstacle by at least two of the camera, millimeter wave radar and lidar paths is detected, and the obstacle is in the camera perception overlap region or the millimeter wave radar perception overlap region;
a different-path matching module, configured to take the same-path matching result and perform different-path matching with the perception information of the other paths;
and an attribute information determining module, configured to determine the attribute information of the obstacle in the different-path matching result according to the preset weight of each path.
According to another aspect of the present invention, there is provided a multi-pathway fusion-based object detection electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform a multi-pass fusion based object detection method according to any of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement a multi-pass fusion based object detection method according to any one of the embodiments of the present invention when executed.
According to the technical scheme of the embodiment of the invention, if perception information acquired for an obstacle by at least two of the camera, millimeter wave radar and lidar paths configured on the vehicle is detected, and the obstacle is in the camera perception overlap region or the millimeter wave radar perception overlap region, same-path matching is performed on the overlapping perception data; the same-path matching result is then matched against the perception information of the other paths; and the attribute information of the obstacle in the different-path matching result is determined according to the preset weight of each path. According to this technical scheme, multi-path information fusion can improve the accuracy and information richness of target detection, thereby contributing to the safe driving of vehicles.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart of a method for detecting a target based on multi-way fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a perceptual overlap region according to an embodiment of the present invention;
FIG. 3 is a flowchart of a multi-path fusion-based target detection method according to a second embodiment of the present invention;
FIG. 4 is a flowchart of a preferred multi-pathway fusion-based target detection method according to the second embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a target detection apparatus based on multi-pass fusion according to a third embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device for target detection based on multi-path fusion, which implements the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," "target," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a method for detecting a target based on multi-path fusion according to an embodiment of the present invention, where the method is applicable to a case where obstacle information detected by multiple paths is fused, and the method may be executed by a target detection apparatus based on multi-path fusion, where the target detection apparatus based on multi-path fusion may be implemented in a form of hardware and/or software, and the target detection apparatus based on multi-path fusion may be configured in an electronic device with data processing capability. As shown in fig. 1, the method includes:
and S110, if the sensing information acquired by at least two ways of the camera, the millimeter wave radar and the laser radar aiming at the obstacle is detected, and the obstacle is in a camera sensing overlapping area or a millimeter wave radar sensing overlapping area, carrying out same way matching on overlapping sensing data.
The technical scheme of this embodiment can be executed by target detection electronic equipment configured on a vehicle; the vehicle is also configured with at least two cameras, at least two millimeter wave radars, and at least one lidar. The scheme matches and fuses the obstacle information of the multiple paths detected by the camera, millimeter wave radar and lidar; by combining the detection advantages of multiple sensors, it improves the detection accuracy of obstacle information and enriches the detected obstacle information, so that obstacles can be better identified, helping the vehicle drive safely.
The obstacle may refer to an obstacle that affects normal driving of the vehicle, and may be an animal, a pedestrian, a vehicle, a road block, or the like. Millimeter-wave radar may refer to a target detection radar system operating in the millimeter-wave band (at frequencies of 30-300 GHz). Lidar may refer to a radar system that detects target information by emitting a laser beam. The sensing information may refer to sensing identification information of a vehicle sensor (such as a camera, a millimeter wave radar, or a laser radar) on a detection target. Illustratively, the target perception information acquired by the camera is image information, and the target perception information acquired by the millimeter wave radar or the laser radar is point cloud information.
It should be noted that the vehicle is equipped with at least two cameras, at least two millimeter wave radars, and at least one lidar. The cameras and the millimeter wave radar may provide target perception information of a single sensor, for example, each camera may obtain image information separately, or each millimeter wave radar may obtain point cloud information separately. If a plurality of laser radars exist, after each laser radar acquires target perception information, multi-path original point cloud splicing processing is completed at first, and therefore the target perception information of the plurality of laser radars is comprehensively provided. For each camera or each millimeter wave radar, due to the difference in the installation angle or detection angle thereof, the detection area for the same obstacle may be different, but there may be an overlapping area. The sensing overlapping area can be understood as an overlapping detection area in which different cameras or different millimeter wave radars can detect obstacles.
Fig. 2 is a schematic diagram of a sensing overlap region according to an embodiment of the present invention. Where A and B are two sensors on the vehicle, a1, a2 are the detection zone boundaries of sensor A (solid line), and B1, B2 are the detection zone boundaries of sensor B (dashed line). At this time, it can be determined that b1 and a2 are two boundaries of the sensing overlap region, and the included angle range formed by b1 and a2 is the sensing overlap region. It should be noted that, since the camera and the millimeter wave radar provide target sensing information of a single sensor, and the lidar provides overall target sensing information, the sensing overlap region is involved in the detection by the camera and the millimeter wave radar, and the sensing overlap region is not involved in the detection by the lidar.
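As a rough editorial illustration of the overlap test in Fig. 2 (not from the patent: the field-of-view limits below are invented example values), checking whether an obstacle lies in the wedge bounded by b1 and a2 might look like:

```python
import math

def in_overlap_region(obstacle_xy, a_fov=(-60.0, 20.0), b_fov=(-20.0, 60.0)):
    """Return True when the obstacle's bearing (degrees, vehicle frame) lies
    inside both sensors' angular fields of view, i.e. inside the wedge
    bounded by b1 and a2 in Fig. 2. The FoV limits here are invented
    example values, not taken from the patent."""
    bearing = math.degrees(math.atan2(obstacle_xy[1], obstacle_xy[0]))
    lo = max(a_fov[0], b_fov[0])   # boundary b1 of the overlap wedge
    hi = min(a_fov[1], b_fov[1])   # boundary a2 of the overlap wedge
    return lo <= bearing <= hi

ahead = in_overlap_region((10.0, 0.0))   # bearing 0 deg, inside both FoVs
side = in_overlap_region((5.0, 8.0))     # bearing about 58 deg, only inside B's FoV
```

An obstacle that passes this test has been seen by both same-type sensors, which is the precondition for same-path matching.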
In this embodiment, the overlapped sensing data may be sensing data obtained by performing data processing on sensing information acquired by a camera or a millimeter wave radar in a sensing overlapped region. For example, the camera overlap sensing data is obtained by processing image information acquired in the camera sensing overlap region, or the millimeter wave radar overlap sensing data is obtained by processing point cloud information acquired in the millimeter wave radar sensing overlap region. The same approach may be understood as the way of obstacle detection by the same type of sensor, for example by multiple cameras, or by multiple millimeter wave radars. It should be noted that, since the sensing overlap region is involved in the obstacle detection by the camera and the millimeter wave radar, only the camera and the millimeter wave radar need to perform the same path matching on the overlapped sensing data.
In this embodiment, when sensing information collected by at least two approaches of a camera, a millimeter wave radar and a laser radar for an obstacle is detected, and the obstacle is in a camera sensing overlapping area or a millimeter wave radar sensing overlapping area, the same approach matching is performed on overlapping sensing data. It is understood that when the perception information collected by at least two approaches of the camera, the millimeter wave radar and the laser radar for the obstacle is detected, the perception information of at least one approach of the camera and the millimeter wave radar is necessarily included. Meanwhile, if the obstacle appears in the camera sensing overlapping area or the millimeter wave radar sensing overlapping area, that is, the obstacle is detected by at least two cameras or at least two millimeter wave radars, the overlapping sensing data of the cameras or the millimeter wave radars need to be matched in the same way.
In this embodiment, before performing same-path matching on the overlapping perception data, it is first necessary to time-synchronize the sensors of the same type and to calibrate the overlapping perception data to a unified reference. It will be appreciated that the time reference must be aligned because differences in the pulse sampling rate and the start-up time of the sensors cause timing errors. Specifically, the sensors can be given a unified time reference through GPS, ensuring that the update frequency of each sensor is consistent; the timestamp t_i of the data group with the latest time is adopted as the common processing time, and the observation of the k-th sensor at its own time t_j is synchronized to the common processing time. For example, time synchronization of the sensors can be achieved by the following formula:

Z_k(t_i) = Z_k(t_j) + V × (t_i − t_j)

where Z_k(t_i) is the observed state data of sensor k after time synchronization, V is the speed of the obstacle, and Z_k(t_j) is the observed state data of sensor k at time t_j. In addition, the overlapping perception data can be calibrated in a unified coordinate system. For example, a vehicle coordinate system may be established with the center of the rear-wheel axle as the origin and with the directly-forward and directly-left directions of the vehicle as the x axis and y axis respectively, so as to achieve unified calibration of the overlapping perception data.
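As an editorial sketch (not part of the patent text), the constant-velocity time-synchronization formula above is straightforward to express in code; the numeric values are invented:

```python
def synchronize(z_kj: float, v: float, t_j: float, t_i: float) -> float:
    """Propagate sensor k's observation from its own timestamp t_j to the
    common processing time t_i using the constant-velocity model:
    Z_k(t_i) = Z_k(t_j) + V * (t_i - t_j)."""
    return z_kj + v * (t_i - t_j)

# A radar saw the obstacle at x = 20.0 m at t_j = 0.95 s; the newest
# timestamp across the sensors is t_i = 1.00 s; obstacle speed 10 m/s.
x_sync = synchronize(20.0, 10.0, 0.95, 1.00)   # position at the common time
```

After this step, every same-type sensor's observations refer to one common timestamp and can be compared directly.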
It should be noted that when cameras are used for detection, each camera identifies one detection point for one obstacle, whereas each millimeter wave radar can identify a plurality of detection points for one obstacle. In this embodiment, performing same-path matching on the overlapping perception data may be understood as matching the overlapping perception data of at least two cameras or of at least two millimeter wave radars, and the form of the matching result differs between the camera and the millimeter wave radar. Specifically, when matching the camera overlapping perception data, the single detection point detected by each camera in the camera perception overlap region can be screened, and the perception data of one detection point is selected as the camera matching result. When matching the millimeter wave radar overlapping perception data, the group of detection points detected by each millimeter wave radar in the millimeter wave radar perception overlap region can be screened, and the perception data of one group of detection points is selected as the millimeter wave radar matching result. For example, the screening condition may be set according to the distance between the detection points and the host vehicle, or the similarity between the detection points, which is not limited here.
And S120, obtaining the same path matching result, and performing different path matching with the perception information of different paths.
Different paths may be understood as obstacle detection by different types of sensors, for example the three paths of camera, millimeter wave radar and lidar. It should be noted that several combinations of different paths are possible: the different paths may comprise two paths or three paths. For example, if detection is performed through the two paths of camera and lidar, then the camera and the lidar are the different paths; if detection is performed through the three paths of camera, millimeter wave radar and lidar, then the camera, the millimeter wave radar and the lidar are the different paths.
In this embodiment, before performing different-path matching with the perception information of the other paths, the heterogeneous sensors need to be time-synchronized. Specifically, the data group with the latest timestamp among the heterogeneous sensors can be selected as a reference, the other sensors are synchronized to that reference time, and the reference time is used as the system time during different-path matching. After time synchronization of the sensors is completed, the same-path matching result is matched against the perception information of the other paths. Specifically, if detection is performed through two paths, the same-path matching result is directly matched with the perception information of the other path. For example, when detection is performed through the two paths of camera and lidar, the camera matching result, i.e. the perception data of a certain camera, is obtained first; the point cloud perception data of the lidar is then screened according to the camera matching result, one point cloud perception datum is selected, and it is associated with the camera matching result, thereby matching the camera and the lidar. If detection is performed through three paths, the camera matching result is first matched with the perception information of the lidar, and matching is then performed between that result and the millimeter wave radar matching result.
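The camera-to-lidar association step can be sketched as a nearest-neighbour search with a gating distance (an editorial sketch, not the patent's specified method: the Euclidean metric and the 2 m gate are assumptions; the patent only describes screening and selecting one point cloud perception datum):

```python
from typing import Optional

import numpy as np

def match_across_paths(cam_obj: np.ndarray, lidar_objs: np.ndarray,
                       gate: float = 2.0) -> Optional[int]:
    """Associate a camera-path matching result with the closest lidar
    detection. Returns the lidar index, or None when nothing lies inside
    the gating distance."""
    d = np.linalg.norm(lidar_objs - cam_obj, axis=1)   # distance to each lidar object
    best = int(np.argmin(d))
    return best if d[best] <= gate else None

cam = np.array([12.0, -1.0])                           # camera matching result (x, y)
lidar = np.array([[3.0, 4.0], [12.4, -0.8], [30.0, 2.0]])
idx = match_across_paths(cam, lidar)                   # index of the associated detection
```

In the three-path case, the same association would be applied once more between this camera-lidar fusion result and the millimeter wave radar matching result.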
S130, determine the attribute information of the obstacle in the different-path matching result according to the preset weight of each path.
The preset specific gravity may be understood as a ratio of each path set in advance when different paths are matched. The attribute information may refer to information for characterizing an attribute of the obstacle. For example, the attribute information may be size information, category information, position information, speed information, and the like. Specifically, the size information may include a size and a shape; the category information may be used to identify the kind of the obstacle, for example, when the obstacle is a vehicle, it may be classified into a passenger car, a truck, a car, a tricycle, an electric vehicle, etc.; the location information may include a location and a heading angle; the velocity information may include velocity and acceleration.
In this embodiment, the attribute information of the obstacle in the different-path matching result may be determined according to the preset weight of each path. It should be noted that, because an obstacle has many attributes, the preset weights need to be set flexibly for different attributes according to actual application requirements. For example, if the size information of the obstacle is of interest, the laser radar may be given a higher weight; if the speed information of the obstacle is of interest, the millimeter wave radar may be given a higher weight.
Optionally, the attribute information includes position information and speed information; correspondingly, determining the attribute information of the obstacle in the different-path matching result according to the preset weight of each path includes: determining the position information of the obstacle in the different-path matching result according to a first preset weight of each path; and determining the speed information of the obstacle in the different-path matching result according to a second preset weight of each path.
Wherein the first preset weight may refer to a preset weight associated with the obstacle position information, and the second preset weight may refer to a preset weight associated with the obstacle speed information. In this embodiment, different preset weights may be set according to the detection advantages of the different paths for different attributes of the obstacle, so as to determine the position information and the speed information of the obstacle in the different-path matching result. For example, assuming detection is performed through three paths, namely a camera, a millimeter wave radar and a laser radar: when determining the position information of the obstacle, the weight of the laser radar may be appropriately increased because its detection accuracy for position information is higher, e.g., the preset weights of the three paths are set to 20%, 10% and 70%, respectively, so that the laser radar carries the highest weight; when determining the speed information of the obstacle, the weight of the millimeter wave radar may be appropriately increased because its detection accuracy for speed information is higher, e.g., the preset weights of the three paths are set to 5%, 75% and 20%, respectively, so that the millimeter wave radar carries the highest weight.
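Using the example weights above (20%/10%/70% for position, 5%/75%/20% for speed, ordered camera / millimeter wave radar / laser radar), the per-attribute weighted determination might be sketched as follows; the scalar attribute model and the function name are illustrative assumptions:

```python
def fuse_attribute(values, weights):
    """Weighted fusion of one obstacle attribute across the paths.
    Both sequences are ordered (camera, millimeter-wave radar, laser radar)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(v * w for v, w in zip(values, weights))

# Position: the laser radar path dominates (first preset weights).
position = fuse_attribute([10.2, 10.8, 10.0], [0.20, 0.10, 0.70])
# Speed: the millimeter-wave radar path dominates (second preset weights).
speed = fuse_attribute([4.8, 5.0, 5.4], [0.05, 0.75, 0.20])
```

Each attribute is thus fused with its own weight vector, so each sensor's strength dominates exactly the attribute it measures best.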
With this scheme, the attribute information of the obstacle can be determined by fully exploiting the detection advantages of the different paths for different attributes of the obstacle, which improves the detection accuracy of the obstacle while ensuring the richness of the obstacle information.
According to the technical scheme of this embodiment of the invention, if perception information collected for an obstacle through at least two of the camera, millimeter wave radar and laser radar paths configured on the vehicle is detected, and the obstacle is in the camera perception overlap region or the millimeter wave radar perception overlap region, same-path matching is performed on the overlapping perception data; the same-path matching result is obtained and different-path matching is performed with the perception information of the different paths; and the attribute information of the obstacle in the different-path matching result is determined according to the preset weight of each path. This technical scheme improves the accuracy and information richness of target detection through multi-path information fusion, thereby facilitating safe driving of the vehicle.
In this embodiment, optionally, if the at least two cameras collect perception information, the camera perception overlap region is obtained; correspondingly, performing same-path matching on the overlapping perception data includes: calculating, for the at least two cameras, the similarity between each pair of overlapping perception data; and associating the two overlapping perception data with the maximum similarity, and determining the matching result of the camera overlapping perception data according to the association result.
Wherein the similarity may be used to characterize the degree of similarity between two objects. It can be understood that, when cameras are used to detect an obstacle, the higher the similarity between the perception information collected by two cameras, the higher the probability that the two cameras have detected the same obstacle.
In this embodiment, when the at least two cameras collect perception information, the camera perception overlap region is obtained, the similarity between each pair of overlapping perception data in the perception overlap region is calculated, and association matching is then performed according to the similarity. It should be noted that, in order to avoid duplicate reports of the same obstacle detected by the individual cameras, an association threshold may be preset, and association matching of the camera overlapping perception data is achieved by calculating the similarity within the association threshold. The association threshold may refer to the range of a preset matching area; for example, it may be a circular region within 50 cm of a certain detection point, or a rectangular region centered on a certain detection point. Specifically, the similarity between the perception data corresponding to every two cameras may be calculated within the association threshold, and the two perception data with the maximum similarity are selected for association. Further, according to the distance between the detection points corresponding to the two perception data and the vehicle, the perception data of the detection point closest to the vehicle may be taken as the matching result of the camera overlapping perception data.
Through this arrangement, the scheme can screen and match the perception information of the same obstacle detected by multiple cameras, thereby achieving fast matching within the camera path.
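A sketch of this gated camera-path association, assuming 2-D detection points, the ego vehicle at the origin, Euclidean distance as the inverse of similarity, and the 50 cm circular association threshold mentioned above (all of these are illustrative assumptions):

```python
import math

def match_camera_detections(dets_a, dets_b, gate_radius=0.5):
    """Within the circular association gate, pair the two detections whose
    distance is smallest (i.e. similarity is largest), then keep the detection
    closer to the ego vehicle (assumed at the origin) as the path result."""
    best = None
    for a in dets_a:
        for b in dets_b:
            d = math.dist(a, b)
            if d <= gate_radius and (best is None or d < best[0]):
                best = (d, a, b)
    if best is None:
        return None          # no pair fell inside the association threshold
    _, a, b = best
    return a if math.hypot(*a) <= math.hypot(*b) else b

result = match_camera_detections([(5.0, 0.0)], [(5.2, 0.1), (9.0, 9.0)])
```

The gate both limits the search area and prevents two detections of different obstacles from being associated merely because they are each other's nearest neighbours.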
In this embodiment, optionally, if the at least two millimeter wave radars collect perception information, the millimeter wave radar perception overlap region is obtained; correspondingly, performing same-path matching on the overlapping perception data includes: performing cluster analysis on the overlapping perception data to obtain at least two cluster groups, wherein each cluster group contains at least one piece of overlapping perception data; calculating, for the at least two cluster groups, the similarity between each pair of overlapping perception data; and associating the cluster groups in which the two overlapping perception data with the maximum similarity are located, and determining the matching result of the millimeter wave radar overlapping perception data according to the association result.
In this embodiment, after the millimeter wave radar perception overlap region is obtained, the DBSCAN algorithm may be used to perform cluster analysis on the perception data in the overlap region and remove isolated noise points, achieving data denoising, so that at least two cluster groups are finally obtained, each containing at least one piece of overlapping perception data. Similarly, in order to avoid duplicate reports of the same obstacle detected by the individual millimeter wave radars, an association threshold may be preset, and the similarity may be calculated within the association threshold to achieve association matching of the millimeter wave radar overlapping perception data. Specifically, the similarity between every two pieces of perception data in the cluster groups may be calculated within the association threshold, and the cluster groups in which the two perception data with the maximum similarity are located are associated. Further, according to the distance between the detection points corresponding to the two perception data and the vehicle, the cluster group containing the perception data of the detection point closest to the vehicle may be taken as the matching result of the millimeter wave radar overlapping perception data.
Through this arrangement, the scheme removes noise via cluster analysis, avoiding noise interference, and can screen and match the perception information of the same obstacle detected by multiple millimeter wave radars, thereby achieving fast matching within the millimeter wave radar path.
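The denoising-by-clustering step can be sketched with a minimal, dependency-free DBSCAN (a production system would more likely use a library implementation such as scikit-learn's `DBSCAN`); the point coordinates and the `eps`/`min_pts` values below are illustrative assumptions:

```python
import math

def _region(points, i, eps):
    return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

def dbscan(points, eps=1.0, min_pts=2):
    """Minimal DBSCAN: returns (clusters, noise). Isolated detections with
    fewer than min_pts neighbours within eps are removed as noise points."""
    labels = [None] * len(points)            # None = unvisited, -1 = noise
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = _region(points, i, eps)
        if len(seeds) < min_pts:
            labels[i] = -1                   # provisional noise
            continue
        labels[i] = cid
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] is None:            # unvisited: may be a core point
                if len(_region(points, j, eps)) >= min_pts:
                    queue.extend(_region(points, j, eps))
                labels[j] = cid
            elif labels[j] == -1:            # rescue a border point
                labels[j] = cid
        cid += 1
    clusters = [[p for p, l in zip(points, labels) if l == c] for c in range(cid)]
    noise = [p for p, l in zip(points, labels) if l == -1]
    return clusters, noise

clusters, noise = dbscan(
    [(0.0, 0.0), (0.5, 0.0), (0.2, 0.3),     # cluster group 1
     (5.0, 5.0), (5.4, 5.2),                 # cluster group 2
     (20.0, 20.0)])                          # isolated noise point
```

Each resulting cluster group then stands in for one candidate obstacle during the subsequent similarity-based association.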
Optionally, the similarity is calculated using the Mahalanobis distance; the formula for calculating the Mahalanobis distance is as follows:

D_ij = sqrt((M_j - X_i)^T W^(-1) (M_j - X_i))

wherein D_ij is the covariance distance between the two perception data; M_j and X_i are the measurement matrices corresponding to the two sets of perception information; M_j - X_i is the dimension difference matrix corresponding to the two sets of perception information; and W is the sum of the covariance matrices corresponding to the two sets of perception information and the system covariance matrix.
The Mahalanobis distance is an effective method for calculating the similarity of two unknown sample sets and can be used to characterize the covariance distance between data. In this embodiment, the similarity may be determined by calculating the Mahalanobis distance between two perception data. It should be noted that the Mahalanobis distance is inversely related to the similarity: the smaller the Mahalanobis distance, the greater the similarity between the two data, i.e., the greater the probability that the perception information collected through the two paths belongs to the same obstacle.
By adopting the Mahalanobis distance, this scheme can characterize the similarity while taking position information and speed information into account simultaneously, intuitively reflecting the similarity between perception information by combining different attribute information of the obstacle, thereby further improving the accuracy and information richness of target detection.
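Under the definitions above, the distance computation can be sketched as follows. To keep the sketch dependency-free, W is simplified to a diagonal matrix (the general formula uses a full matrix inverse), and the four-dimensional state [x, y, vx, vy] shows position and speed entering the similarity jointly; both simplifications are illustrative assumptions:

```python
import math

def mahalanobis(m_j, x_i, w_diag):
    """D_ij = sqrt((M_j - X_i)^T W^(-1) (M_j - X_i)) for a diagonal W,
    where W is the sum of the two paths' measurement covariances and the
    system covariance (here reduced to its diagonal)."""
    return math.sqrt(sum((m - x) ** 2 / w
                         for m, x, w in zip(m_j, x_i, w_diag)))

W_DIAG = [0.01, 0.01, 0.04, 0.04]            # [x, y, vx, vy] variances
d_near = mahalanobis([10.1, 2.1, 5.1, 0.0], [10.0, 2.0, 5.0, 0.0], W_DIAG)
d_far  = mahalanobis([12.0, 3.0, 7.0, 1.0], [10.0, 2.0, 5.0, 0.0], W_DIAG)
# the smaller distance indicates the higher similarity (same obstacle)
```

Because each squared difference is divided by its variance, a noisy dimension (here the velocities) contributes less to the distance than a precisely measured one.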
Optionally, the boundary of the perception overlap region is expanded outwards based on a preset expansion parameter; correspondingly, obtaining the perception overlap region includes: obtaining the region formed after the perception overlap region is expanded outwards based on the preset expansion parameter, and taking that region as the perception overlap region.
The preset expansion parameter may refer to a preset boundary expansion parameter; for example, it may be an expansion distance or an expansion angle, which is not specifically limited herein. In this embodiment, on the basis of the originally obtained perception overlap region, the region boundary may be expanded outwards based on the preset expansion parameter, and the expanded region is determined as the perception overlap region. Illustratively, the two boundaries of the obtained perception overlap region are each shifted outwards by half the ego-vehicle length, i.e., the region is widened by one full vehicle length, to determine the final perception overlap region.
With this arrangement, the range of the perception overlap region is enlarged, which to a certain extent guarantees the completeness of the data in the overlap region and prevents important detection data from being missed.
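A sketch of the boundary expansion, assuming a rectangular axis-aligned overlap region and an illustrative vehicle length; each boundary is shifted outwards by half the ego-vehicle length, so each side of the region grows by one full vehicle length:

```python
def dilate_overlap(x_min, x_max, y_min, y_max, vehicle_length=4.8):
    """Expand every boundary of a rectangular perception overlap region
    outwards by half the ego-vehicle length."""
    half = vehicle_length / 2.0
    return x_min - half, x_max + half, y_min - half, y_max + half

expanded = dilate_overlap(0.0, 10.0, 0.0, 4.0, vehicle_length=4.0)
```

An angular expansion parameter would work analogously on the bearing limits of a sector-shaped field of view.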
Example two
Fig. 3 is a flowchart of a target detection method based on multi-path fusion according to a second embodiment of the present invention, which is optimized on the basis of the first embodiment. The specific optimization is as follows: obtaining the same-path matching result and performing different-path matching with the perception information of the different paths includes: if perception information collected through two paths is detected, calculating the similarity between every two perception data according to the same-path matching result and the perception information of the different path, and matching the two perception data with the maximum similarity; and if perception information collected through three paths is detected, fusing the camera matching result with the perception information collected by the laser radar, and fusing again according to the fusion result and the millimeter wave radar matching result.
As shown in fig. 3, the method of this embodiment specifically includes the following steps:
and S310, if the sensing information acquired by at least two ways of the camera, the millimeter wave radar and the laser radar aiming at the obstacle is detected, and the obstacle is in a camera sensing overlapping area or a millimeter wave radar sensing overlapping area, carrying out same way matching on overlapping sensing data.
And S320, judging whether the perception information acquired by the two ways is detected, if so, executing S330, and otherwise, executing S340.
S330, according to the same path matching result and the perception information of different paths, the similarity between every two pieces of perception data is calculated, and the two pieces of perception data with the maximum similarity are matched.
In this embodiment, if perception information collected through two paths is detected, different-path matching may be performed according to the similarity between the perception data. For example, assuming that perception information collected by the camera and the laser radar is detected, after the camera matching result is obtained, the similarity between the camera matching data and each laser radar point-cloud datum may be calculated, and the two data with the maximum similarity are matched, one being the camera matching data and the other being the matched laser radar point-cloud data.
And S340, fusing the camera matching result with the perception information acquired by the laser radar, and fusing again according to the fusion result and the millimeter wave radar matching result.
In this embodiment, if perception information collected through three paths is detected, the camera matching result may first be fused with the perception information collected by the laser radar, and fusion is then performed again according to the fusion result and the millimeter wave radar matching result. Specifically, the similarity between the camera matching data and each laser radar point-cloud datum may be calculated, and the two data with the maximum similarity are matched to achieve information fusion of the camera and the laser radar. Then, the similarity between the camera matching data and each datum in the millimeter wave radar matching cluster group is calculated, the two data with the maximum similarity are matched, and the unmatched perception data in the cluster group are discarded, thereby achieving information fusion of the camera, millimeter wave radar and laser radar.
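The three-path cascade described above (camera result matched against the laser radar points first, then against the millimeter wave radar cluster group) might be sketched as follows; the tuple-based detections and the similarity callback are illustrative assumptions:

```python
import math

def best_match(anchor, candidates, sim):
    """Return the candidate most similar to the anchor detection."""
    return max(candidates, key=lambda c: sim(anchor, c))

def cascade_fusion(camera_match, lidar_points, radar_cluster, sim):
    """Camera -> laser radar first, then camera matching data -> millimeter
    wave radar cluster group; unmatched radar data are simply discarded."""
    lidar_pick = best_match(camera_match, lidar_points, sim)
    radar_pick = best_match(camera_match, radar_cluster, sim)
    return {"camera": camera_match, "lidar": lidar_pick, "radar": radar_pick}

sim = lambda p, q: -math.dist(p, q)          # closer = more similar
fused = cascade_fusion((5.0, 0.0),
                       [(5.1, 0.05), (8.0, 3.0)],    # laser radar point cloud
                       [(4.9, -0.1), (7.0, 7.0)],    # radar cluster group
                       sim)
```

In a full implementation the `sim` callback would be the Mahalanobis-based similarity from the first embodiment rather than a plain Euclidean distance.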
And S350, determining the attribute information of the obstacle in the different-path matching result according to the preset weight of each path.
According to the technical scheme of this embodiment of the invention, if perception information collected for an obstacle through at least two of the camera, millimeter wave radar and laser radar paths configured on the vehicle is detected, and the obstacle is in the camera perception overlap region or the millimeter wave radar perception overlap region, same-path matching is performed on the overlapping perception data; it is then judged whether the perception information was collected through two paths: if so, the similarity between every two perception data is calculated according to the same-path matching result and the perception information of the different path, and the two perception data with the maximum similarity are matched; otherwise, the camera matching result is fused with the perception information collected by the laser radar, and fusion is performed again according to the fusion result and the millimeter wave radar matching result. Finally, the attribute information of the obstacle in the different-path matching result is determined according to the preset weight of each path. This technical scheme improves the accuracy and information richness of target detection through multi-path information fusion, thereby facilitating safe driving of the vehicle.
Fig. 4 is a flowchart of a preferred target detection method based on multi-path fusion according to the second embodiment of the present invention. The method detects obstacle information through three paths, involving at least two cameras, at least two millimeter wave radars and a laser radar. This scheme gives full play to the detection advantages of the different types of sensors, and the accuracy and information richness of target detection can be further improved through the information fusion of the three paths.
EXAMPLE III
Fig. 5 is a schematic structural diagram of a target detection apparatus based on multi-path fusion according to a third embodiment of the present invention, where the apparatus is configured in a target detection electronic device, and the target detection electronic device is configured in a vehicle; the vehicle is also configured with at least two cameras, at least two millimeter wave radars, and at least one lidar. The device can execute the target detection method based on multi-path fusion provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. As shown in fig. 5, the apparatus includes:
the same-path matching module 510 is configured to perform the same-path matching on the overlapped sensing data if sensing information acquired by at least two paths of the camera, the millimeter wave radar and the laser radar for the obstacle is detected, and the obstacle is in a sensing overlapping area of the camera or a sensing overlapping area of the millimeter wave radar;
A different path matching module 520, configured to obtain a same path matching result, and perform different path matching with perception information of a different path;
an attribute information determining module 530, configured to determine the attribute information of the obstacle in the different-path matching result according to the preset weight of each path.
Optionally, if the at least two cameras acquire perception information, acquiring a perception overlapping area of the cameras;
accordingly, the same path matching module 510 includes:
calculating, for the at least two cameras, the similarity between each pair of overlapping perception data;
and correlating the two overlapped sensing data with the maximum similarity, and determining a matching result of the overlapped sensing data of the camera according to the correlation result.
Optionally, if the at least two millimeter wave radars acquire sensing information, acquiring a sensing overlapping region of the millimeter wave radars;
accordingly, the same path matching module 510 includes:
performing cluster analysis on each overlapped sensing data to obtain at least two cluster groups; wherein each cluster group contains at least one overlapping perceptual data;
calculating, for the at least two cluster groups, the similarity between each pair of overlapping perception data;
And associating the cluster groups where the two overlapping sensing data with the maximum similarity are located, and determining the matching result of the millimeter wave radar overlapping sensing data according to the association result.
Optionally, the similarity is calculated using the Mahalanobis distance;
the formula for calculating the Mahalanobis distance is as follows:

D_ij = sqrt((M_j - X_i)^T W^(-1) (M_j - X_i))

wherein D_ij is the covariance distance between the two perception data; M_j and X_i are the measurement matrices corresponding to the two sets of perception information; M_j - X_i is the dimension difference matrix corresponding to the two sets of perception information; and W is the sum of the covariance matrices corresponding to the two sets of perception information and the system covariance matrix.
Optionally, the distinct pathway matching module 520 includes:
if the perception information acquired by the two ways is detected, calculating the similarity between every two perception data according to the same way matching result and the perception information of the different ways, and matching the two perception data with the maximum similarity;
and if the sensing information acquired by the three ways is detected, fusing the camera matching result with the sensing information acquired by the laser radar, and fusing again according to the fusion result and the millimeter wave radar matching result.
Optionally, the attribute information includes position information and speed information;
Accordingly, the attribute information determining module 530 includes:
determining the position information of the obstacle in the different-path matching result according to the first preset weight of each path;
and determining the speed information of the obstacle in the different-path matching result according to the second preset weight of each path.
Optionally, the boundary of the perceptual overlap region is expanded outwards based on a preset expansion parameter;
accordingly, obtaining a perceptual overlap region comprises:
and acquiring a region of the perception overlapping region after the region is expanded outwards based on preset expansion parameters, and taking the region as the perception overlapping region.
The multi-path fusion-based target detection device provided by the embodiment of the invention can execute the multi-path fusion-based target detection method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
FIG. 6 illustrates a schematic structural diagram of an electronic device 10 that may be used to implement an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM)12, a Random Access Memory (RAM)13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM)12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as a multi-pass fusion based target detection method.
In some embodiments, the multi-pass fusion based object detection method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the multi-pass fusion based object detection method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the multi-pass fusion based object detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of difficult management and weak service expansibility found in traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A target detection method based on multi-path fusion, characterized in that the method is executed by a target detection electronic device, and the target detection electronic device is configured in a vehicle; the vehicle is further provided with at least two cameras, at least two millimeter wave radars and at least one laser radar; the method comprises the following steps:
if perception information acquired for an obstacle by at least two of the camera, millimeter wave radar and laser radar paths is detected, and the obstacle is in a camera perception overlap region or a millimeter wave radar perception overlap region, performing same-path matching on the overlapping perception data;
acquiring the same-path matching result, and performing different-path matching with the perception information of the other paths;
and determining attribute information of the obstacle in the different-path matching result according to a preset weight of each path.
2. The method according to claim 1, wherein if the perception information is acquired by the at least two cameras, the camera perception overlap region is obtained;
correspondingly, performing same-path matching on the overlapping perception data comprises:
calculating a similarity between every two overlapping perception data from the at least two cameras;
and associating the two overlapping perception data with the maximum similarity, and determining the matching result of the camera overlapping perception data according to the association result.
3. The method according to claim 1, wherein if the perception information is acquired by the at least two millimeter wave radars, the millimeter wave radar perception overlap region is obtained;
correspondingly, performing same-path matching on the overlapping perception data comprises:
performing cluster analysis on each overlapping perception data to obtain at least two cluster groups, wherein each cluster group contains at least one overlapping perception datum;
calculating a similarity between every two overlapping perception data for the at least two cluster groups;
and associating the cluster groups in which the two overlapping perception data with the maximum similarity are located, and determining the matching result of the millimeter wave radar overlapping perception data according to the association result.
4. The method according to claim 2 or 3, wherein the similarity is calculated using Mahalanobis distance;
the formula for calculating the Mahalanobis distance is as follows:
D_ij = sqrt( (M_j − X_i)^T · W^(−1) · (M_j − X_i) )
wherein D_ij is the covariance distance between the two perception data, M_j and X_i are respectively the two measurement matrices corresponding to the perception information, M_j − X_i is the dimension difference matrix corresponding to the two sets of perception information, and W is the sum of the covariance matrix corresponding to the two sets of perception information and the system covariance matrix.
5. The method of claim 1, wherein acquiring the same-path matching result and performing different-path matching with the perception information of the other paths comprises:
if perception information acquired by two paths is detected, calculating the similarity between every two perception data according to the same-path matching result and the perception information of the other path, and matching the two perception data with the maximum similarity;
and if perception information acquired by all three paths is detected, fusing the camera matching result with the perception information acquired by the laser radar, and fusing again according to the fusion result and the millimeter wave radar matching result.
6. The method of claim 1, wherein the attribute information comprises position information and speed information;
correspondingly, determining the attribute information of the obstacle in the different-path matching result according to the preset weight of each path comprises:
determining the position information of the obstacle in the different-path matching result according to a first preset weight of each path;
and determining the speed information of the obstacle in the different-path matching result according to a second preset weight of each path.
7. The method according to claim 2 or 3, wherein the boundary of the perception overlap region is expanded outwards based on a preset dilation parameter;
correspondingly, obtaining the perception overlap region comprises:
acquiring the region obtained by expanding the perception overlap region outwards based on the preset dilation parameter, and taking the expanded region as the perception overlap region.
8. A target detection device based on multi-path fusion, characterized in that the device is configured on a target detection electronic device, and the target detection electronic device is configured in a vehicle; the vehicle is further provided with at least two cameras, at least two millimeter wave radars and at least one laser radar; the device comprises:
a same-path matching module, configured to perform same-path matching on the overlapping perception data if perception information acquired for an obstacle by at least two of the camera, millimeter wave radar and laser radar paths is detected and the obstacle is in a camera perception overlap region or a millimeter wave radar perception overlap region;
a different-path matching module, configured to acquire the same-path matching result and perform different-path matching with the perception information of the other paths;
and an attribute information determining module, configured to determine attribute information of the obstacle in the different-path matching result according to a preset weight of each path.
9. A target detection electronic device based on multi-path fusion, characterized in that the target detection electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, so that the at least one processor can perform the multi-path fusion-based target detection method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to implement the multi-path fusion-based target detection method of any one of claims 1-7 when executed.
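The similarity measure of claim 4 and the weighted attribute fusion of claim 6 can be sketched in Python. This is a minimal illustration under stated assumptions, not the patent's implementation: the function names, the 2-D measurement vectors, and the normalization of the preset per-path weights are choices made for the sketch.

```python
import numpy as np

def mahalanobis_distance(m_j: np.ndarray, x_i: np.ndarray, w: np.ndarray) -> float:
    """Covariance distance D_ij between two perception measurements (claim 4).

    m_j, x_i -- measurement vectors from the two paths (e.g. [x, y] position);
    w        -- sum of the measurement covariance matrix and the system
                covariance matrix for the two sets of perception information.
    """
    d = m_j - x_i                      # dimension difference (M_j - X_i)
    return float(np.sqrt(d @ np.linalg.inv(w) @ d))

def fuse_attribute(per_path_values, preset_weights):
    """Weighted combination of a per-path attribute (claim 6).

    Position and speed would each use their own preset weights
    (the first and second preset weights of the claim).
    """
    v = np.asarray(per_path_values, dtype=float)
    w = np.asarray(preset_weights, dtype=float)
    w = w / w.sum()                    # normalize the preset per-path weights
    return w @ v

# With W set to the identity matrix, the covariance distance reduces to the
# ordinary Euclidean distance, which makes the behaviour easy to sanity-check.
print(mahalanobis_distance(np.array([3.0, 4.0]), np.array([0.0, 0.0]), np.eye(2)))  # 5.0
print(fuse_attribute([[0.0, 0.0], [4.0, 2.0]], [1.0, 3.0]))                         # [3.  1.5]
```

In the two-path case of claim 5, pairing the two perception data with the smallest such distance (i.e. the maximum similarity) is the standard nearest-neighbor association step.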
CN202210379065.6A 2022-04-12 2022-04-12 Target detection method, device, equipment and medium based on multi-path fusion Pending CN114842445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210379065.6A CN114842445A (en) 2022-04-12 2022-04-12 Target detection method, device, equipment and medium based on multi-path fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210379065.6A CN114842445A (en) 2022-04-12 2022-04-12 Target detection method, device, equipment and medium based on multi-path fusion

Publications (1)

Publication Number Publication Date
CN114842445A (en) 2022-08-02

Family

ID=82563868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210379065.6A Pending CN114842445A (en) 2022-04-12 2022-04-12 Target detection method, device, equipment and medium based on multi-path fusion

Country Status (1)

Country Link
CN (1) CN114842445A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115436899A (en) * 2022-08-31 2022-12-06 中国第一汽车股份有限公司 Method, device, equipment and storage medium for processing millimeter wave radar detection data
CN115183782A (en) * 2022-09-13 2022-10-14 毫末智行科技有限公司 Multi-modal sensor fusion method and device based on joint space loss
CN115183782B (en) * 2022-09-13 2022-12-09 毫末智行科技有限公司 Multi-modal sensor fusion method and device based on joint space loss
CN115900771A (en) * 2023-03-08 2023-04-04 小米汽车科技有限公司 Information determination method and device, vehicle and storage medium

Similar Documents

Publication Publication Date Title
US10035508B2 (en) Device for signalling objects to a navigation module of a vehicle equipped with this device
CN114842445A (en) Target detection method, device, equipment and medium based on multi-path fusion
CN113715814B (en) Collision detection method, device, electronic equipment, medium and automatic driving vehicle
KR20210151724A (en) Vehicle positioning method, apparatus, electronic device and storage medium and computer program
CN111783905B (en) Target fusion method and device, storage medium and electronic equipment
CN112580571A (en) Vehicle running control method and device and electronic equipment
US10353398B2 (en) Moving object detection device, program, and recording medium
CN114179832A (en) Lane changing method for autonomous vehicle
CN114677655A (en) Multi-sensor target detection method and device, electronic equipment and storage medium
US20230072632A1 (en) Obstacle detection method, electronic device and storage medium
CN114018269B (en) Positioning method, positioning device, electronic equipment, storage medium and automatic driving vehicle
CN113177980B (en) Target object speed determining method and device for automatic driving and electronic equipment
CN114528941A (en) Sensor data fusion method and device, electronic equipment and storage medium
CN115546597A (en) Sensor fusion method, device, equipment and storage medium
CN114394111B (en) Lane changing method for automatic driving vehicle
CN115861959A (en) Lane line identification method and device, electronic equipment and storage medium
CN115817466A (en) Collision risk assessment method and device
CN111198370B (en) Millimeter wave radar background detection method and device, electronic equipment and storage medium
CN110969058B (en) Fusion method and device for environment targets
CN115346374B (en) Intersection holographic perception method and device, edge computing equipment and storage medium
CN114584949B (en) Method and equipment for determining attribute value of obstacle through vehicle-road cooperation and automatic driving vehicle
CN113721235B (en) Object state determining method, device, electronic equipment and storage medium
CN118035788A (en) Target vehicle relative position classification method, device, equipment and storage medium
CN117934561A (en) Target tracking method, device, equipment and medium for vehicle
CN116563811A (en) Lane line identification method and device, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination