CN111783905B - Target fusion method and device, storage medium and electronic equipment


Info

Publication number
CN111783905B
Authority
CN
China
Prior art keywords
target
fusion
homologous
kalman filter
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010925757.7A
Other languages
Chinese (zh)
Other versions
CN111783905A (en)
Inventor
张勇
张益�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN ANNGIC TECHNOLOGY Co.,Ltd.
Original Assignee
Chengdu Anzhijie Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Anzhijie Technology Co ltd filed Critical Chengdu Anzhijie Technology Co ltd
Priority to CN202010925757.7A
Publication of CN111783905A
Application granted
Publication of CN111783905B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application relates to the technical field of target detection, and provides a target fusion method, a target fusion device, a storage medium and electronic equipment. The target fusion method comprises the following steps: acquiring a first target detected by a first sensor; matching the first target with the targets in a fusion target set, and if the first target is successfully matched with a first fusion target, updating the attributes of the first fusion target by using the first target; if the first target is not successfully matched with any target in the fusion target set, matching the first target with the targets in a heterogeneous target set, and if the first target is successfully matched with a first heterogeneous target, fusing the first target and the first heterogeneous target to generate a second fusion target and adding the second fusion target to the fusion target set. Because the generated fusion target combines multi-sensor data, the estimation of the target state is more accurate; and because target fusion makes the information of the sensors complementary, the credibility of the detected targets is also improved.

Description

Target fusion method and device, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of target detection, in particular to a target fusion method, a target fusion device, a storage medium and electronic equipment.
Background
An advanced driver assistance system (ADAS) uses various sensors mounted on an automobile (such as millimeter-wave/laser radar and monocular/binocular cameras) to perform operations such as target detection in real time while the automobile is driving, so as to improve driving safety.
In the prior art, the various sensors independently detect targets and output detection results, and although the detection technology is mature, each kind of sensor has its own advantages and disadvantages. For example:
Radar: a radar with a built-in target detection algorithm offers long detection range, accurate detection results, strong resistance to environmental interference, and the like. However, radar cannot effectively identify the target type, has difficulty judging the target size accurately, and performs poorly in special situations such as a small target reflection surface or strong reflection interference around the target.
Camera: a camera with a built-in target detection algorithm can classify targets, output a confidence for each target, estimate target size, and so on, which compensates well for the shortcomings of radar; moreover, the entry barrier for vision-based target detection is relatively low. However, a camera is susceptible to weather and environment, so its stability in use is not high.
It can be seen that the various sensors are complementary in target detection, but there is currently no mature solution for fusing multi-sensor data to obtain more reliable and valuable target detection results.
Disclosure of Invention
An object of the embodiments of the present application is to provide a target fusion method, an apparatus, a storage medium, and an electronic device, so as to solve the above technical problems.
In order to achieve the above purpose, the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides a target fusion method, including: acquiring a first target detected by a first sensor; matching the first target with targets in a fusion target set, and if the first target is successfully matched with a first fusion target in the fusion target set, updating the attribute of the first fusion target by using the first target; if the first target is not successfully matched with any target in the fusion target set, matching the first target with targets in a heterogeneous target set, and if the first target is successfully matched with a first heterogeneous target in the heterogeneous target set, fusing the first target and the first heterogeneous target to generate a second fusion target, and adding the second fusion target to the fusion target set; wherein the fusion target set is a set of fusion targets generated by fusing targets detected by different sensors, and the heterogeneous target set is a set of targets detected by sensors other than the first sensor.
In the method, a newly input first target is preferentially matched with the targets in the fusion target set: the targets in the fusion target set are the result of multi-sensor fusion and have effectively been confirmed by multiple sensors, so their reliability is high. If the matching succeeds, the first target and the first fusion target refer to the same object, and since the attributes of the first target represent the latest state of that object, the attributes of the first fusion target can be updated with the first target. If the matching fails, it is likely that the first target was previously detected only by a single sensor and has therefore never undergone target fusion, so an attempt is made to match it with the targets in the heterogeneous target set; by the above definition, these targets come from a different source than the first target, so their detection data are complementary to those of the first sensor. If this matching succeeds, the first target and the first heterogeneous target refer to the same object, and the attributes of the second fusion target generated by fusing the first target and the first heterogeneous target combine the data of multiple sensors, so the complementarity of the data is actually put to use. After the second fusion target is generated, it is added to the fusion target set, thereby updating the fusion target set.
The method provides a brand-new and effective target fusion scheme, the target generated after fusion is combined with multi-sensor data, so that the estimation of the target state (represented by the attribute of the target) is more accurate than that of a single sensor, and the target fusion realizes the information complementation of the sensors, reduces the uncertainty in the target detection process, and is favorable for improving the reliability and the utilization value of the detected target.
In one implementation of the first aspect, the attributes of each target include at least one of: an ID of the target, a lateral distance between the target and a sensor carrier, a longitudinal distance between the target and the sensor carrier, a direction angle between the target and the sensor carrier, a velocity of the target, an acceleration of the target, a category of the target, a confidence of the target, and a size of the target.
The detected targets may have different attributes depending on the characteristics of different kinds of sensors, for example, for image targets detected by a vision sensor, the attributes may include ID, lateral distance, longitudinal distance, direction angle, velocity, category, confidence, and size, and for radar targets detected by radar, the attributes may include ID, lateral distance, longitudinal distance, direction angle, velocity, and acceleration. Of course, in specific practice, a unified data model (e.g., a structure) may be used for all targets, where the data model includes all of the above attributes, and if a target detected by a certain type of sensor does not have some attributes, the values of the attributes in its corresponding data model may be null values or invalid values.
In an implementation manner of the first aspect, the matching the first target with the targets in the fusion target set includes: representing the first target and the targets in the fusion target set as coordinate points by their respective transverse and longitudinal distances; matching the coordinate point corresponding to the first target with the coordinate points corresponding to the targets in the fusion target set by using the Hungarian matching algorithm to obtain a matched point pair, wherein the Hungarian matching algorithm matches on the basis of the Euclidean distances between coordinate points, and the two coordinate points in the matched point pair correspond respectively to the first target and a first candidate fusion target in the fusion target set; and judging whether the abscissa difference of the two coordinate points in the matched point pair is smaller than a first threshold and whether the ordinate difference is smaller than a second threshold; if each difference is smaller than its respective threshold, determining the first candidate fusion target as the first fusion target, and otherwise determining that the first target matches no target in the fusion target set.
The Hungarian matching algorithm can obtain the optimal matching relation between two point sets, so to use it a target must first be converted into a two-dimensional point representation. By its nature, the algorithm always outputs a matched point pair in the mathematical sense, but whether the first target and the first candidate fusion target corresponding to that point pair should be considered a match in the engineering sense must be screened with the first threshold and the second threshold; otherwise the determined first fusion target may have no practical value (this may be called a mismatch).
In an implementation manner of the first aspect, values of the first threshold and the second threshold are both related to the abscissa difference and/or the ordinate difference.
In this implementation, the first threshold and the second threshold are both dynamic thresholds; the inventors found that dynamic thresholds screen out mismatched targets better than fixed thresholds.
In an implementation manner of the first aspect, the updating the attribute of the first fusion objective by using the first objective includes: updating an attribute of the first fusion target based on the first target and a Kalman filter.
Kalman filtering is an algorithm that uses a linear system state equation, together with the system's input and output observation data, to optimally estimate the system state.
In an implementation manner of the first aspect, the updating the attribute of the first fusion target based on the first target and the kalman filter includes: calculating an observed value of a transverse distance by using the direction angle of the first target and the longitudinal distance of the first fusion target, inputting the observed value of the transverse distance into the Kalman filter, and taking an estimated value of the transverse distance output by the Kalman filter as the updated transverse distance of the first fusion target; taking the predicted value of the longitudinal distance output by the Kalman filter as the updated longitudinal distance of the first fusion target; and directly taking the direction angle, the category, the confidence coefficient and the size of the first target as the updated direction angle, the category, the confidence coefficient and the size of the first fusion target.
In an implementation manner of the first aspect, the updating the attribute of the first fusion target based on the first target and the kalman filter includes: inputting the longitudinal distance and the speed of the first target into the Kalman filter as an observed value of the longitudinal distance and an observed value of the speed respectively, and taking an estimated value of the longitudinal distance and an estimated value of the speed output by the Kalman filter as the updated longitudinal distance and speed of the first fusion target; taking the predicted value of the transverse distance output by the Kalman filter as the transverse distance of the first fusion target after updating; and directly taking the acceleration of the first target as the updated acceleration of the first fusion target.
In one implementation manner of the first aspect, where one of the first target and the first heterogeneous target is an image target and the other is a radar target, the fusing the first target and the first heterogeneous target to generate a second fusion target includes: calculating the transverse distance of the second fusion target by using the direction angle of the image target and the longitudinal distance of the radar target; directly taking the direction angle, the category, the confidence coefficient and the size of the image target as the direction angle, the category, the confidence coefficient and the size of the second fusion target; and directly taking the longitudinal distance, the speed and the acceleration of the radar target as the longitudinal distance, the speed and the acceleration of the second fusion target.
In one implementation manner of the first aspect, the acquiring a first target detected by a first sensor includes: a first target detected by a first sensor is acquired in an asynchronous manner.
When target fusion is performed, the targets detected by the sensors can be input in an asynchronous manner, which has the following advantages: first, it avoids the complex problem of sensor time alignment (strict temporal alignment of multiple sensors cannot be achieved anyway); second, it makes the scheme compatible with any number and any kind of sensors; and third, even if some of the sensors fail, the remaining sensors can still work normally, unaffected.
In one implementation form of the first aspect, the method further comprises: if the first target is not successfully matched with any target in the heterogeneous target set, matching the first target with a target in a homologous target set, and if the first target is successfully matched with a first homologous target in the homologous target set, updating the attribute of the first homologous target by using the first target; wherein the set of homologous objects is a set of objects detected by the first sensor prior to detecting the first object; and if the first target is not successfully matched with any target in the homologous target set, adding the first target to the homologous target set.
In this implementation manner, if the first target has been matched in turn against the targets in the fusion target set and in the heterogeneous target set without success, it is likely that the object has never been detected by any sensor other than the first sensor, so an attempt is made to match the first target against the targets in the homologous target set. If the matching succeeds, the first target and the first homologous target refer to the same object, and since the attributes of the first target represent the latest state of that object, the attributes of the first homologous target can be updated with the first target. If the matching fails, the target is newly appeared and has not been detected by any sensor before, so it can be added to the homologous target set, thereby updating the homologous target set.
It should be noted that the homologous target set and the heterogeneous target set are relative concepts, depending on which sensor detected the currently input first target; if a second target is input, the roles of homologous target set and heterogeneous target set may be interchanged, so updating the homologous target set is in fact also updating a heterogeneous target set.
In an implementation manner of the first aspect, the matching the first target with targets in the homologous target set includes: judging whether any target in the homologous target set has the same ID as the first target; if so, determining the target with that ID as the first homologous target, and otherwise determining that the first target matches no target in the homologous target set.
A single sensor also has a simple target recognition capability when detecting targets; for example, the vision sensor can determine whether targets detected in two consecutive image frames refer to the same object. If they do, the newly detected target may be assigned the same ID as the corresponding old target. Thus, when matching a first target with the targets in the homologous target set, the matching can be performed via the targets' IDs.
In an implementation manner of the first aspect, the updating, with the first target, the attribute of the first homologous target includes: updating an attribute of the first homologous target based on the first target and a Kalman filter.
In an implementation manner of the first aspect, the updating the attribute of the first homologous target based on the first target and a kalman filter includes: inputting the transverse distance and the longitudinal distance of the first target into the Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance respectively, and taking a transverse distance estimated value and a longitudinal distance estimated value output by the Kalman filter as the transverse distance and the longitudinal distance of the first homologous target after updating; and directly taking the speed, the direction angle, the category, the confidence coefficient and the size of the first target as the updated speed, direction angle, category, confidence coefficient and size of the first homologous target.
In an implementation manner of the first aspect, the updating the attribute of the first homologous target based on the first target and a kalman filter includes: inputting the transverse distance and the longitudinal distance of the first target into the Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance respectively, and taking a transverse distance estimated value and a longitudinal distance estimated value output by the Kalman filter as the transverse distance and the longitudinal distance of the first homologous target after updating; and directly taking the direction angle, the speed and the acceleration of the first target as the direction angle, the speed and the acceleration of the updated first homologous target.
In one implementation of the first aspect, each target in each target set is assigned a life cycle that characterizes how long the target has existed, and the method further comprises: if it is detected that the life cycle of any target is not less than an output threshold, outputting the target; if it is detected that the life cycle of any target is not greater than a discard threshold, removing the target from its target set; wherein the discard threshold is less than the output threshold.
The longer the life cycle of a target, i.e. the longer the target is continuously detected, the higher its reliability, and conversely the lower. A target with high reliability (life cycle not less than the output threshold) can be output to subsequent systems for use, such as a forward collision warning (FCW) system or an autonomous emergency braking (AEB) system; a target with low reliability (life cycle not greater than the discard threshold) can be removed from its target set, since the target may have been falsely detected or may have existed only momentarily; in either case it is meaningless to keep maintaining it, and the storage space it occupies can be released. By setting the life cycle, effective management and maintenance of the target sets can be realized.
In one implementation manner of the first aspect, for any object in the object set, if matching with a newly detected object is successful, the life cycle of the object is increased by 1, otherwise, the life cycle of the object is decreased by 1.
The above implementation provides a specific update rule for the life cycle: if a target in a target set matches a newly detected target successfully, the target is still being detected at the latest moment, so its life cycle should be extended; if the matching fails, the target was not detected at the latest moment, so its life cycle should be shortened. As for the specific amount by which the life cycle is extended or shortened, 1 is the simplest choice, although other values are certainly not excluded.
In a second aspect, an embodiment of the present application provides a target fusion device, including: a target acquiring module, configured to acquire a first target detected by a first sensor; a first target matching module, configured to match the first target with targets in a fusion target set, and if the first target is successfully matched with a first fusion target in the fusion target set, update the attribute of the first fusion target by using the first target; and a second target matching module, configured to, if the first target is not successfully matched with any target in the fusion target set, match the first target with targets in a heterogeneous target set, and if the first target is successfully matched with a first heterogeneous target in the heterogeneous target set, fuse the first target and the first heterogeneous target to generate a second fusion target and add the second fusion target to the fusion target set.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, on which computer program instructions are stored; when the computer program instructions are read and executed by a processor, they perform the steps of the method provided by the first aspect or any one of its possible implementations.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a memory in which computer program instructions are stored, and a processor, where the computer program instructions, when read and executed by the processor, perform the steps of the method provided by the first aspect or any one of the possible implementations of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 illustrates a schematic diagram of a sensor carrier provided by an embodiment of the present application;
FIG. 2 is a flow chart illustrating a target fusion method provided by an embodiment of the present application;
FIG. 3 is a functional block diagram of a target fusion device according to an embodiment of the present disclosure;
fig. 4 shows a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It should be noted that like reference numbers and letters refer to like items in the figures; thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Also, the terms "first," "second," and the like, are used solely to distinguish one entity or action from another entity or action without necessarily being construed as indicating or implying any actual such relationship or order between such entities or actions.
Fig. 1 shows a schematic diagram of a sensor carrier provided in an embodiment of the present application. Referring to fig. 1, a sensor carrier 100 is a platform on which various sensors 110 are mounted, which may be used for real-time target detection. The sensor carrier 100 may be mobile, e.g. a vehicle (such as a car, aircraft, or boat), a robot, etc., but may also be a stationary installation, e.g. a monitoring device, a smart street lamp, etc. The kind of sensor 110 is not limited, and may be, for example, a radar (e.g., millimeter-wave radar, laser radar), a vision sensor (e.g., monocular/binocular camera, depth camera), an infrared sensor, etc.; radar and vision sensors are mainly used as examples below.
In the present application, the sensor 110 is considered to have integrated a conventional target detection algorithm and to be able to output a detected target, and should not be considered to be a mere data acquisition tool. There may be one or more of each type of sensor 110 on the sensor carrier 100, and for simplicity, when explained below, it is considered that there is only one of each type of sensor 110 (or the same type of sensor is considered as one), i.e., each sensor 110 on the sensor carrier 100 is of a different type.
The target fusion method provided in the embodiment of the present application may be performed on the sensor carrier 100, that is, the sensor carrier 100 directly performs target fusion after detecting a target, or the sensor carrier 100 may also be used only for detecting a target and perform target fusion on another electronic device based on a target detection result, and fig. 4 provides a schematic diagram of an electronic device that may be used for performing the target fusion method, which may refer to the following description. Further, the target fusion may be performed while detecting the target, or the target fusion may be performed collectively based on the collected detection data after the target detection is performed for a certain period of time, which is not limited in the present application.
In the present application, the term "target" should be understood to have a dual meaning, both referring to a physical object, such as a person, vehicle, obstacle, etc., that is detected by a sensor; meanwhile, the detection result of the sensor may also be referred to, for example, a data model for describing a physical object, where the model may exist in the form of a structure in the computer, and the data model has a plurality of attributes (corresponding to members in the structure), and the attributes are used for describing the state of the target. Hereinafter, when no ambiguity occurs, the above two meanings of the "target" are not particularly distinguished, and the "target" can be understood in combination with a specific scene.
In one implementation, the attributes of the target may include at least one of: an ID of the object (for uniquely identifying a certain object), a lateral distance between the object and the sensor carrier, a longitudinal distance between the object and the sensor carrier, an orientation angle between the object and the sensor carrier, a velocity of the object, an acceleration of the object, a category of the object, a confidence of the object, and a size of the object. The sensor may assign a value to the attribute of the target upon detection of the target, and the attribute of the target may be updated thereafter. In particular, for the fused object mentioned later, the attribute is calculated from the attribute of the object as the fusion source, and is not given by the sensor.
The detected objects may have different attributes depending on the characteristics of the different kinds of sensors, for example, for image objects detected by a vision sensor, the attributes may include ID, lateral distance, longitudinal distance, orientation angle, velocity, category, confidence, and size; for radar targets detected by the radar, the attributes may include ID, lateral distance, longitudinal distance, direction angle, velocity, and acceleration; in particular, for the fusion target, all of the above-described attributes may be possessed. In a specific practice, a unified data model may be used for all targets, and the data model includes all the above attributes, and if a target detected by a certain type of sensor does not have some attributes, the values of the attributes in its corresponding data model may be null values or invalid values.
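As an illustration only (the patent prescribes no concrete field names or implementation language), such a unified data model might be sketched in Python as follows, with attributes a sensor cannot measure held as None:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Target:
    """Unified data model for a detected or fused target; attributes a
    given sensor cannot measure are left as None (a null/invalid value)."""
    id: int                                      # unique identifier
    lateral_dist: Optional[float] = None         # m, target vs. sensor carrier
    longitudinal_dist: Optional[float] = None    # m, target vs. sensor carrier
    direction_angle: Optional[float] = None      # rad, target vs. sensor carrier
    velocity: Optional[float] = None             # m/s
    acceleration: Optional[float] = None         # m/s^2 (radar targets)
    category: Optional[str] = None               # e.g. "car" (image targets)
    confidence: Optional[float] = None           # 0..1 (image targets)
    size: Optional[Tuple[float, float]] = None   # (width, height) (image targets)
    life_cycle: int = 0                          # see the life-cycle discussion below
```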
Fig. 2 shows a flowchart of an object fusion method provided in an embodiment of the present application, and in fig. 2, like numerals indicate that related contents belong to the same step. Referring to fig. 2, the method includes:
step S200: a first target detected by a first sensor is acquired.
The sensors mounted on the sensor carrier are responsible for collecting real-time data (which may be collected at a fixed frequency, for example) and for target detection based on the collected data, the detected target being taken as input to the target fusion method. The processing flow of any object detected by any sensor at any time is similar, so that the first object detected by the first sensor at a certain time is taken as an example for explanation.
After detecting the first target, the first sensor may initialize the first target, including assigning values to attributes of the target: for example, the ID of the first target may be assigned by the first sensor, and other attributes (e.g., longitudinal distance, lateral distance, etc.) may be taken from the actual measurements of the first sensor. In step S200, the initialized first target is acquired as an input of the target fusion method.
Optionally, an asynchronous mode may be adopted when the first target is acquired. The asynchronous mode is per-sensor: when the first sensor detects the first target, the first target can be acquired as an input to the target fusion method without any concern for whether other sensors have detected targets or are currently providing input. The asynchronous mode has the following advantages: first, it avoids the complex problem of sensor time alignment (strict temporal alignment of multiple sensors cannot be achieved anyway); second, it makes the scheme compatible with any number and any kind of sensors; and third, even if some of the sensors fail, the remaining sensors can still work normally, unaffected.
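A minimal sketch of such per-sensor asynchronous input, assuming one worker thread per sensor feeding a shared queue; the patent does not mandate any particular mechanism, and detect_fn / fuse_fn are hypothetical callables:

```python
import queue
import threading

detections: "queue.Queue" = queue.Queue()   # shared buffer of (sensor_name, target)

def sensor_worker(sensor_name, detect_fn):
    """Each sensor thread enqueues targets as soon as it detects them;
    no time alignment with the other sensors is attempted."""
    while True:
        target = detect_fn()                 # blocking read from this sensor
        if target is not None:
            detections.put((sensor_name, target))

def fusion_loop(fuse_fn):
    """The fusion side consumes targets one at a time in arrival order;
    a failed sensor simply stops enqueueing and nothing else is affected."""
    while True:
        sensor_name, target = detections.get()
        fuse_fn(sensor_name, target)

# e.g. threading.Thread(target=sensor_worker, args=("radar", read_radar),
#                       daemon=True).start()   # read_radar is hypothetical
```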
Step S210: and matching the first target with the targets in the fusion target set, and updating the attribute of the first fusion target by using the first target if the first target is successfully matched with the first fusion target in the fusion target set.
In the scheme of the application, a target set is maintained for each sensor to cache the targets detected by that sensor. For a first target detected by the first sensor, a heterogeneous target set and a homologous target set of the first target can be defined among these target sets: the heterogeneous target set is a set of targets detected by a sensor different from the first sensor, and its targets may be called heterogeneous targets of the first target; the homologous target set is the set of targets detected by the first sensor before detecting the first target, and its targets may be called homologous targets of the first target. Since there may be several sensors different from the first sensor, there may be several heterogeneous target sets; for convenience, only one heterogeneous target set is taken as an example below, and if there are several, the operations performed on one heterogeneous target set are simply repeated for each of them.
It will be readily seen that the definition of the heterogeneous and homogeneous object sets is relevant to the first object: for example, two sensors, namely a radar sensor and a vision sensor, are arranged on the sensor carrier, if the first target is a radar target detected by the radar, the heterogeneous target set is a set of image targets detected by the vision sensor, and the homologous target set is a set formed by radar targets detected by the radar before the first target is detected; in contrast, if the first target is an image target detected by the vision sensor, the heterogeneous target set is a set of radar targets detected by the radar, and the homogeneous target set is a set of image targets detected by the vision sensor before the first target is detected.
In addition, a fusion target set can be maintained to cache the fusion targets generated by fusing targets detected by different sensors. How new targets are added to the fusion target set, the heterogeneous target set and the homologous target set is introduced in the subsequent steps; for convenience of explanation, it is assumed that a certain number of targets already exist in the three target sets when the first target is input. It should also be noted that the fusion process for one target does not necessarily use all three target sets at once; it may use only one (the fusion target set) or two (the fusion target set and the heterogeneous target set).
The first target is matched with the targets in the fusion target set using a preset matching policy A. The matching has only two possible outcomes: either exactly one fusion target in the fusion target set is matched, which is here called the first fusion target, or no target is matched, in which case execution continues with step S220.
In one implementation, the matching policy A may employ the Hungarian matching algorithm in conjunction with threshold values. The Hungarian matching algorithm can obtain the optimal matching relation between two point sets; the result is matched point pairs, the two points of each pair coming from the two point sets respectively. To use the Hungarian matching algorithm, the targets to be matched must first be represented as points in the mathematical sense. Optionally, since each target includes the two attributes of transverse distance and longitudinal distance, the first target and the targets in the fusion target set can be represented as two-dimensional coordinate points by their respective transverse and longitudinal distances; the coordinate point corresponding to the first target is then matched with the coordinate points corresponding to the targets in the fusion target set using the Hungarian matching algorithm, the matching being based on the Euclidean distances between coordinate points, and a matched point pair is finally obtained whose two coordinate points correspond respectively to the first target and a first candidate fusion target in the fusion target set. It will be appreciated that more attributes can also be used, representing targets as higher-dimensional points (dimension greater than 2) for the Hungarian matching algorithm.
By the nature of the Hungarian matching algorithm, it always outputs a matched point pair in the mathematical sense; but whether the first target and the first candidate fusion target corresponding to that point pair should be considered a match in the engineering sense needs further screening. Otherwise, directly determining the first candidate fusion target as the first fusion target may cause a mismatch: although the first target and the first candidate fusion target do not point to the same actual object (the same person, the same vehicle, etc.), they would be treated as pointing to the same actual object purely on the strength of the Hungarian matching result.
One way to avoid mismatches is to set thresholds. Concretely: judge whether the absolute abscissa difference of the two coordinate points in the matched point pair is smaller than a first threshold and whether the absolute ordinate difference is smaller than a second threshold; if both differences are smaller than their respective thresholds, determine the first candidate fusion target as the first fusion target. Otherwise, the first candidate fusion target is considered not to match the first target (i.e., it is a mismatched target, which amounts to overriding the result of the Hungarian matching algorithm); in that case it is confirmed that the first target matches no target in the fusion target set, and execution continues with step S220.
Further, in one implementation, the first threshold and the second threshold are both dynamic thresholds, i.e., the threshold values may change as the abscissa difference and/or the ordinate difference changes. The inventors have found through long-term experiments that dynamic thresholds screen out mismatched targets more effectively than fixed thresholds.
For example, the first threshold and the second threshold may both be set in relation to the ordinate difference of the two coordinate points in the matched point pair, the relation being expressed as

x' = 1 + 0.01y, y' = 2 + 0.1y

wherein x' and y' are the first threshold and the second threshold respectively, and y is the ordinate difference.
Let x denote the abscissa difference between the two coordinate points in a matched point pair, and let the coordinate points corresponding to three targets be A(1, 16), B(1.08, 17), and C(1, 34). For A (the coordinate point of the first target) and B (the coordinate point of the first candidate fusion target), y = 1, x = 0.08, x' = 1.01, y' = 2.1; the threshold requirement is met, so the targets corresponding to A and B match. For A and C, y = 18, x = 0, x' = 1.18, y' = 3.8; the threshold requirement is not met, so the targets corresponding to A and C do not match.
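A minimal sketch of matching policy A, assuming SciPy's linear_sum_assignment as the Hungarian solver and the threshold relations above; the function name and the one-new-target-at-a-time formulation are assumptions, not the patent's prescription. The worked A/B/C example is reproduced at the end:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_policy_a(first_pt, fused_pts):
    """Match one new point against the fused-target points.
    Returns the index of the matched fused target, or None."""
    if not fused_pts:
        return None
    pts = np.asarray(fused_pts, dtype=float)
    p = np.asarray(first_pt, dtype=float)
    # Cost matrix of Euclidean distances (1 x N, since one new target).
    cost = np.linalg.norm(pts - p, axis=1)[None, :]
    _, col = linear_sum_assignment(cost)   # Hungarian algorithm
    j = int(col[0])
    dx = abs(p[0] - pts[j, 0])             # abscissa difference x
    dy = abs(p[1] - pts[j, 1])             # ordinate difference y
    x_thr = 1.0 + 0.01 * dy                # dynamic first threshold x'
    y_thr = 2.0 + 0.1 * dy                 # dynamic second threshold y'
    if dx < x_thr and dy < y_thr:
        return j                           # engineering-sense match
    return None                            # mismatch screened out

# Worked example: A matches B but not C.
assert match_policy_a((1, 16), [(1.08, 17)]) == 0
assert match_policy_a((1, 16), [(1, 34)]) is None
```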
If the first target is successfully matched with the first fusion target in the fusion target set, it is indicated that the first target and the first fusion target point to the same actual object, and the first target is newly detected, so that the attribute of the first target represents the latest state of the object, and thus the attribute of the first fusion target can be updated by using the first target, and the first fusion target after the attribute update can be regarded as an effective estimation of the current state of the object. After the attributes of the first fused target are updated, the first target may be discarded.
In one implementation, the attributes of the first fusion target may be updated based on the first target and a Kalman filter. Kalman filtering is an algorithm that uses a linear system state equation, together with the system's input and output observation data, to optimally estimate the system state.
The update rules for the attributes of the first fusion target are described below, taking as examples the cases where the first target is an image target detected by the vision sensor and where it is a radar target detected by the radar:
(1) The first target being an image target
Calculating an observed value of the transverse distance (the product of the longitudinal distance and the tangent of the direction angle) by using the direction angle of the first target and the longitudinal distance of the first fusion target, inputting the observed value of the transverse distance into a Kalman filter, and taking the estimated value of the transverse distance output by the Kalman filter as the updated transverse distance of the first fusion target;
taking the predicted value of the longitudinal distance output by the Kalman filter as the updated longitudinal distance of the first fusion target;
directly taking the direction angle, the category, the confidence coefficient and the size of the first target as the updated direction angle, the category, the confidence coefficient and the size of the first fusion target;
the ID of the first fusion target remains unchanged.
(2) The first target being a radar target
Inputting the longitudinal distance and the speed of the first target into a Kalman filter as an observed value of the longitudinal distance and an observed value of the speed respectively, and taking an estimated value of the longitudinal distance and an estimated value of the speed output by the Kalman filter as the longitudinal distance and the speed of the updated first fusion target;
taking the predicted value of the transverse distance output by the Kalman filter as the transverse distance of the updated first fusion target;
and directly taking the acceleration of the first target as the updated acceleration of the first fusion target.
The ID of the first fusion target remains unchanged.
Note that the Kalman filter can output a predicted value of the longitudinal distance in (1) because an observed value of the longitudinal distance of a radar target was previously input to the Kalman filter (see the corresponding step in (2)); likewise, the Kalman filter can output a predicted value of the transverse distance in (2) because an observed value of the transverse distance of an image target was previously input to the Kalman filter (see the corresponding step in (1)).
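The two sets of update rules can be illustrated as follows. The one-dimensional random-walk Kalman filter, its noise parameters, and the field names are deliberate simplifications assumed for this sketch; the patent does not fix the filter's state model:

```python
import math

class ScalarKalman:
    """Minimal 1-D (random-walk) Kalman filter, for illustration only;
    the patent does not fix the state model or the noise terms q and r."""
    def __init__(self, x0, p0=1.0, q=0.01, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self):
        self.p += self.q                  # propagate uncertainty
        return self.x                     # predicted value

    def update(self, z):
        self.predict()
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # blend prediction and observation
        self.p *= 1.0 - k
        return self.x                     # estimated value

def update_fused_with_image(fused, img, kf_lat, kf_lon):
    """Update rules (1): the first target is an image target."""
    # Observed lateral distance = fused target's longitudinal distance
    # times the tangent of the image target's direction angle.
    fused.lateral_dist = kf_lat.update(
        fused.longitudinal_dist * math.tan(img.direction_angle))
    fused.longitudinal_dist = kf_lon.predict()       # prediction only
    fused.direction_angle = img.direction_angle      # copied directly
    fused.category, fused.confidence = img.category, img.confidence
    fused.size = img.size                            # ID stays unchanged

def update_fused_with_radar(fused, radar, kf_lat, kf_lon, kf_vel):
    """Update rules (2): the first target is a radar target."""
    fused.longitudinal_dist = kf_lon.update(radar.longitudinal_dist)
    fused.velocity = kf_vel.update(radar.velocity)
    fused.lateral_dist = kf_lat.predict()            # prediction only
    fused.acceleration = radar.acceleration          # copied directly
```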
Step S220: and if the first target is not successfully matched with any target in the fusion target set, matching the first target with the targets in the heterogeneous target set, if the first target is successfully matched with the first heterogeneous target in the heterogeneous target set, fusing the first target and the first heterogeneous target to generate a second fusion target, and adding the second fusion target to the fusion target set.
When the first target is not successfully matched with any target in the fusion target set, it is matched with the targets in the heterogeneous target set using a preset matching policy B. The matching again has only two possible outcomes: either exactly one heterogeneous target in the heterogeneous target set is matched, which is here called the first heterogeneous target, or no target is matched, in which case execution continues with step S230.
In one implementation, the matching policy B may adopt the Hungarian matching algorithm in combination with threshold values, as already described for matching policy A and not repeated here; it should be noted, however, that the threshold values in matching policy B need not take the same values as in matching policy A.
If the first target is successfully matched with the first heterogeneous target, the object to which the first target points was previously detected by another sensor, so the first target and the first heterogeneous target can be fused to obtain a second fusion target, which has higher credibility because it combines multi-sensor data. One possible fusion method is given below (the steps need not be executed in the order listed), taking as an example the case where one of the first target and the first heterogeneous target is an image target and the other is a radar target:
newly creating an object as a second fusion object, wherein the ID of the object can be redistributed to one;
calculating the transverse distance of the second fusion target (the product of the longitudinal distance and the tangent of the direction angle) by using the direction angle of the image target and the longitudinal distance of the radar target;
directly taking the direction angle, the category, the confidence coefficient and the size of the image target as the direction angle, the category, the confidence coefficient and the size of the second fusion target;
and directly taking the longitudinal distance, the speed and the acceleration of the radar target as the longitudinal distance, the speed and the acceleration of the second fusion target.
And after the target fusion is successful, adding the second fusion target into the fusion target set to update the fusion target set. After the second fused target is generated, the first target may be discarded.
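A sketch of this fusion step, reusing the hypothetical Target model from the earlier sketch; the ID counter is an assumed implementation detail:

```python
import itertools
import math

_fused_ids = itertools.count(1)   # fused targets receive newly assigned IDs

def fuse(image_tgt, radar_tgt):
    """Build the second fusion target from one image target and one radar
    target, following the fusion rules listed above."""
    return Target(
        id=next(_fused_ids),
        # lateral distance = radar longitudinal distance * tan(image angle)
        lateral_dist=radar_tgt.longitudinal_dist
                     * math.tan(image_tgt.direction_angle),
        direction_angle=image_tgt.direction_angle,      # from the image target
        category=image_tgt.category,
        confidence=image_tgt.confidence,
        size=image_tgt.size,
        longitudinal_dist=radar_tgt.longitudinal_dist,  # from the radar target
        velocity=radar_tgt.velocity,
        acceleration=radar_tgt.acceleration,
    )
```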
Step S230: if the first target is not successfully matched with any target in the heterogeneous target set, matching the first target with the targets in the homologous target set, and if the first target is successfully matched with the first homologous target in the homologous target set, updating the attribute of the first homologous target by using the first target; and if the first target is not successfully matched with any target in the homologous target set, adding the first target to the homologous target set.
When the first target is not successfully matched with any target in the heterogeneous target set, it is matched with the targets in the homologous target set using a preset matching policy C. The matching again has only two possible outcomes: either exactly one homologous target in the homologous target set is matched, which is here called the first homologous target, or no target is matched.
In one implementation, the matching policy C may employ ID matching, specifically: judging whether any target in the homologous target set has the same ID as the first target; if so, determining the target with that ID as the first homologous target, and otherwise the first target matches no target in the homologous target set.
ID matching actually relies on a matching capability provided by a single sensor: based on its own target detection algorithm, a sensor also has a simple target recognition capability (i.e., it can recognize a target's identity), so it can assign a reasonable value to a target's ID. For example, a target M detected by the vision sensor in the K-th captured image frame is assigned an ID, say m; a target M' detected by the vision sensor in the (K+1)-th frame is then recognized by the vision sensor as pointing to the same object as M, so the ID attribute of M' can be set directly to m. If M', taken as a first target, is matched in turn against the fusion target set and the heterogeneous target set and both matchings fail, the matching against the homologous target set finally succeeds, because M, cached in the homologous target set, has the same ID m; M is then the first homologous target.
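Matching policy C then reduces to a lookup by ID; a sketch, using the field names assumed earlier:

```python
def match_policy_c(first_target, homologous_set):
    """ID match: return the homologous target sharing the first target's
    sensor-assigned ID, or None if the first target is new."""
    for tgt in homologous_set:
        if tgt.id == first_target.id:
            return tgt
    return None
```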
If the first target matches the first homologous target successfully, the first sensor has previously detected the actual object to which the target points and is simply continuing to detect it now. Since the first target is newly detected, its attributes represent the latest state of the object, so the attributes of the first homologous target can be updated with the first target, and the first homologous target after the attribute update can be regarded as a valid estimate of the object's current state. After the attributes of the first homologous target are updated, the first target may be discarded.
In one implementation, the attributes of the first homologous target may be updated based on the first target and a Kalman filter. The update rules for the attributes of the first homologous target are described below, taking as examples the cases where the first target is an image target detected by the vision sensor and where it is a radar target detected by the radar:
(1) The first target being an image target
Inputting the transverse distance and the longitudinal distance of the first target into a Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance respectively, and taking a transverse distance estimated value and a longitudinal distance estimated value output by the Kalman filter as the transverse distance and the longitudinal distance of the updated first homologous target;
directly taking the speed, the direction angle, the category, the confidence coefficient and the size of the first target as the speed, the direction angle, the category, the confidence coefficient and the size of the updated first homologous target;
the ID of the first peer destination remains unchanged.
(2) The first target being a radar target
Inputting the transverse distance and the longitudinal distance of the first target into a Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance respectively, and taking a transverse distance estimated value and a longitudinal distance estimated value output by the Kalman filter as the transverse distance and the longitudinal distance of the updated first homologous target;
directly taking the direction angle, the speed and the acceleration of the first target as the direction angle, the speed and the acceleration of the updated first homologous target;
the ID of the first peer destination remains unchanged.
If the first target fails to match any target in the homologous target set, the target is newly appeared and has not been detected by any sensor before (matching against the fusion target set and the heterogeneous target set has also already failed), so it may be added to the homologous target set, thereby updating the homologous target set.
As mentioned above, the homologous target set and the heterogeneous target set are relative concepts that depend on which sensor detected the currently input first target; in reality both are simply the per-sensor target sets, and this step updates the target set of the corresponding sensor.
Briefly summarizing the above method flow, for a newly input first target, matching with the targets in the fusion target set is preferentially performed, because the targets in the fusion target set are the result of multi-sensor fusion, which is equivalent to confirmation by multiple sensors, the reliability is high, if matching succeeds, it is indicated that the first target and the first fusion target point to the same actual object, and the attribute of the first target represents the latest state of the object, so that the attribute of the first fusion target can be updated by using the first target.
If the matching in the fusion target set fails, it is likely that the first target was previously detected only by a single sensor and has therefore never undergone target fusion, so an attempt is made to match it with the targets in the heterogeneous target set. The targets in the heterogeneous target set come from a different source than the first target, so their detection data are complementary to those of the first sensor; matching against them before the homologous target set therefore promotes the complementation of sensor data. If the matching succeeds, the first target and the first heterogeneous target point to the same actual object, and the attributes of the second fusion target generated by fusing the first target and the first heterogeneous target combine the data of multiple sensors, so the complementarity of the data is actually put to use. After the second fusion target is generated, it is added to the fusion target set, thereby updating the fusion target set.
If the matching in the heterogeneous target set also fails, the first target is finally matched with the targets in the homologous target set. If this matching succeeds, the first target and the first homologous target point to the same actual object, and since the attributes of the first target represent the latest state of that object, the attributes of the first homologous target can be updated with the first target. If the matching fails, the target is newly appeared and has not been detected by any sensor before, so it can be added to the homologous target set, thereby updating the homologous target set.
Therefore, the method provided by the embodiment of the application provides a brand-new and effective target fusion scheme, the target generated after fusion is combined with multi-sensor data, so that the estimation of the target state (represented by the attribute of the target) is more accurate than that of a single sensor, and the target fusion realizes the information complementation of the sensors and reduces the uncertainty in the target detection process, so that the reliability and the utilization value of the detected target are improved.
Further, with continued reference to fig. 2, in some implementation manners, effective management and maintenance of the target set may also be implemented by setting a life cycle, which is specifically implemented as follows:
step S240: if the life cycle of any target is detected to be not less than the output threshold, outputting the target; if it is detected that the life cycle of any one object is not greater than the drop threshold, the object is removed from the set of objects.
A life cycle is set for each target added to a target set (which may be any of the fusion target set, the heterogeneous target sets and the homologous target sets); it characterizes how long the target has existed in the system. The longer a target's life cycle, the higher its reliability, and conversely the lower.
The life cycle of a target may be updated each time a new target is detected (i.e., on target input in step S200). For example, in one alternative, for any target in any target set: if it matches the newly detected target (the matching process is as described above), the target is still being observed at the latest moment, so its life cycle should be extended; if it does not match, the target was not observed at the latest moment, so its life cycle should be shortened. In the simplest processing mode, extension adds 1 to the life cycle and shortening subtracts 1, although other step values are not excluded in other processing modes.
Targets whose life cycle reflects high reliability (life cycle not less than the output threshold) can be output to downstream systems. For an automobile, for example, such targets can be output to the forward collision warning (FCW) system, the automatic emergency braking (AEB) system, and other systems for decision making.
It should be noted that although fused targets have the highest confidence, outputting only the fusion targets that meet the output condition is merely one possible implementation; the more common implementation is to output all targets in the target sets that meet the output condition, because not every target can be fused successfully. At night, for example, targets are detected mainly by the radar and added to the radar's target set; they cannot be fused with targets from the vision sensor, because the vision sensor is largely ineffective in a night environment. If targets detected by the radar alone were never output, the downstream systems could not work normally at night.
Targets whose life cycle reflects low reliability (life cycle not greater than the discard threshold) can be removed from their target set. Such a target may be a false detection, or it may have existed only momentarily; in either case it is pointless to keep maintaining it, and removing it releases the storage it occupies, which matters because caching a large number of targets in the target sets consumes considerable storage resources.
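As an illustration, the sketch below implements this life-cycle bookkeeping under the simple +1/-1 rule. The threshold values and the `life` attribute name are hypothetical; the patent only requires that the discard threshold be less than the output threshold.

```python
OUTPUT_THRESHOLD = 5    # hypothetical value; the patent fixes no number
DISCARD_THRESHOLD = 0   # must be below the output threshold

def age_and_prune(target_set, matched):
    """Update life cycles after one detection cycle, then output or discard.

    target_set: list of targets, each assumed to carry a 'life' attribute.
    matched: the subset of target_set matched by the latest detections.
    """
    outputs, survivors = [], []
    for t in target_set:
        t.life = t.life + 1 if t in matched else t.life - 1
        if t.life <= DISCARD_THRESHOLD:
            continue                    # false/transient target: free its slot
        survivors.append(t)
        if t.life >= OUTPUT_THRESHOLD:
            outputs.append(t)           # reliable enough for FCW/AEB etc.
    target_set[:] = survivors           # prune in place
    return outputs
```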
Fig. 3 is a functional block diagram of a target fusion apparatus 300 according to an embodiment of the present application. Referring to fig. 3, the target fusion apparatus 300 includes:
a target acquiring module 310, configured to acquire a first target detected by a first sensor;
a first target matching module 320, configured to match the first target with a target in a fusion target set, and if the first target is successfully matched with a first fusion target in the fusion target set, update an attribute of the first fusion target by using the first target;
a second target matching module 330, configured to match the first target with a target in a heterogeneous target set if the first target is not successfully matched with any target in the fused target set, and generate a second fused target based on the fusion of the first target and a first heterogeneous target in the heterogeneous target set and add the second fused target to the fused target set if the matching of the first target with the first heterogeneous target is successful;
wherein the fusion target set is a set of fusion targets generated by fusing targets detected by different sensors, and the heterogeneous target set is a set of targets detected by a sensor different from the first sensor.
In one implementation of the target fusion apparatus 300, the attributes of each target include at least one of: an ID of the target, a transverse distance between the target and a sensor carrier, a longitudinal distance between the target and the sensor carrier, a direction angle between the target and the sensor carrier, a velocity of the target, an acceleration of the target, a category of the target, a confidence of the target, and a size of the target.
In one implementation of the target fusion apparatus 300, the first target matching module 320 matches the first target with the targets in the fusion target set by: representing the first target and the targets in the fusion target set as coordinate points using their respective transverse and longitudinal distances; matching the coordinate point corresponding to the first target against the coordinate points corresponding to the targets in the fusion target set using the Hungarian matching algorithm to obtain a matching point pair, wherein the Hungarian matching algorithm bases its matching on the Euclidean distances between coordinate points, and the two coordinate points in the matching point pair correspond to the first target and a first candidate fusion target in the fusion target set, respectively; and judging whether the abscissa difference value and the ordinate difference value of the two coordinate points in the matching point pair are smaller than a first threshold and a second threshold, respectively: if both difference values are smaller than their respective thresholds, the first candidate fusion target is determined to be the first fusion target; otherwise, the first target is determined not to match any target in the fusion target set.
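For illustration, the sketch below performs this coordinate-point matching, assuming SciPy's assignment solver as the Hungarian algorithm; the gate thresholds are hypothetical placeholder values (the next paragraph notes they may themselves depend on the coordinate differences).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-style solver

def match_points(dets, fused, thr_x=1.0, thr_y=2.0):
    """dets: (N, 2) array, fused: (M, 2) array of (transverse, longitudinal)."""
    if len(dets) == 0 or len(fused) == 0:
        return []
    # Cost = Euclidean distance between every detection/fused-target pair.
    cost = np.linalg.norm(dets[:, None, :] - fused[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)    # globally optimal pairing
    pairs = []
    for i, j in zip(rows, cols):
        dx = abs(dets[i, 0] - fused[j, 0])
        dy = abs(dets[i, 1] - fused[j, 1])
        if dx < thr_x and dy < thr_y:           # both gates must pass
            pairs.append((i, j))                # accepted match
    return pairs
```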
In one implementation of the target fusion apparatus 300, values of the first threshold and the second threshold are both related to the abscissa difference and/or the ordinate difference.
In one implementation of the target fusion apparatus 300, the first target matching module 320 updates the attribute of the first fusion target with the first target by: updating the attribute of the first fusion target based on the first target and a Kalman filter.
In one implementation of the target fusion apparatus 300, the first target is an image target detected by a visual sensor, and the first target matching module 320 updates the attribute of the first fusion target based on the first target and a Kalman filter by: calculating an observed value of the transverse distance using the direction angle of the first target and the longitudinal distance of the first fusion target, inputting the observed value of the transverse distance into the Kalman filter, and taking the estimated value of the transverse distance output by the Kalman filter as the updated transverse distance of the first fusion target; taking the predicted value of the longitudinal distance output by the Kalman filter as the updated longitudinal distance of the first fusion target; and directly taking the direction angle, the category, the confidence coefficient and the size of the first target as the updated direction angle, category, confidence coefficient and size of the first fusion target.
In one implementation of the target fusion apparatus 300, the first target is a radar target detected by a radar, and the first target matching module 320 updates the attribute of the first fusion target based on the first target and the Kalman filter by: inputting the longitudinal distance and the speed of the first target into the Kalman filter as an observed value of the longitudinal distance and an observed value of the speed, respectively, and taking the estimated value of the longitudinal distance and the estimated value of the speed output by the Kalman filter as the updated longitudinal distance and speed of the first fusion target; taking the predicted value of the transverse distance output by the Kalman filter as the updated transverse distance of the first fusion target; and directly taking the acceleration of the first target as the updated acceleration of the first fusion target.
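The two update branches just described can be compressed into a sketch like the following, assuming a state vector s = [transverse distance, longitudinal distance, speed] and assuming the camera's transverse observation is reconstructed as longitudinal distance times tan(direction angle); the patent states that the angle and the longitudinal distance are combined but does not give the formula, and the noise values here are hypothetical.

```python
import numpy as np

def kf_update(s, P, z, H, R):
    """Standard Kalman measurement update (the predict step is omitted)."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    s_new = s + K @ (z - H @ s)                    # correct with the innovation
    P_new = (np.eye(len(s)) - K @ H) @ P
    return s_new, P_new

def update_from_camera(s, P, angle_rad):
    # Camera branch: only the transverse distance is observed (reconstructed
    # from the angle and the track's current longitudinal distance); the
    # longitudinal distance keeps the filter's predicted value.
    z = np.array([s[1] * np.tan(angle_rad)])       # reconstructed observation
    H = np.array([[1.0, 0.0, 0.0]])
    return kf_update(s, P, z, H, np.array([[0.5]]))

def update_from_radar(s, P, y_obs, v_obs):
    # Radar branch: longitudinal distance and speed are observed directly;
    # the transverse distance keeps the filter's predicted value.
    z = np.array([y_obs, v_obs])
    H = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    return kf_update(s, P, z, H, np.diag([1.0, 0.25]))
```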
In one implementation of the target fusion apparatus 300, one of the first target and the first heterogeneous target is an image target detected by a vision sensor and the other is a radar target detected by a radar, and the second target matching module 330 generates the second fusion target based on the fusion of the first target and the first heterogeneous target by: calculating the transverse distance of the second fusion target by using the direction angle of the image target and the longitudinal distance of the radar target; directly taking the direction angle, the category, the confidence coefficient and the size of the image target as the direction angle, the category, the confidence coefficient and the size of the second fusion target; and directly taking the longitudinal distance, the speed and the acceleration of the radar target as the longitudinal distance, the speed and the acceleration of the second fusion target.
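A minimal sketch of this fusion step, following the attribute assignments above; the transverse-distance formula (longitudinal distance times tan(direction angle)) is an assumption, since the patent only says the two quantities are combined:

```python
import math

def fuse(image_tgt: dict, radar_tgt: dict) -> dict:
    return {
        # appearance attributes are taken from the camera
        "angle": image_tgt["angle"],
        "category": image_tgt["category"],
        "confidence": image_tgt["confidence"],
        "size": image_tgt["size"],
        # motion attributes are taken from the radar
        "y": radar_tgt["y"],
        "speed": radar_tgt["speed"],
        "accel": radar_tgt["accel"],
        # the transverse distance combines both sensors (assumed formula)
        "x": radar_tgt["y"] * math.tan(image_tgt["angle"]),
    }
```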
In one implementation of the target fusion apparatus 300, the target acquiring module 310 acquires the first target detected by the first sensor in an asynchronous manner.
In one implementation of the target fusion apparatus 300, the apparatus further comprises:
a third target matching module, configured to match the first target with a target in a homologous target set if the first target is not successfully matched with any target in the heterogeneous target set, update an attribute of the first homologous target using the first target if the first target is successfully matched with a first homologous target in the homologous target set, and add the first target to the homologous target set if the first target is not successfully matched with any target in the homologous target set; wherein the homologous target set is a set of targets detected by the first sensor prior to detecting the first target.

In one implementation of the target fusion apparatus 300, the third target matching module matches the first target with the targets in the homologous target set by: judging whether any target in the homologous target set has the same ID as the first target; if so, determining the target with that ID as the first homologous target, and otherwise determining that the first target does not match any target in the homologous target set.
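Since same-sensor tracks can be paired purely by the detector's own track ID, this match reduces to a lookup; a one-line sketch (names hypothetical):

```python
def match_homologous(det_id, homo_set):
    # Return the track whose ID equals the detection's ID, or None.
    return next((t for t in homo_set if t.tid == det_id), None)
```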
In one implementation of the target fusion apparatus 300, the third target matching module updates the attribute of the first homologous target with the first target by: updating the attribute of the first homologous target based on the first target and a Kalman filter.
In one implementation of the target fusion apparatus 300, the first target is an image target detected by a visual sensor, and the third target matching module updates the attribute of the first homologous target based on the first target and a Kalman filter by: inputting the transverse distance and the longitudinal distance of the first target into the Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance, respectively, and taking the estimated value of the transverse distance and the estimated value of the longitudinal distance output by the Kalman filter as the updated transverse and longitudinal distances of the first homologous target; and directly taking the speed, the direction angle, the category, the confidence coefficient and the size of the first target as the updated speed, direction angle, category, confidence coefficient and size of the first homologous target.
In one implementation of the target fusion apparatus 300, the first target is a radar target detected by a radar, and the third target matching module updates the attribute of the first homologous target based on the first target and the Kalman filter by: inputting the transverse distance and the longitudinal distance of the first target into the Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance, respectively, and taking the estimated value of the transverse distance and the estimated value of the longitudinal distance output by the Kalman filter as the updated transverse and longitudinal distances of the first homologous target; and directly taking the direction angle, the speed and the acceleration of the first target as the updated direction angle, speed and acceleration of the first homologous target.
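The homologous update differs from the fusion update only in the observation model: both distances are observed directly, and the remaining attributes are copied over outside the filter. Reusing the hypothetical kf_update helper from the sketch above:

```python
import numpy as np

def update_homologous(s, P, x_obs, y_obs):
    # Both the transverse and the longitudinal distance are observed, so H
    # selects the first two state components; the noise values are hypothetical.
    z = np.array([x_obs, y_obs])
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    return kf_update(s, P, z, H, np.diag([0.5, 1.0]))
```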
In one implementation of the target fusion apparatus 300, the targets in each target set are provided with a life cycle characterizing how long the target has persisted, and the apparatus further comprises:
a life cycle management module, configured to output the target if the life cycle of any target is detected to be not less than the output threshold, and to remove the target from its target set if the life cycle of any target is detected to be not greater than the discard threshold; wherein the discard threshold is less than the output threshold.
In one implementation of the target fusion apparatus 300, the life cycle management module is further configured to: for any target in a target set, if the matching with the newly detected target is successful, increase the life cycle of the target by 1; otherwise, decrease the life cycle of the target by 1.
The implementation principle and resulting technical effects of the target fusion apparatus 300 provided in this embodiment have been introduced in the foregoing method embodiments; for brevity, where the apparatus embodiments omit details, reference may be made to the corresponding content of the method embodiments.
Fig. 4 shows a possible structure of an electronic device 400 provided in an embodiment of the present application. Referring to fig. 4, the electronic device 400 includes: a processor 410, a memory 420, and a communication interface 430, which are interconnected and in communication with each other via a communication bus 440 and/or other form of connection mechanism (not shown).
The memory 420 includes one or more units (only one is shown in the figure), which may be, but are not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The processor 410, and possibly other components, may access, read from, and/or write to the memory 420.
The processor 410 includes one or more units (only one is shown), which may be an integrated circuit chip with signal processing capability. The processor 410 may be a general-purpose processor, including a Central Processing Unit (CPU), a Micro Control Unit (MCU), a Network Processor (NP), or another conventional processor; or a special-purpose processor, including a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The communication interface 430 includes one or more units (only one is shown) that can be used to communicate, directly or indirectly, with other devices for data exchange. The communication interface 430 may be an Ethernet interface; a high-speed network interface (such as an InfiniBand network); a mobile communication network interface, such as an interface for 3G, 4G, or 5G networks; a bus interface such as USB, CAN, I2C, or SPI; or another type of interface with data transceiving capability.
One or more computer program instructions may be stored in the memory 420 and read and executed by the processor 410 to implement the steps of the target fusion method provided by the embodiments of the present application, as well as other desired functions.
It will be appreciated that the configuration shown in fig. 4 is merely illustrative and that electronic device 400 may include more or fewer components than shown in fig. 4 or have a different configuration than shown in fig. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof. The electronic device 400 may be a physical device, such as a PC, a laptop, a tablet, a cell phone, a robot, a server, an embedded device, etc., or may be a virtual device, such as a virtual machine, a container, etc. The electronic device 400 is not limited to a single device, and may be a combination of a plurality of devices or a cluster including a large number of devices. In the embodiment of the present application, the sensor carrier 100 in fig. 1 can be implemented by using the structure of the electronic device 400 (of course, the sensor 110 is added to the electronic device 400).
The embodiment of the present application further provides a computer-readable storage medium, where computer program instructions are stored on the computer-readable storage medium, and when the computer program instructions are read and executed by a processor of a computer, the steps of the target fusion method provided in the embodiment of the present application are executed. The computer-readable storage medium may be implemented as, for example, memory 420 in electronic device 400 in fig. 4.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. A target fusion method, comprising:
acquiring a first target detected by a first sensor;
matching the first target with targets in a fusion target set, and if the first target is successfully matched with a first fusion target in the fusion target set, updating the attribute of the first fusion target by using the first target;
if the first target is not successfully matched with any target in the fused target set, matching the first target with targets in a heterogeneous target set, if the first target is successfully matched with a first heterogeneous target in the heterogeneous target set, fusing the first target and the first heterogeneous target to generate a second fused target, and adding the second fused target to the fused target set;
if the first target is not successfully matched with any target in the heterogeneous target set, matching the first target with a target in a homologous target set, and if the first target is successfully matched with a first homologous target in the homologous target set, updating the attribute of the first homologous target by using the first target; if the first target is not successfully matched with any target in the homologous target set, adding the first target to the homologous target set;
wherein the fused target set is a set of fused targets generated after fusing targets detected by different sensors, the heterogeneous target set is a set of targets detected by a sensor different from a first sensor, and the homologous target set is a set of targets detected by the first sensor before detecting the first target;
one of the first target and the first heterogeneous target is an image target detected by a vision sensor, and the other is a radar target detected by a radar, and the generating of the second fused target based on the fusion of the first target and the first heterogeneous target comprises:
calculating the transverse distance of the second fusion target by using the direction angle of the image target and the longitudinal distance of the radar target;
directly taking the direction angle, the category, the confidence coefficient and the size of the image target as the direction angle, the category, the confidence coefficient and the size of the second fusion target;
directly taking the longitudinal distance, the speed and the acceleration of the radar target as the longitudinal distance, the speed and the acceleration of the second fusion target;
the updating the attribute of the first fusion target by using the first target comprises:
updating an attribute of the first fusion target based on the first target and a Kalman filter;
if the first target is an image target detected by a visual sensor, updating the attribute of the first fusion target based on the first target and a Kalman filter includes:
calculating an observed value of a transverse distance by using the direction angle of the first target and the longitudinal distance of the first fusion target, inputting the observed value of the transverse distance into the Kalman filter, and taking an estimated value of the transverse distance output by the Kalman filter as the updated transverse distance of the first fusion target;
taking the predicted value of the longitudinal distance output by the Kalman filter as the updated longitudinal distance of the first fusion target;
directly taking the direction angle, the category, the confidence coefficient and the size of the first target as the updated direction angle, the category, the confidence coefficient and the size of the first fusion target;
if the first target is a radar target detected by a radar, updating the attribute of the first fusion target based on the first target and a Kalman filter includes:
inputting the longitudinal distance and the speed of the first target into the Kalman filter as an observed value of the longitudinal distance and an observed value of the speed respectively, and taking an estimated value of the longitudinal distance and an estimated value of the speed output by the Kalman filter as the updated longitudinal distance and speed of the first fusion target;
taking the predicted value of the transverse distance output by the Kalman filter as the transverse distance of the first fusion target after updating;
directly taking the acceleration of the first target as the updated acceleration of the first fusion target;
the updating the attribute of the first homologous target by the first target comprises: updating an attribute of the first homologous target based on the first target and a Kalman filter;
if the first target is an image target detected by a visual sensor, updating the attribute of the first homologous target based on the first target and a Kalman filter includes:
inputting the transverse distance and the longitudinal distance of the first target into the Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance respectively, and taking a transverse distance estimated value and a longitudinal distance estimated value output by the Kalman filter as the transverse distance and the longitudinal distance of the first homologous target after updating;
directly taking the speed, the direction angle, the category, the confidence coefficient and the size of the first target as the updated speed, direction angle, category, confidence coefficient and size of the first homologous target;
if the first target is a radar target detected by a radar, updating the attribute of the first homologous target based on the first target and a Kalman filter includes:
inputting the transverse distance and the longitudinal distance of the first target into the Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance respectively, and taking a transverse distance estimated value and a longitudinal distance estimated value output by the Kalman filter as the transverse distance and the longitudinal distance of the first homologous target after updating;
and directly taking the direction angle, the speed and the acceleration of the first target as the direction angle, the speed and the acceleration of the updated first homologous target.
2. The target fusion method of claim 1, wherein the attributes of each target include at least one of: an ID of the target, a transverse distance between the target and a sensor carrier, a longitudinal distance between the target and the sensor carrier, a direction angle between the target and the sensor carrier, a velocity of the target, an acceleration of the target, a category of the target, a confidence of the target, and a size of the target.
3. The target fusion method according to claim 2, wherein said matching the first target with targets in a fusion target set comprises:
representing the first target and the targets in the fused target set as coordinate points by using respective transverse distances and longitudinal distances;
matching the coordinate point corresponding to the first target with the coordinate point corresponding to the target in the fusion target set by using a Hungarian matching algorithm to obtain a matching point pair; the Hungarian matching algorithm is based on Euclidean distances between coordinate points during matching, and two coordinate points in the matching point pair respectively correspond to the first target and a first candidate fusion target in the fusion target set;
and judging whether the abscissa difference value and the ordinate difference value of the two coordinate points in the matching point pair are smaller than a first threshold and a second threshold, respectively; if both difference values are smaller than their respective thresholds, determining the first candidate fusion target as the first fusion target, and otherwise determining that the first target does not match any target in the fusion target set.
4. The target fusion method of claim 3, wherein values of the first threshold and the second threshold are both related to the abscissa difference and/or the ordinate difference.
5. The target fusion method of claim 1, wherein the acquiring the first target detected by the first sensor comprises:
a first target detected by a first sensor is acquired in an asynchronous manner.
6. The target fusion method according to claim 1, wherein matching the first target with targets in a homologous target set comprises:
judging whether the IDs of the targets in the homologous target set are the same as the ID of the first target, if so, determining the target with the ID in the homologous target set as the first homologous target, otherwise, determining that the first target is not matched with any target in the homologous target set.
7. The target fusion method according to claim 1, wherein the targets in each target set are provided with a life cycle characterizing the duration of the targets, the method further comprising:
if the life cycle of any target is detected to be not less than the output threshold, outputting the target;
if the life cycle of any target is detected to be not greater than the discard threshold, removing the target from its target set;
wherein the discard threshold is less than the output threshold.
8. The target fusion method of claim 7, wherein for any target in a target set, if matching with the newly detected target is successful, the life cycle of the target is increased by 1; otherwise, the life cycle of the target is decreased by 1.
9. A target fusion device, comprising:
the target acquisition module is used for acquiring a first target detected by the first sensor;
the first target matching module is used for matching the first target with targets in a fusion target set, and if the first target is successfully matched with a first fusion target in the fusion target set, updating the attribute of the first fusion target by using the first target;
a second target matching module, configured to match the first target with a target in a heterogeneous target set if the first target is not successfully matched with any target in the fused target set, and generate a second fused target based on the fusion of the first target and a first heterogeneous target in the heterogeneous target set and add the second fused target to the fused target set if the matching of the first target with the first heterogeneous target in the heterogeneous target set is successful;
a third target matching module, configured to match the first target with a target in a homologous target set if the first target is not successfully matched with any target in the heterogeneous target set, update an attribute of the first homologous target using the first target if the first target is successfully matched with a first homologous target in the homologous target set, and add the first target to the homologous target set if the first target is not successfully matched with any target in the homologous target set;
wherein the fused target set is a set of fused targets generated after fusing targets detected by different sensors, the heterogeneous target set is a set of targets detected by a sensor different from a first sensor, and the homologous target set is a set of targets detected by the first sensor before detecting the first target;
one of the first target and the first heterogeneous target is an image target detected by a vision sensor and the other is a radar target detected by a radar, and the second target matching module generates a second fused target based on the fusion of the first target and the first heterogeneous target, including:
calculating the transverse distance of the second fusion target by using the direction angle of the image target and the longitudinal distance of the radar target;
directly taking the direction angle, the category, the confidence coefficient and the size of the image target as the direction angle, the category, the confidence coefficient and the size of the second fusion target;
directly taking the longitudinal distance, the speed and the acceleration of the radar target as the longitudinal distance, the speed and the acceleration of the second fusion target;
the first target matching module updates the attributes of the first fused target with the first target, including:
updating an attribute of the first fusion target based on the first target and a Kalman filter;
if the first target is an image target detected by a visual sensor, the first target matching module updates the attribute of the first fusion target based on the first target and a Kalman filter, including:
calculating an observed value of a transverse distance by using the direction angle of the first target and the longitudinal distance of the first fusion target, inputting the observed value of the transverse distance into the Kalman filter, and taking an estimated value of the transverse distance output by the Kalman filter as the updated transverse distance of the first fusion target;
taking the predicted value of the longitudinal distance output by the Kalman filter as the updated longitudinal distance of the first fusion target;
directly taking the direction angle, the category, the confidence coefficient and the size of the first target as the updated direction angle, the category, the confidence coefficient and the size of the first fusion target;
if the first target is a radar target detected by a radar, the first target matching module updates the attribute of the first fusion target based on the first target and a Kalman filter, including:
inputting the longitudinal distance and the speed of the first target into the Kalman filter as an observed value of the longitudinal distance and an observed value of the speed respectively, and taking an estimated value of the longitudinal distance and an estimated value of the speed output by the Kalman filter as the updated longitudinal distance and speed of the first fusion target;
taking the predicted value of the transverse distance output by the Kalman filter as the transverse distance of the first fusion target after updating;
directly taking the acceleration of the first target as the updated acceleration of the first fusion target;
the third target matching module updates the attribute of the first homologous target by using the first target, and comprises the following steps: updating an attribute of the first homologous target based on the first target and a Kalman filter;
if the first target is an image target detected by a visual sensor, the third target matching module updates the attribute of the first homologous target based on the first target and a Kalman filter, including:
inputting the transverse distance and the longitudinal distance of the first target into the Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance respectively, and taking a transverse distance estimated value and a longitudinal distance estimated value output by the Kalman filter as the transverse distance and the longitudinal distance of the first homologous target after updating;
directly taking the speed, the direction angle, the category, the confidence coefficient and the size of the first target as the updated speed, direction angle, category, confidence coefficient and size of the first homologous target;
if the first target is a radar target detected by a radar, the third target matching module updates the attribute of the first homologous target based on the first target and a Kalman filter, including:
inputting the transverse distance and the longitudinal distance of the first target into the Kalman filter as an observed value of the transverse distance and an observed value of the longitudinal distance respectively, and taking a transverse distance estimated value and a longitudinal distance estimated value output by the Kalman filter as the transverse distance and the longitudinal distance of the first homologous target after updating;
and directly taking the direction angle, the speed and the acceleration of the first target as the direction angle, the speed and the acceleration of the updated first homologous target.
CN202010925757.7A 2020-09-07 2020-09-07 Target fusion method and device, storage medium and electronic equipment Active CN111783905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010925757.7A CN111783905B (en) 2020-09-07 2020-09-07 Target fusion method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111783905A CN111783905A (en) 2020-10-16
CN111783905B (en) 2021-01-08

Family

ID=72762264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010925757.7A Active CN111783905B (en) 2020-09-07 2020-09-07 Target fusion method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111783905B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112590808B (en) * 2020-12-23 2022-05-17 东软睿驰汽车技术(沈阳)有限公司 Multi-sensor fusion method and system and automatic driving vehicle
CN113269260B (en) * 2021-05-31 2023-02-03 岚图汽车科技有限公司 Multi-sensor target fusion and tracking method and system for intelligent driving vehicle
CN113611112B (en) * 2021-07-29 2022-11-08 中国第一汽车股份有限公司 Target association method, device, equipment and storage medium
CN114092778A (en) * 2022-01-24 2022-02-25 深圳安智杰科技有限公司 Radar camera data fusion system and method based on characterization learning
CN116824549B (en) * 2023-08-29 2023-12-08 所托(山东)大数据服务有限责任公司 Target detection method and device based on multi-detection network fusion and vehicle

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101590B (en) * 2016-06-23 2019-07-19 上海无线电设备研究所 The detection of radar video complex data and processing system and detection and processing method
CN108280442B (en) * 2018-02-10 2020-07-28 西安交通大学 Multi-source target fusion method based on track matching
CN110969058B (en) * 2018-09-30 2023-05-05 毫末智行科技有限公司 Fusion method and device for environment targets
CN109829386B (en) * 2019-01-04 2020-12-11 清华大学 Intelligent vehicle passable area detection method based on multi-source information fusion
CN110850403B (en) * 2019-11-18 2022-07-26 中国船舶重工集团公司第七0七研究所 Multi-sensor decision-level fused intelligent ship water surface target feeling knowledge identification method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960264A (en) * 2019-03-28 2019-07-02 潍柴动力股份有限公司 A kind of target identification method and system
CN109946483A (en) * 2019-04-15 2019-06-28 北京市计量检测科学研究院 Test the speed standard set-up for a kind of scene
CN110675431A (en) * 2019-10-08 2020-01-10 中国人民解放军军事科学院国防科技创新研究院 Three-dimensional multi-target tracking method fusing image and laser point cloud

Also Published As

Publication number Publication date
CN111783905A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111783905B (en) Target fusion method and device, storage medium and electronic equipment
CN110781949B (en) Asynchronous serial multi-sensor-based flight path data fusion method and storage medium
CN110869936A (en) Method and system for distributed learning and adaptation in autonomous vehicles
CN108021891B (en) Vehicle environment identification method and system based on combination of deep learning and traditional algorithm
CN111222568A (en) Vehicle networking data fusion method and device
JP2019185347A (en) Object recognition device and object recognition method
WO2023142814A1 (en) Target recognition method and apparatus, and device and storage medium
KR102446712B1 (en) Cross-camera obstacle tracking method, system and medium
CN114842445A (en) Target detection method, device, equipment and medium based on multi-path fusion
CN110866544A (en) Sensor data fusion method and device and storage medium
CN114677655A (en) Multi-sensor target detection method and device, electronic equipment and storage medium
CN114528941A (en) Sensor data fusion method and device, electronic equipment and storage medium
CN111538918B (en) Recommendation method and device, electronic equipment and storage medium
JP2022537557A (en) Method and apparatus for determining drivable area information
CN116563802A (en) Tunnel scene judging method, device, equipment and storage medium
CN115546597A (en) Sensor fusion method, device, equipment and storage medium
CN113611112B (en) Target association method, device, equipment and storage medium
Shanshan et al. An evaluation system based on user big data management and artificial intelligence for automatic vehicles
CN115494494A (en) Multi-sensor target fusion method and device, electronic equipment and storage medium
CN110969058B (en) Fusion method and device for environment targets
CN116097303A (en) Three-dimensional point cloud clustering method, three-dimensional point cloud clustering device, computer equipment and storage medium
CN115827925A (en) Target association method and device, electronic equipment and storage medium
CN116363615B (en) Data fusion method, device, vehicle and storage medium
CN115457282A (en) Point cloud data processing method and device
CN118015559A (en) Object identification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210601

Address after: 518000 4th floor, building 23, Baotian Industrial Zone, Baotian Third Road, Xixiang street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN ANNGIC TECHNOLOGY Co.,Ltd.

Address before: 610041 second floor, building 1, No. 30, Xixin Avenue, high tech Zone, Chengdu, Sichuan

Patentee before: CHENGDU ANZHIJIE TECHNOLOGY Co.,Ltd.
