CN118133222A - Method and device for fusion processing of sensing data of vehicle sensor equipment

Method and device for fusion processing of sensing data of vehicle sensor equipment

Info

Publication number: CN118133222A
Application number: CN202410264593.6A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 夏宇峰 (Xia Yufeng)
Assignee (current and original): Shanghai Anting Horizon Intelligent Transportation Technology Co., Ltd.
Legal status: Pending
Prior art keywords: coordinate, determining, iteration, coordinates, target

Classifications

    • G01D 21/02 — Measuring two or more variables by means not covered by a single other subclass (G — Physics; G01 — Measuring; Testing; G01D — Measuring not specially adapted for a specific variable; G01D 21/00 — Measuring or testing not otherwise provided for)
    • G06F 18/15 — Statistical pre-processing, e.g. techniques for normalisation or restoring missing data (G06 — Computing; Calculating or Counting; G06F — Electric digital data processing; G06F 18/00 — Pattern recognition; G06F 18/10 — Pre-processing; Data cleansing)
    • G06F 18/24 — Classification techniques (G06F 18/20 — Analysing)
    • G06F 18/25 — Fusion techniques (G06F 18/20 — Analysing)


Abstract

A method and apparatus for fusion processing of sensing data of a vehicle sensor device are disclosed. The method comprises the following steps: determining at least two reference coordinate sequences associated with an observed object of a sensor device based on data acquired by the sensor device disposed on the vehicle; determining an initial iteration coordinate and an initial iteration direction for the observed object based on the at least two reference coordinate sequences; determining a target iteration coordinate based on the at least two reference coordinate sequences, the initial iteration coordinate, and the initial iteration direction; determining a target fitting coordinate sequence associated with the observed object based on the initial iteration coordinate and the target iteration coordinate; and controlling the vehicle to run based on the target fitting coordinate sequence. Embodiments of the disclosure better ensure that the data acquired by the sensor device can be applied effectively.

Description

Method and device for fusion processing of sensing data of vehicle sensor equipment
Technical Field
The disclosure relates to driving technology, and in particular to a method and an apparatus for fusion processing of sensing data of a vehicle sensor device.
Background
During driving, a vehicle may use the data collected by sensor devices provided on it. Such sensor devices may include, but are not limited to, cameras, lidar, and the like. How to ensure that the data collected by the sensor devices provided on the vehicle is used effectively is a matter of concern to those skilled in the art.
Disclosure of Invention
To solve the above technical problem, the present disclosure provides a method and an apparatus for fusion processing of sensing data of a vehicle sensor device.
According to an aspect of the embodiments of the present disclosure, there is provided a method of fusion processing of sensing data of a vehicle sensor device, including:
determining at least two reference coordinate sequences associated with an observed object of a sensor device provided on a vehicle, based on data acquired by the sensor device;
determining an initial iteration coordinate and an initial iteration direction for the observed object based on the at least two reference coordinate sequences;
determining a target iteration coordinate based on the at least two reference coordinate sequences, the initial iteration coordinate, and the initial iteration direction;
determining a target fitting coordinate sequence associated with the observed object based on the initial iteration coordinate and the target iteration coordinate; and
processing the target fitting coordinate sequence according to a processing mode matched with the object type of the observed object.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for fusion processing of sensing data of a vehicle sensor device, including:
a first determining module, configured to determine at least two reference coordinate sequences associated with an observed object of a sensor device provided on a vehicle, based on data acquired by the sensor device;
a second determining module, configured to determine an initial iteration coordinate and an initial iteration direction for the observed object based on the at least two reference coordinate sequences determined by the first determining module;
a third determining module, configured to determine a target iteration coordinate based on the at least two reference coordinate sequences determined by the first determining module and the initial iteration coordinate and initial iteration direction determined by the second determining module;
a fourth determining module, configured to determine a target fitting coordinate sequence associated with the observed object based on the initial iteration coordinate determined by the second determining module and the target iteration coordinate determined by the third determining module; and
a processing module, configured to process the target fitting coordinate sequence determined by the fourth determining module according to a processing mode matched with the object type of the observed object.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above method of fusion processing of sensing data of a vehicle sensor device.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute them to implement the above method of fusion processing of sensing data of a vehicle sensor device.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer program product which, when executed by a processor, performs the above method of fusion processing of sensing data of a vehicle sensor device.
Based on the method, apparatus, storage medium, electronic device, and computer program product for fusion processing of sensing data of a vehicle sensor device provided by the embodiments of the present disclosure, the target fitting coordinate sequence can be regarded as a fusion result obtained by fusing at least two reference coordinate sequences through non-parametric fitting. This avoids the prior-knowledge-based iterative schemes of the related art, helps reduce the adverse effects that jitter in the data to be fused (i.e., the reference coordinate sequences) may cause, and reduces the adverse effects that occlusion may have on the integrity of the data to be fused. It also lets the characteristics of multiple sensors complement one another, achieving a good trade-off among precision, range, viewing angle, noise, stability, and the like, so that the accuracy, reliability, and integrity of the target fitting coordinate sequence as the fusion result can be better ensured. On that basis, processing the target fitting coordinate sequence according to a processing mode matched with the object type of the observed object safeguards the effective application of the data acquired by the sensor device. For example, if the observed object is a specific road surface element, the surroundings of the vehicle can be sensed effectively in real time, so that behavior prediction, decision-making, and path planning for the host vehicle can be performed reasonably.
Drawings
Fig. 1 is a flow chart of a method for fusion processing of sensing data of a vehicle sensor device according to some exemplary embodiments of the present disclosure.
Fig. 2 is a flow chart of a method of determining a target iteration coordinate provided by some exemplary embodiments of the present disclosure.
Fig. 3 is a flow chart of a method of determining a first window area provided by some exemplary embodiments of the present disclosure.
Fig. 4-1 is a schematic diagram of an iterative algorithm in some exemplary embodiments of the present disclosure.
Fig. 4-2 are schematic diagrams of iterative algorithms in further exemplary embodiments of the present disclosure.
Fig. 5 is a flow chart of a method of determining a target iteration coordinate provided by some exemplary embodiments of the present disclosure.
Fig. 6 is a flow chart of a method for obtaining fused coordinates provided by some exemplary embodiments of the present disclosure.
Fig. 7 is a flow chart of a method of determining a target fitting coordinate sequence provided by some exemplary embodiments of the present disclosure.
Fig. 8-1 is a flow chart of a method of determining initial iteration coordinates and initial iteration directions provided by some exemplary embodiments of the present disclosure.
Fig. 8-2 is a flow chart of a method of determining initial iteration coordinates and initial iteration directions provided by further exemplary embodiments of the present disclosure.
Fig. 9 is a schematic diagram of determining initial iteration coordinates and initial iteration directions in some exemplary embodiments of the present disclosure.
Fig. 10 is a flow chart of a method of determining a target fitting coordinate sequence provided by further exemplary embodiments of the present disclosure.
Fig. 11 is a flow chart of a method for fusing at least two reference coordinate sequences provided by some exemplary embodiments of the present disclosure.
Fig. 12 is a schematic structural view of an apparatus for fusion processing of sensing data of a vehicle sensor device according to some exemplary embodiments of the present disclosure.
Fig. 13 is a schematic structural view of a third determination module in some exemplary embodiments of the present disclosure.
Fig. 14 is a schematic structural view of a fourth determination module in some exemplary embodiments of the present disclosure.
Fig. 15 is a schematic structural view of a second determination module in some exemplary embodiments of the present disclosure.
Fig. 16 is a schematic structural view of a fourth determination module in other exemplary embodiments of the present disclosure.
Fig. 17 is a schematic structural diagram of an electronic device provided in some exemplary embodiments of the present disclosure.
Detailed Description
For the purpose of illustrating the present disclosure, exemplary embodiments are described in detail below with reference to the drawings. The described embodiments are clearly only some, not all, of the embodiments of the present disclosure, and it should be understood that the present disclosure is not limited by these exemplary embodiments.
It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Summary of the application
During driving of a vehicle (which may also be referred to as a host vehicle hereinafter), data may be acquired by a sensor device provided to the vehicle. Sensor devices provided to the vehicle may include, but are not limited to, environmental sensing type sensors, motion sensing type sensors, and the like. The data collected by the sensor devices disposed on the vehicle may include, but is not limited to, environmental awareness data collected by environmental awareness type sensors, motion awareness data collected by motion awareness type sensors, and the like.
Alternatively, the environmental perception sensor may include, but is not limited to, a camera, a lidar, etc., and accordingly, the environmental perception data may include, but is not limited to, image data acquired by the camera, point cloud data acquired by the lidar, etc.
Alternatively, the motion-aware sensors may include, but are not limited to, inertial measurement units (IMUs), Global Navigation Satellite System (GNSS) sensors, wheel speed meters, and the like; accordingly, the motion-aware data may include, but is not limited to, angular velocity data and acceleration data acquired by the IMU, position data acquired by the GNSS sensors, wheel speed data acquired by the wheel speed meters, and the like.
After data is collected by a sensor device provided on the vehicle, that data may be used. For example, based on the environmental perception data, the surroundings of the vehicle can be sensed in real time, and behavior prediction, decision-making, and path planning for the vehicle can be performed. As another example, road condition analysis may be performed based on the motion perception data. How to ensure that the data collected by the sensor devices is used effectively is a matter of concern to those skilled in the art.
Exemplary System
In an embodiment of the present disclosure, during driving of a vehicle, a specific object may be observed based on a sensor device provided to the vehicle to obtain at least two reference coordinate sequences associated with the specific object.
In some optional embodiments of the present disclosure, the specific object may be a host vehicle, and each of the at least two reference coordinate sequences may be used to represent a trajectory of the host vehicle over a certain period of time through motion perception. Different reference coordinate sequences may correspond to the same time period, or different reference coordinate sequences may correspond to different time periods.
In other alternative embodiments of the present disclosure, the specific object may be a specific road surface element in the surroundings of the host vehicle, including but not limited to continuous road surface elements such as lane lines and physical road boundary lines, and each of the at least two reference coordinate sequences may be used to represent a partial composition of the specific road surface element obtained through environmental perception and environmental reconstruction. For example, the specific object may be a lane line, and each reference coordinate sequence may represent a partial sub-segment of the lane line. As another example, the specific object may be a physical road boundary line, and each reference coordinate sequence may represent a partial sub-segment of that boundary line.
After obtaining the at least two reference coordinate sequences, the at least two reference coordinate sequences may be fused to obtain a target fit coordinate sequence.
In one example, the specific object is the host vehicle and the different reference coordinate sequences correspond to the same time period; the target fitting coordinate sequence may then be used to represent the accurate trajectory of the host vehicle within that time period.
In another example, the specific object is the host vehicle and the different reference coordinate sequences correspond to different time periods; calling the union of the time periods corresponding to the at least two reference coordinate sequences the target time period, the target fitting coordinate sequence may be used to represent the accurate trajectory of the host vehicle within the target time period.
In yet another example, the specific object is a specific road surface element, and the target fitting coordinate sequence may be used to represent that road surface element accurately and completely.
It should be noted that the target fitting coordinate sequence is the fusion result of the at least two reference coordinate sequences, and its accuracy, reliability, and integrity can be better ensured. Therefore, processing the target fitting coordinate sequence in a manner matched to the object type of the specific object effectively safeguards the application of the data acquired by the sensor device.
Exemplary method
Fig. 1 is a flow chart of a method for fusion processing of sensing data of a vehicle sensor device according to some exemplary embodiments of the present disclosure. The method shown in fig. 1 may include step 110, step 120, step 130, step 140, and step 150.
Step 110, determining at least two reference coordinate sequences associated with an observed object of a sensor device provided to a vehicle based on data acquired by the sensor device.
Alternatively, the observed object of the sensor device may depend on the type of the sensor device. For example, if the sensor device is an environmental perception type sensor, the observed object may be a specific road surface element. As another example, if the sensor device is a motion-aware sensor, the observed object may be the host vehicle. The observed object of the sensor device may be regarded as the specific object discussed above.
Alternatively, the at least two reference coordinate sequences associated with the observed object may be denoted as N reference coordinate sequences. Each of the N reference coordinate sequences may include several coordinates arranged in order; different reference coordinate sequences may contain the same or different numbers of coordinates. Each of the N reference coordinate sequences may be regarded as a vector curve. In addition, the N reference coordinate sequences may be expressed in the same coordinate system, referred to hereinafter as the reference coordinate system for convenience of explanation. The reference coordinate system may be either a two-dimensional coordinate system or a three-dimensional coordinate system (e.g., a world coordinate system). It should be noted that any coordinate in a reference coordinate sequence and any coordinate in the target fitting coordinate sequence referred to in the embodiments of the present disclosure may be a coordinate in the reference coordinate system, and any direction referred to in the embodiments of the present disclosure may be a direction in the reference coordinate system.
In one example, the sensor device may be an environmental perception type sensor including a camera and a lidar. At the same time point, an environmental image can be acquired by the camera and point cloud data can be acquired by the lidar; the same road surface element in the surroundings of the vehicle may appear in both the environmental image and the point cloud data and may serve as the specific road surface element above. For the environmental image, a semantic segmentation algorithm can be used to determine the pixel points representing that road surface element; arranging the coordinates of these pixel points in order yields a first initial coordinate sequence, which is then converted into the reference coordinate system to obtain one reference coordinate sequence. For the point cloud data, the points representing the same road surface element can be determined; arranging their coordinates in order yields a second initial coordinate sequence, which is then converted into the reference coordinate system to obtain another reference coordinate sequence. In this example, the camera and the lidar may be regarded as different data sources, and the different reference coordinate sequences may be regarded as coming from observations of different data sources at the same time point. In some embodiments, a localization map may also serve as a data source.
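As a rough illustration of the image branch of this example, the sketch below collects the pixels of one segmented road surface element into an ordered initial coordinate sequence; the class ID is assumed, and the conversion from pixel coordinates into the reference coordinate system (which the disclosure does not spell out) is omitted:

    import numpy as np

    def mask_to_sequence(seg_mask: np.ndarray, class_id: int) -> np.ndarray:
        """Collect the pixels labelled `class_id` and order them along the
        image's vertical axis to form an initial coordinate sequence."""
        ys, xs = np.nonzero(seg_mask == class_id)
        order = np.argsort(ys)                # simple ordering heuristic
        return np.stack([xs[order], ys[order]], axis=1).astype(float)

    # A toy 5x5 segmentation mask in which class 1 marks a lane-line fragment.
    mask = np.zeros((5, 5), dtype=int)
    mask[1, 2] = mask[2, 2] = mask[3, 3] = 1
    print(mask_to_sequence(mask, class_id=1))   # three ordered pixel coordinates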
In another example, the sensor device may be an environmental perception type sensor including a camera. A first environmental image may be acquired by the camera at a first acquisition time point, and a second environmental image at a second acquisition time point, the next acquisition time point after the first. The same road surface element in the surroundings of the host vehicle may appear in both images and may serve as the specific road surface element above. A reference coordinate sequence may then be determined based on the first environmental image, and another based on the second environmental image (see the previous example for how a reference coordinate sequence is determined from an environmental image), yielding two reference coordinate sequences; owing to factors such as position and occlusion, the two sequences often differ. In this example, the different reference coordinate sequences may be regarded as coming from observations of the same data source at different time points.
In yet another example, the sensor device may be a motion sensing type sensor including a GNSS sensor and a wheel speed meter. By analyzing the position data acquired by the GNSS sensor, the coordinates of the host vehicle in the reference coordinate system at a plurality of time points within a certain time period can be obtained; arranging these coordinates in order yields one reference coordinate sequence. Similarly, by analyzing the wheel speed data acquired by the wheel speed meter, another reference coordinate sequence can be obtained, yielding two reference coordinate sequences.
Step 120, determining initial iteration coordinates and initial iteration directions for the observed object based on the at least two reference coordinate sequences.
Alternatively, for each of the N reference coordinate sequences, a preset number of leading coordinates of that sequence may be determined, and the initial iteration coordinate and the initial iteration direction for the observed object may be determined based on the preset number of coordinates taken from each of the N reference coordinate sequences. The preset number may be 2, 3, or another value, not enumerated here.
It should be noted that, in the embodiment of the present disclosure, the fusion of N reference coordinate sequences may be implemented through an iterative algorithm. The initial iteration coordinates and initial iteration directions for the observed object can be understood as the initial input data required by the iterative algorithm.
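One minimal way to seed the iteration, assuming the first k coordinates of every sequence are averaged for the initial iteration coordinate and the leading displacements for the initial iteration direction (the disclosure leaves the exact construction to the embodiments of figs. 8-1, 8-2, and 9):

    import numpy as np

    def init_iteration(sequences, k=2):
        """Average the leading k coordinates of every sequence for the start
        point; average and normalise the leading displacements for the direction."""
        heads = np.concatenate([seq[:k] for seq in sequences], axis=0)
        start = heads.mean(axis=0)                # initial iteration coordinate
        steps = np.stack([seq[min(k, len(seq)) - 1] - seq[0] for seq in sequences])
        d = steps.mean(axis=0)
        return start, d / np.linalg.norm(d)       # initial iteration direction

    seq_a = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2]])
    seq_b = np.array([[0.1, -0.1], [1.1, 0.0], [2.1, 0.1]])
    start, direction = init_iteration([seq_a, seq_b])
    print(start, direction)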
Step 130, determining the target iteration coordinate based on the at least two reference coordinate sequences, the initial iteration coordinate and the initial iteration direction.
Optionally, based on the N reference coordinate sequences, the initial iteration coordinate, and the initial iteration direction, new iteration coordinates can be derived recursively by running an iterative algorithm, and each newly derived iteration coordinate can serve as a target iteration coordinate, yielding a plurality of target iteration coordinates, denoted hereinafter as M target iteration coordinates.
Step 140, determining a target fit coordinate sequence associated with the observed object based on the initial iteration coordinate and the target iteration coordinate.
Alternatively, the initial iteration coordinates and the M target iteration coordinates may be sequentially arranged to obtain a coordinate sequence, and the coordinate sequence is used as the target fitting coordinate sequence.
Of course, the manner of determining the target fitting coordinate sequence is not limited to this. For example, after the initial iteration coordinate and the M target iteration coordinates are arranged in order to obtain a coordinate sequence, that sequence may be further processed by some post-processing algorithm to obtain the target fitting coordinate sequence, as sketched below.
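The disclosure leaves the post-processing algorithm open. As a purely illustrative choice, a minimal sketch assuming a simple moving-average smoother over the assembled coordinate sequence:

    import numpy as np

    def smooth_sequence(coords: np.ndarray, half_window: int = 1) -> np.ndarray:
        """Replace each coordinate by the mean of its neighbours inside the window."""
        out = np.empty_like(coords)
        for i in range(len(coords)):
            lo, hi = max(0, i - half_window), min(len(coords), i + half_window + 1)
            out[i] = coords[lo:hi].mean(axis=0)   # average over the local window
        return out

    pts = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]])
    print(smooth_sequence(pts))   # middle point is pulled toward its neighbours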
And step 150, processing the target fitting coordinate sequence according to a processing mode matched with the object type of the observed object.
Alternatively, the object type of the observation object may include, but is not limited to, a predetermined road surface element, a host vehicle, and the like.
If the observed object is a preset road surface element, behavior prediction, decision-making, and path planning for the host vehicle can be carried out according to the target fitting coordinate sequence, and the vehicle can be controlled to run on that basis. Here, host vehicle decision-making may concern the speed, acceleration, traveling direction, and so on of the host vehicle.
In one example, if the observation target is a lane line, the traveling direction of the host vehicle may be controlled so as to avoid the traveling direction of the host vehicle from deviating from the extending direction of the lane line as much as possible.
In another example, if the observation target is a physical boundary line of the road, the traveling direction of the host vehicle may be controlled to avoid collision between the host vehicle and the physical boundary line of the road as much as possible.
If the observation object is the host vehicle, the target fitting coordinate sequence can be uploaded to the cloud server, and the cloud server can analyze road conditions according to the target fitting coordinate sequence. For example, the cloud server may collect a large number of target fit coordinate sequences (which may include target fit coordinate sequences corresponding to each of the plurality of vehicles), and by analyzing the large number of target fit coordinate sequences, the cloud server may determine that a specific road segment is severely congested if the cloud server finds that all vehicles passing through the specific road segment remain substantially motionless, and the cloud server may prompt other vehicles to avoid the specific road segment as much as possible.
In the embodiments of the present disclosure, the target fitting coordinate sequence may be regarded as a fusion result obtained by fusing at least two reference coordinate sequences through non-parametric fitting. This avoids the prior-knowledge-based iterative schemes of the related art, helps reduce the adverse effects that jitter in the data to be fused (i.e., the reference coordinate sequences) may cause, reduces the adverse effects that occlusion may have on the integrity of the data to be fused, and lets the characteristics of multiple sensors complement one another, achieving a good trade-off among precision, range, viewing angle, noise, stability, and the like, so that the accuracy, reliability, and integrity of the target fitting coordinate sequence as the fusion result are ensured. Processing the target fitting coordinate sequence according to a processing mode matched with the object type of the observed object then safeguards the effective application of the data acquired by the sensor device. For example, if the observed object is a specific road surface element, the surroundings of the vehicle can be sensed effectively in real time, so that behavior prediction, decision-making, and path planning for the host vehicle can be performed reasonably.
Fig. 2 is a flow chart of a method of determining a target iteration coordinate provided by some exemplary embodiments of the present disclosure. The method shown in fig. 2 may include step 210, step 220, and step 230. Alternatively, a combination of steps 210 through 230 may serve as an alternative embodiment of step 130 of the present disclosure.
Step 210, determining a first window area based on the initial iteration coordinate, the initial iteration direction and the preset window feature.
Optionally, the preset window features may include, but are not limited to, window shape, window size, and the like. If the reference coordinate system is a two-dimensional coordinate system, the window shape may include, but is not limited to, rectangular, circular, oval, etc. If the window shape is rectangular, the window dimensions may include a length and a width. The length may be denoted as L and the width may be denoted as W. If the window shape is circular, the window size may include a radius. If the window shape is elliptical, the window size may include a major axis radius and a minor axis radius.
In some embodiments, the reference coordinate system may be a three-dimensional coordinate system and the window shape may include, but is not limited to, a cuboid, sphere, ellipsoid, and the like. For ease of understanding, the description below will mainly be given by taking the case where the reference coordinate system is a two-dimensional coordinate system as an example.
In some alternative embodiments of the present disclosure, step 210, as shown in fig. 3, may include step 2101, step 2103, and step 2105.
Step 2101, determining a window shape and a window size based on a preset window feature.
Alternatively, the preset window feature may include a window shape and a window size, and then the window shape and the window size may be extracted from the preset window feature.
In some embodiments, the preset window feature may include only one of the window shape and the window size, e.g., only the window shape, then the window shape may be extracted from the preset window feature, and additionally, the window size may be manually selected, or empirically determined.
Step 2103, determining a window unwinding direction based on the initial iteration direction.
Alternatively, the initial iteration direction may be taken as the window expansion direction, or a direction having an angle smaller than a preset angle (which may be a very small angle) with the initial iteration direction may be taken as the window expansion direction.
Step 2105, determining a first window region based on the initial iteration coordinates, the window shape, the window size, and the window expansion direction.
In one example, as shown in fig. 4-1, A0 is the position point represented by the initial iteration coordinate, the window shape is rectangular, the length in the window size is L, the width is W, and the window expansion direction is the direction indicated by arrow 410. Then, A0 may be taken as the center point of one short side, a rectangular region 420 having a length L and a width W and expanding in the direction indicated by the arrow 410 may be determined, and the rectangular region 420 may be taken as the first window region. Of course, A0 may not be the center point of the short side, and may be, for example, a 1/3 position point, a 2/3 position point, or the like on the short side.
In another example, as shown in FIG. 4-2, A0 is the location point represented by the initial iteration coordinate, the window shape is elliptical, the major axis radius in the window size is L/2, the minor axis radius is W/2, and the window expansion direction is the direction indicated by arrow 410. Then, an elliptical area 430 having a length L of the major axis and a length W of the minor axis and expanding in the direction indicated by the arrow 410 may be determined with A0 as one end point of the major axis, and the elliptical area 430 may be used as the first window area.
In the embodiment shown in fig. 3, the shape and size of the window may be determined based on the preset window feature, the direction for expanding the window may be determined based on the initial iteration direction, and the window may be expanded from a suitable position along a suitable direction in combination with the initial iteration coordinate, so that the required first window area may be determined efficiently and reliably.
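A minimal sketch of the window test just described, assuming a rectangular window with A0 at the midpoint of one short side (function and variable names are illustrative):

    import numpy as np

    def in_rect_window(p, anchor, direction, length, width):
        """True if point p lies in the length-by-width rectangle that expands
        from `anchor` (midpoint of one short side) along `direction`."""
        d = np.asarray(direction, dtype=float)
        d = d / np.linalg.norm(d)
        n = np.array([-d[1], d[0]])            # unit normal to the expansion direction
        rel = np.asarray(p, dtype=float) - np.asarray(anchor, dtype=float)
        along, across = rel @ d, rel @ n       # coordinates in the window's own frame
        return 0.0 <= along <= length and abs(across) <= width / 2.0

    # A0 at the origin, window expanding along +x, L = 2, W = 1.
    print(in_rect_window([1.0, 0.3], [0, 0], [1, 0], 2.0, 1.0))   # True
    print(in_rect_window([1.0, 0.8], [0, 0], [1, 0], 2.0, 1.0))   # False: outside the width

The coordinate set of step 220 below is then simply the set of coordinates from all N reference coordinate sequences that pass this membership test.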
Step 220, determining a coordinate set composed of coordinates of the position points represented in the at least two reference coordinate sequences distributed in the first window area.
Optionally, each coordinate in the N reference coordinate sequences may be traversed, and whether the location points represented by the traversed coordinates are distributed in the first window area or not is determined, so that each coordinate in the N reference coordinate sequences, in which the represented location points are distributed in the first window area, may be screened out, and the screened out coordinates may form a coordinate set.
In step 230, the target iteration coordinates are determined based on the set of coordinates.
In some alternative embodiments of the present disclosure, step 230, as shown in fig. 5, may include steps 2301, 2303, and 2305.
Step 2301, fusing each coordinate in the coordinate set to obtain a fused coordinate.
Alternatively, each coordinate in the coordinate set may be fused by mean operation or weighted average operation, to obtain a fused coordinate.
In step 2303, a target iteration direction is determined based on the fused coordinates and the initial iteration coordinates.
In some optional embodiments of the present disclosure, step 2303 may include:
determining a first direction vector pointing from the position point represented by the initial iteration coordinate to the position point represented by the fused coordinate; and
determining a target iteration direction based on the first direction vector.
In one example, if the initial iteration coordinate is (x1, y1) and the fused coordinate is (x2, y2), the first direction vector may be represented as (x2 - x1, y2 - y1).
After the first direction vector is determined, a direction indicated by the first direction vector may be taken as a target iteration direction, or a direction having an included angle smaller than a preset angle with the direction indicated by the first direction vector may be taken as a target iteration direction.
Therefore, based on the fusion coordinates and the initial iteration coordinates, effective references can be provided for determining the target iteration direction through calculation of the direction vector, and therefore the target iteration direction can be determined efficiently and rapidly.
Of course, the embodiment of step 2303 is not limited to this. For example, a direction vector pointing from the position point represented by the fused coordinate to the position point represented by the initial iteration coordinate may be determined, and the direction opposite to that vector may be taken as the target iteration direction.
In step 2305, the target iteration coordinates are determined based on the fused coordinates and the target iteration direction.
In some optional embodiments of the present disclosure, step 2305 may include:
determining, along the target iteration direction, a target position point separated by a preset distance from the position point represented by the fused coordinate; and
determining the target iteration coordinate based on the coordinates of the target position point.
Alternatively, the preset distance may be expressed as delta.
As shown in fig. 4-1 and 4-2, A0 is the position point represented by the initial iteration coordinate and A1 is the position point represented by the fused coordinate. The direction vector pointing from A0 to A1 may be taken as the first direction vector, so the target iteration direction may be the direction of the ray A0A1. The ray A0A1 can then be searched for a position point at distance delta from A1; if the found position point is B0, B0 can be taken as the target position point.
After determining the target location point, the coordinates of the target location point may be taken as target iteration coordinates. Of course, the target iteration coordinates are not limited to the coordinates of the target location points. For example, the target iteration coordinate may be a coordinate of a certain position point on the ray A0A1, which is closer to the target position point.
Therefore, according to the target iteration direction and the preset distance, the required target position point can be determined efficiently and reliably, so that an effective reference is provided for determining the target iteration coordinate, and the target iteration coordinate can be determined efficiently and rapidly.
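In code, the embodiment above reduces to stepping the preset distance delta beyond the fused coordinate along the ray A0A1; a sketch under the same notation:

    import numpy as np

    def step_along_ray(a0, a1, delta):
        """Target position point B0: `delta` beyond A1 on the ray from A0 through A1."""
        a0, a1 = np.asarray(a0, dtype=float), np.asarray(a1, dtype=float)
        d = (a1 - a0) / np.linalg.norm(a1 - a0)   # target iteration direction (unit vector)
        return a1 + delta * d

    print(step_along_ray([0, 0], [1, 0], 0.2))    # [1.2 0.]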
Of course, the embodiment of step 2305 is not so limited. For example, instead of determining the target position point along the target iteration direction, the target position point may be determined along a direction having an angle smaller than a preset angle with the target iteration direction.
In the embodiment shown in fig. 5, the coordinates in the coordinate set may be fused to obtain the fused coordinate, and on that basis the target iteration direction and the target iteration coordinate may be determined in turn. In this way, all the points falling inside the first window area (i.e., the position points represented by the coordinates in the coordinate set) are used to determine the target iteration coordinate, while points outside the first window area (which can be regarded as outliers) are, as far as possible, kept out of that determination, avoiding interference from outliers and safeguarding the rationality and reliability of the determined target iteration coordinate.
Of course, the embodiment of step 230 is not limited to the embodiment shown in fig. 5. For example, each coordinate in the coordinate set may be fused to obtain a fused coordinate, and the fused coordinate is used as the target iteration coordinate.
In the embodiment shown in fig. 2, the first window area can be reasonably determined based on the initial iteration coordinate, the initial iteration direction, and the preset window feature; on that basis, the coordinate set can be reasonably determined so that the target iteration coordinate is determined from it. The target iteration coordinate can thus be determined efficiently and quickly while interference from outliers is avoided as much as possible, which improves the fusion of the N reference coordinate sequences in noisy scenes and provides high robustness.
Fig. 6 is a flow chart of a method for obtaining fused coordinates provided by some exemplary embodiments of the present disclosure. The method shown in fig. 6 may include step 610, step 620, step 630, and step 640. Alternatively, a combination of steps 610 through 640 may be an alternative embodiment of step 2301 of the present disclosure.
Step 610, determining distances between the location points represented by the coordinates in the coordinate set and the location points represented by the initial iteration coordinates, respectively.
Alternatively, for a location point represented by any coordinate in the coordinate set, based on the coordinate and the initial iteration coordinate, a distance between the location point and the location point represented by the initial iteration coordinate may be determined by geometric calculation.
In step 620, the coordinate features of each coordinate in the coordinate set in the preset dimension are respectively determined.
Optionally, the preset dimension may include, but is not limited to, a source dimension, a confidence dimension, a freshness dimension, and the like, and accordingly, the coordinate feature of any coordinate in the coordinate set in the preset dimension may include, but is not limited to, a source type, a confidence value, a freshness value, and the like.
Alternatively, the source type may include, but is not limited to, an image type, a point cloud type, and the like. If the reference coordinate sequence to which any coordinate in the coordinate set belongs is determined based on the image data acquired by the camera, the source type of the coordinate is an image type; if the reference coordinate sequence to which any coordinate in the coordinate set belongs is determined based on the point cloud data acquired by the laser radar, the source type of the coordinate is the point cloud type.
Alternatively, the confidence value may be used to characterize the level of confidence. The confidence value of any coordinate in the coordinate set may be determined by a neural network model. For example, the semantic segmentation algorithm may be implemented through a semantic segmentation model; when determining the semantic class of each pixel point in an environmental image, the model may also predict a corresponding confidence value for that pixel point, which may serve as the confidence value of the pixel point's coordinates.
Alternatively, the freshness value may be used to characterize the level of freshness. For example, the observation object is a host vehicle, the freshness value of any coordinate in the coordinate set may be determined according to the interval duration between the time point corresponding to the coordinate and the current time point, and the interval duration and the freshness value may be negatively correlated.
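The disclosure fixes only the sign of this correlation: freshness falls as the interval duration grows. An exponential decay is one assumed realization (tau is a hypothetical time constant, not taken from the disclosure):

    import math

    def freshness_value(t_coord: float, t_now: float, tau: float = 1.0) -> float:
        """Assumed decay: freshness falls exponentially with the age of the coordinate."""
        age = max(0.0, t_now - t_coord)
        return math.exp(-age / tau)   # in (0, 1]; newer coordinates score higher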
Step 630, determining weights corresponding to the coordinates in the coordinate set based on the distances and the coordinate features corresponding to the coordinates in the coordinate set.
In some optional embodiments of the present disclosure, step 630 may include:
Determining first weights corresponding to the coordinates in the coordinate set based on the distances corresponding to the coordinates in the coordinate set;
And determining second weights respectively corresponding to the coordinates in the coordinate set based on the coordinate features respectively corresponding to the coordinates in the coordinate set.
Alternatively, to determine the first weight, a kernel function may be introduced. The kernel function may be a Gaussian distribution function; its argument may be the distance and its dependent variable the first weight. Thus, the first weight corresponding to any coordinate can be computed simply by substituting the coordinate's distance into the kernel function. Here, the kernel function serves to make the first weight negatively correlated with the distance.
Alternatively, in a manner similar to the determination of the first weight, a specific function may be introduced for determining the second weight, whose independent variable is the coordinate feature and whose dependent variable is the second weight. In determining the second weight, at least one of the following rules may be followed: (1) the second weight corresponding to the point cloud type is greater than the second weight corresponding to the image type; (2) the second weight is positively correlated with the confidence value; (3) the second weight is positively correlated with the freshness value.
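A sketch of both weights under stated assumptions: a Gaussian kernel with an assumed bandwidth sigma for the first weight, and a multiplicative combination honoring rules (1)-(3) for the second; the disclosure does not fix either functional form:

    import math

    def first_weight(distance: float, sigma: float = 1.0) -> float:
        """Gaussian kernel: the weight shrinks as the distance to the position
        point of the initial iteration coordinate grows (sigma is assumed)."""
        return math.exp(-distance ** 2 / (2.0 * sigma ** 2))

    # Assumed source preferences following rule (1): point cloud over image.
    SOURCE_WEIGHT = {"point_cloud": 1.0, "image": 0.6}

    def second_weight(source: str, confidence: float, freshness: float) -> float:
        """One way to honor rules (1)-(3): combine the per-source preference
        with the confidence and freshness values multiplicatively."""
        return SOURCE_WEIGHT[source] * confidence * freshness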
And step 640, fusing each coordinate in the coordinate set based on the weight corresponding to each coordinate in the coordinate set, so as to obtain the fused coordinate.
Alternatively, for any coordinate in the coordinate set, after the corresponding first weight and second weight are determined, the first weight may be added to or multiplied by the second weight, and the sum or product taken as the target weight corresponding to that coordinate.
After the target weights respectively corresponding to the coordinates in the coordinate set are obtained, the coordinates in the coordinate set can be weighted by using the target weights respectively corresponding to the coordinates in the coordinate set, and the weighted result is used as a fusion coordinate.
Assume that the coordinate set includes R coordinates P1, P2, P3, …, PR, that the corresponding target weights are K1, K2, K3, …, KR, and that the fused coordinate is denoted P. Then P may be determined as follows:
P = (K1*P1 + K2*P2 + K3*P3 + … + KR*PR) / (K1 + K2 + K3 + … + KR)
In the embodiment shown in fig. 6, the distance and the coordinate characteristics corresponding to each coordinate in the coordinate set can be combined, the corresponding weights can be reasonably determined for each coordinate in the coordinate set, and the weights are used for fusing each coordinate in the coordinate set, so that the rationality and the reliability of fused coordinates obtained through fusion can be guaranteed.
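The fusion of step 640 is then a direct implementation of the formula above; a minimal sketch:

    import numpy as np

    def fuse(coords, weights) -> np.ndarray:
        """Weighted mean of the window's coordinates:
        P = (K1*P1 + ... + KR*PR) / (K1 + ... + KR)."""
        weights = np.asarray(weights, dtype=float)
        coords = np.asarray(coords, dtype=float)
        return (weights[:, None] * coords).sum(axis=0) / weights.sum()

    coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]])
    print(fuse(coords, [1.0, 2.0, 1.0]))   # [1.   0.25]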
In some embodiments, the target weight corresponding to any coordinate in the set of coordinates may also be determined based only on the corresponding distance, or based only on the coordinate feature corresponding to that coordinate.
It should be noted that, as described above, once the N reference coordinate sequences, the initial iteration coordinate, and the initial iteration direction are known, the first window area may be determined based on the initial iteration coordinate, the initial iteration direction, and the preset window feature; the coordinate set may be determined based on the N reference coordinate sequences and the first window area; the coordinates in the coordinate set may be fused to obtain the fused coordinate; the target iteration direction may be determined based on the fused coordinate and the initial iteration coordinate; and the target iteration coordinate may be determined based on the fused coordinate and the target iteration direction. This may be regarded as the first round of iterative processing of the iterative algorithm. After the first round ends, the target iteration coordinate it produced can be treated as a new initial iteration coordinate and the target iteration direction it produced as a new initial iteration direction, and a second round of iterative processing can be performed in combination with the N reference coordinate sequences. After the second round ends, its target iteration coordinate and target iteration direction likewise become the new initial iteration coordinate and direction for a third round, and so on, until no further iteration is needed.
As shown in fig. 4-1 and 4-2, the position point represented by the initial iteration coordinate is A0 and the initial iteration direction is the direction indicated by arrow 410. In the first round of iterative processing, the position point represented by the fused coordinate is A1, so the direction from A0 to A1 can be taken as the target iteration direction of the first round, and the position point B0 at distance delta from A1 on the ray A0A1 can be determined, with its coordinates taken as the target iteration coordinate of the first round. Next, the coordinates of B0 can be treated as the new initial iteration coordinate and the direction from A0 to A1 as the new initial iteration direction, and the second round can be performed on that basis. Assuming that the second round determines the fused-coordinate position point B1, the direction from B0 to B1 can be taken as the second round's target iteration direction, and the position point C0 at distance delta from B1 on the ray B0B1 can be determined, with its coordinates taken as the second round's target iteration coordinate. The coordinates of C0 then become the new initial iteration coordinate and the direction from B0 to B1 the new initial iteration direction for the third round. In this way, M rounds of iterative processing yield the M target iteration coordinates above.
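Putting the rounds together, a compact sketch of the whole loop; uniform weights, a rectangular window, and stopping on an empty window are simplifying assumptions (the weighting and stop conditions of the other embodiments can be substituted in):

    import numpy as np

    def iterate_fusion(sequences, start, direction, length=2.0, width=1.0,
                       delta=0.2, max_rounds=100):
        """One round: window -> coordinate set -> fused coordinate -> new
        direction -> step of `delta`; repeat until a window comes up empty."""
        def in_window(p, anchor, d):
            n = np.array([-d[1], d[0]])
            rel = p - anchor
            return 0.0 <= rel @ d <= length and abs(rel @ n) <= width / 2.0

        fitted = [np.asarray(start, dtype=float)]
        d = np.asarray(direction, dtype=float)
        d = d / np.linalg.norm(d)
        for _ in range(max_rounds):
            anchor = fitted[-1]
            hits = [c for seq in sequences for c in seq if in_window(c, anchor, d)]
            if not hits:                       # continuation condition fails: stop
                break
            fused = np.mean(hits, axis=0)      # A1: fused coordinate of this window
            step = fused - anchor
            if np.linalg.norm(step) == 0.0:    # degenerate window: no direction
                break
            d = step / np.linalg.norm(step)    # target iteration direction
            fitted.append(fused + delta * d)   # B0: target iteration coordinate
        return np.stack(fitted)

    seq_a = np.array([[0.0, 0.00], [0.5, 0.05], [1.0, 0.10], [1.5, 0.15], [2.0, 0.20]])
    seq_b = seq_a + np.array([0.0, 0.1])       # a second, slightly offset observation
    print(iterate_fusion([seq_a, seq_b], start=[0.0, 0.05], direction=[1.0, 0.0]))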
In each round of iterative processing, the preset distance delta is used when determining the target iteration coordinate, and setting its value reasonably matters: a value that is too large can cause fitting distortion and loss of smoothness, harming the reliability of the finally determined target fitting coordinate sequence, while a value that is too small slows convergence. Experiments suggest that, if the observed object is a lane line, the preset distance may take any value between 0.1 meter and 0.3 meter, for example 0.1, 0.15, 0.2, 0.25, or 0.3 meter.
Fig. 7 is a flow chart of a method of determining a target fitting coordinate sequence provided by some exemplary embodiments of the present disclosure. The method shown in fig. 7 may include steps 710, 720, 730, and 740. Alternatively, a combination of steps 710 through 740 may serve as an alternative embodiment of determining the target fitting coordinate sequence associated with the observed object in step 140 of the present disclosure.
At step 710, a first relationship between the coordinate set and a preset iteration continuation condition is determined.
Alternatively, the first relationship may be used to characterize whether the coordinate set satisfies the preset iteration continuation condition. For example, the first relationship may take a binary form: a value of 1 indicates that the coordinate set satisfies the preset iteration continuation condition, and a value of 0 indicates that it does not.
In some optional embodiments of the present disclosure, step 710 may include:
determining a matching relation between a set type of the coordinate set and a preset set type;
for each coordinate in the at least two reference coordinate sequences, determining a numerical relationship between the number of first window areas in which the position point represented by that coordinate falls and a preset number;
based on the matching relationship and the numerical relationship, a first relationship between the coordinate set and a preset iteration continuation condition is determined.
Optionally, the preset set type may include an empty-set type. The matching relationship between the set type of the coordinate set and the preset set type can be used to characterize whether the coordinate set is empty. Like the first relationship, the matching relationship may also take a binary form.
Alternatively, each coordinate in the N reference coordinate sequences may carry a count value, initially 0. In each round of iterative processing, for each coordinate in the N reference coordinate sequences it can be determined whether its position point falls inside the first window area of that round; if so, the coordinate's count value is incremented by 1, and otherwise it remains unchanged. After the round ends, the count value of a coordinate gives the number of first window areas in which its position point has fallen; it may be viewed as the number of times the coordinate has been visited. The numerical relationship between this number and the preset number may be their order relation (greater than, equal to, or less than).
Optionally, determining the first relationship between the coordinate set and the preset iteration continuation condition based on the matching relationship and the numerical relationship may be implemented in various ways. For example, if the matching relationship indicates that the coordinate set is empty, and/or some numerical relationships indicate counts greater than the preset number, a first relationship characterizing that the coordinate set does not satisfy the preset iteration continuation condition may be determined. Conversely, if the matching relationship indicates that the coordinate set is not empty and no numerical relationship indicates a count greater than the preset number, a first relationship characterizing that the coordinate set satisfies the preset iteration continuation condition may be determined.
In this way, the first relationship between the coordinate set and the preset iteration continuation condition can be reasonably determined with reference to the set type of the coordinate set and the number of times each coordinate has been visited.
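A sketch of this continuation test, assuming per-coordinate visit counters maintained as described (all names are illustrative):

    from collections import defaultdict

    def should_continue(coord_set, visit_counts, preset_number):
        """Continue only if the window's coordinate set is non-empty and no
        coordinate has been visited more than `preset_number` times."""
        if not coord_set:                     # empty-set type: stop iterating
            return False
        return all(v <= preset_number for v in visit_counts.values())

    visit_counts = defaultdict(int)
    # In each round, increment visit_counts[i] for every coordinate index i
    # whose position point falls inside that round's first window area.
    visit_counts[0] += 1
    print(should_continue([(0.0, 0.0)], visit_counts, preset_number=3))   # True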
Step 720, counting, based on the first relationship, the number and/or proportion of coordinates in the at least two reference coordinate sequences whose position points are distributed outside the first window areas, to obtain a statistical result.
Optionally, in response to the first relation indicating that the coordinate set satisfies the preset iteration continuation condition, the most recently determined target iteration coordinate may be taken as a new initial iteration coordinate, the most recently determined target iteration direction may be taken as a new initial iteration direction, and the next round of iterative processing may be performed on that basis.
Optionally, in response to the first relation indicating that the coordinate set does not satisfy the preset iteration continuation condition, meaning that no further round of iterative processing is required, the number and/or proportion of coordinates in the N reference coordinate sequences whose represented position points have never fallen within any first window area may be counted; the resulting statistical result may include the counted target number and/or target proportion.
Step 730, determining a second relation between the statistical result and a preset abnormal condition.
Alternatively, the second relation may be used to represent whether the statistical result satisfies the preset abnormal condition. Like the first relation, the second relation may take a binary form.
Here, if the target number is greater than a preset number and/or the target proportion is greater than a preset proportion, a second relation indicating that the statistical result satisfies the preset abnormal condition may be determined. If the target number is less than or equal to the preset number and the target proportion is less than or equal to the preset proportion, a second relation indicating that the statistical result does not satisfy the preset abnormal condition may be determined.
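This threshold test is simple enough to state directly; a hedged sketch (the threshold names are illustrative, not taken from the disclosure):

```python
def is_abnormal(target_number: int, target_proportion: float,
                preset_number: int, preset_proportion: float) -> bool:
    """True when the coverage statistics indicate a fitting/matching error."""
    return (target_number > preset_number
            or target_proportion > preset_proportion)
```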
Step 740, determining a target fit coordinate sequence associated with the observed object based on the second relationship.
Optionally, in response to the second relation indicating that the statistical result does not satisfy the preset abnormal condition, a target fitting coordinate sequence associated with the observed object may be determined. For example, as introduced above, the target fitting coordinate sequence may be determined by combining the initial iteration coordinate with the M target iteration coordinates.
Optionally, in response to the second relation indicating that the statistical result satisfies the preset abnormal condition, prompt information indicating a fitting error may be generated and output to prompt a manual check of the matching of the N reference coordinate sequences. Here, the cause of the fitting error may be that the N reference coordinate sequences do not correspond to the same observed object. For example, among the N reference coordinate sequences, some correspond to a lane line while others correspond to a physical road boundary line.
In the embodiment shown in fig. 7, condition judgments involving the first relation and the second relation may be performed, and the target fitting coordinate sequence is determined only when the judgment results meet the requirements. This helps avoid using N reference coordinate sequences that do not correspond to the same observed object to determine the target fitting coordinate sequence, thereby ensuring the rationality and reliability of the determined target fitting coordinate sequence.
Fig. 8-1 is a flow chart of a method of determining initial iteration coordinates and initial iteration directions provided by some exemplary embodiments of the present disclosure. The method shown in fig. 8-1 may include step 810, step 820, and step 830. Alternatively, a combination of steps 810 through 830 may be used as an alternative embodiment of step 120 of the present disclosure.
Step 810, determining, for each reference coordinate sequence, a second direction vector pointing from the position point represented by the second coordinate in the reference coordinate sequence to the position point represented by the first coordinate.
Alternatively, for each of the N reference coordinate sequences, a preset number of the first-ranked coordinates in the reference coordinate sequence may be determined, for example the first two coordinates. That is, the preset number of coordinates corresponding to the reference coordinate sequence include the first coordinate and the second coordinate in the sequence; on this basis, the second direction vector pointing from the position point represented by the second coordinate to the position point represented by the first coordinate can be determined. The second direction vector may be determined in the same manner as the first direction vector described above, which is not repeated here.
Step 820, for each reference coordinate sequence, determining a second window area based on the first coordinate and the second direction vector corresponding to the reference coordinate sequence, and the preset window feature.
Optionally, determining the second window area based on the first coordinate and the second direction vector corresponding to the reference coordinate sequence and the preset window feature may include:
determining a window shape and a window size based on preset window features;
determining a window expansion direction based on the second direction vector;
And determining a second window area based on the first coordinate corresponding to the reference coordinate sequence, the window shape, the window size, and the window expansion direction.
It should be noted that the second window area may be determined by referring to the foregoing description of how the first window area is determined, which is not repeated here.
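For intuition, a rectangular window expanded from an anchor point along a direction can be sketched as follows (a minimal illustration assuming a 2D rectangular window; the disclosure leaves the actual shape and size to the preset window feature):

```python
import math
from typing import Tuple

Vec = Tuple[float, float]

def in_window(point: Vec, anchor: Vec, direction: Vec,
              length: float, width: float) -> bool:
    """Test whether `point` falls in a rectangular window that starts at
    `anchor` and expands a distance `length` along `direction`, with a
    total width `width` perpendicular to that axis."""
    norm = math.hypot(direction[0], direction[1])
    ux, uy = direction[0] / norm, direction[1] / norm  # unit axis
    dx, dy = point[0] - anchor[0], point[1] - anchor[1]
    along = dx * ux + dy * uy    # projection onto the window axis
    across = -dx * uy + dy * ux  # signed perpendicular offset
    return 0.0 <= along <= length and abs(across) <= width / 2.0
```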
Step 830, determining an initial iteration coordinate and an initial iteration direction for the observed object based on the second window region and the at least two reference coordinate sequences.
In some alternative embodiments of the present disclosure, as shown in fig. 8-2, step 830 may include step 8301, step 8303, and step 8305.
Step 8301, screening, from the at least two reference coordinate sequences, the reference coordinate sequence whose corresponding second window area contains no position point represented by any coordinate of the at least two reference coordinate sequences.
Optionally, each second window area may be compared with the N reference coordinate sequences, so as to screen out the reference coordinate sequence whose corresponding second window area contains no position point represented by any coordinate of the N reference coordinate sequences.
Step 8303, determining initial iteration coordinates for the observed object based on the first coordinates corresponding to the screened reference coordinate sequence.
Alternatively, the first coordinate corresponding to the screened reference coordinate sequence may be used as the initial iteration coordinate for the observed object.
Step 8305, determining an initial iteration direction for the observed object based on the inverse of the second direction vector corresponding to the screened reference coordinate sequence.
Alternatively, the direction indicated by the inverse of the second direction vector corresponding to the screened reference coordinate sequence may be used as the initial iteration direction for the observed object.
In some embodiments, the coordinates of a position point relatively close to the position point represented by the first coordinate corresponding to the screened reference coordinate sequence may be used as the initial iteration coordinate, and/or a direction whose included angle with the direction indicated by the inverse of the second direction vector corresponding to the screened reference coordinate sequence is smaller than a preset angle may be used as the initial iteration direction.
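Steps 8301 through 8305 can be combined into a small search routine. The following sketch is one hypothetical structure under the disclosure's description (it reuses the in_window helper from the earlier sketch; the argument layout is an assumption):

```python
def pick_start(sequences, windows):
    """Select the reference coordinate sequence whose second window area
    contains no point from any sequence, then return its first coordinate
    as the initial iteration coordinate and the inverse of its second
    direction vector as the initial iteration direction.

    `sequences`: list of coordinate lists (each ordered first coordinate
    first); `windows`: parallel list of (anchor, direction, length, width)
    tuples describing each sequence's second window area.
    """
    all_points = [p for seq in sequences for p in seq]
    for seq, (anchor, direction, length, width) in zip(sequences, windows):
        if not any(in_window(p, anchor, direction, length, width)
                   for p in all_points):
            first, second = seq[0], seq[1]
            # the second direction vector points second -> first, so its
            # inverse points first -> second (back along the curve)
            init_dir = (second[0] - first[0], second[1] - first[1])
            return first, init_dir
    return None  # no sequence with an empty second window was found
```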
In one example, the N reference coordinate sequences may be two reference coordinate sequences, denoted reference coordinate sequence 1 and reference coordinate sequence 2. As shown in fig. 9, the position points represented by the first and second coordinates in reference coordinate sequence 1 are A0 and E0, respectively, and the position points represented by the first and second coordinates in reference coordinate sequence 2 are B0 and F0, respectively. The second direction vector corresponding to reference coordinate sequence 1 is then the direction vector pointing from E0 to A0, and the second direction vector corresponding to reference coordinate sequence 2 is the direction vector pointing from F0 to B0. Next, a second window area may be determined for each of reference coordinate sequence 1 and reference coordinate sequence 2. Assuming that the second window area corresponding to reference coordinate sequence 1 is rectangular area 910, the second window area corresponding to reference coordinate sequence 2 is rectangular area 920, no coordinates of either reference coordinate sequence fall within rectangular area 910, and some coordinates of the two reference coordinate sequences fall within rectangular area 920, the initial iteration coordinate can be determined based on the first coordinate of reference coordinate sequence 1, and the initial iteration direction can be determined based on the inverse of the second direction vector corresponding to reference coordinate sequence 1. For example, the coordinates of A0 may be taken as the initial iteration coordinate, and the direction pointing from A0 to E0 may be taken as the initial iteration direction.
In the embodiment shown in fig. 8-2, by referring to the second window areas, one reference coordinate sequence can be reasonably selected from the N reference coordinate sequences, and the initial iteration coordinate and initial iteration direction can be reasonably determined from the first coordinate and second direction vector corresponding to that sequence. The iteration start point (the position point represented by the initial iteration coordinate) is thus found adaptively, so unordered input can be handled. For example, when the observed object is the host vehicle, even though the N reference coordinate sequences correspond to different time periods, there is no need to care which reference coordinate sequence corresponds to the time period with the earliest starting time point.
Of course, the implementation of step 830 is not limited thereto. For example, after one reference coordinate sequence is screened from the N reference coordinate sequences, the average of its first and second coordinates may be determined; a position point separated by a preset distance from the position point represented by that average, along the direction indicated by the corresponding second direction vector, may then be determined, and the coordinates of that position point may be used as the initial iteration coordinate. For another example, the initial iteration direction may be determined by combining the inverses of the second direction vectors corresponding to each of the N reference coordinate sequences.
In the embodiment shown in fig. 8-1, by determining the second direction vectors and second window areas in turn and combining the N reference coordinate sequences, an effective reference can be provided for determining the initial iteration coordinate and initial iteration direction. Suitable values for both can thus be determined quickly and adaptively, so that the iterative algorithm can be run from them.
Fig. 10 is a flow chart of a method of determining a target fit coordinate sequence provided by further exemplary embodiments of the present disclosure. The method shown in fig. 10 may include step 1010, step 1020, and step 1030. Alternatively, a combination of steps 1010 through 1030 may be used as an alternative embodiment of step 140 of the present disclosure.
Step 1010, determining an intermediate fit coordinate sequence associated with the observed object based on the initial iteration coordinates and the target iteration coordinates.
Alternatively, the initial iteration coordinates and the M target iteration coordinates may be sequentially arranged to form a coordinate sequence, and the coordinate sequence may be used as an intermediate fitting coordinate sequence.
Of course, the manner of determining the intermediate fitting coordinate sequence is not limited thereto. For example, after the initial iterative coordinates and the M target iterative coordinates are sequentially arranged to form a coordinate sequence, some optimization processing may be performed on the coordinate sequence, and the optimization result may be used as an intermediate fitting coordinate sequence.
Step 1020, performing point sampling on the intermediate fitting coordinate sequence in an equidistant sampling manner to obtain a sampling point sequence.
Alternatively, the sampling start point and the sampling interval of the equidistant sampling manner may be set according to practical situations, which is not limited in the present disclosure. By applying the equidistant sampling mode, a plurality of sampling points can be obtained, and a sampling point sequence can be formed by arranging the plurality of sampling points in sequence.
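A minimal arc-length resampling sketch (assuming 2D points and a fixed spacing; the disclosure does not fix the sampling start point or interval, so both are assumptions here):

```python
import math

def resample_equidistant(points, spacing):
    """Resample a polyline (list of (x, y) tuples) so that consecutive
    output points lie `spacing` apart along the curve's arc length."""
    result = [points[0]]
    residual = 0.0  # arc length already covered toward the next sample
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        dist = spacing - residual  # where the next sample falls on this segment
        while dist <= seg:
            t = dist / seg
            result.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            dist += spacing
        residual = seg - (dist - spacing)
    return result
```

For example, resample_equidistant([(0, 0), (3, 0)], 1.0) yields points at x = 0, 1, 2, 3.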
Step 1030, determining a target fit coordinate sequence associated with the observed object based on the sequence of sample points.
Alternatively, the sampling point sequence may be directly used as the target fitting coordinate sequence, or some optimization processing may be performed on the sampling point sequence, and the optimization result may be used as the target fitting coordinate sequence.
In the embodiment shown in fig. 10, in the process of determining the target fitting coordinate sequence based on the initial iteration coordinate and the target iteration coordinates, an equidistant sampling step can be introduced, which helps make the final target fitting coordinate sequence more regular.
In some alternative examples, as shown in fig. 11, the following processes may be included in embodiments of the present disclosure:
(A) The iteration start point and direction (corresponding to the initial iteration coordinate and initial iteration direction determined above) are determined. Optionally, the iteration start point and direction may be searched for by a search module.
(B) A window is expanded along the iteration direction with the iteration point as the center, and all points falling within the window (corresponding to the first window area above) are found. Optionally, a kernel density estimation (KDE) method may be introduced, using a kernel function to calculate a kernel weight for each in-window point (corresponding to the first weight above).
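A common kernel choice is Gaussian; a hedged sketch (the disclosure names KDE but not a specific kernel, so the Gaussian form and its bandwidth are assumptions):

```python
import math

def kernel_weight(point, iter_point, bandwidth):
    """Gaussian kernel weight: in-window points closer to the current
    iteration point contribute more (the first weight above)."""
    d2 = ((point[0] - iter_point[0]) ** 2
          + (point[1] - iter_point[1]) ** 2)
    return math.exp(-d2 / (2.0 * bandwidth ** 2))
```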
(C) The iteration termination condition is judged: it is determined whether the sliding-window iteration can be terminated (i.e., whether running of the iterative algorithm can end). Optionally, the number of times each coordinate in the N reference coordinate sequences has been accessed may be maintained, and the sliding-window iteration is terminated when either of the following conditions is met: 1. the number of points falling within the window expanded along the iteration point and direction is 0 (corresponding to the case where the set type of the coordinate set is the empty-set type), indicating that fitting is complete; 2. the number of accesses to some points exceeds a threshold (corresponding to the case where at least part of the numerical relations indicate counts greater than the preset number), indicating that the input data forms a loop, so fitting is complete and iteration should terminate.
(D) For each coordinate in the N reference coordinate sequences, a prior weight (corresponding to the second weight above) is assigned according to information such as source type, confidence value, and freshness value, and the prior weight and the kernel weight are fused to obtain a fusion weight (corresponding to the target weight above). All points falling within the window are weighted with the fusion weights, and the weighted center of gravity (corresponding to the fusion coordinate above) is calculated.
(E) The ray direction from the iteration point to the weighted center of gravity is taken as the new iteration direction, and a fixed step δ is moved from the weighted center of gravity along the new iteration direction to obtain the new iteration point. Optionally, the iteration start point and the new iteration point obtained in each round of iterative processing can be appended to a point list and stored as the fitting result.
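Steps (D) and (E) can be sketched together as follows (a minimal illustration assuming 2D points and that `weights` already holds the fused prior-times-kernel weights):

```python
import math

def next_iteration_point(iter_point, window_points, weights, step):
    """Fuse the in-window points into a weighted center of gravity, then
    move a fixed step `step` (the delta of step (E)) from that center
    along the ray from the current iteration point through the center."""
    total = sum(weights)
    cx = sum(w * x for w, (x, _) in zip(weights, window_points)) / total
    cy = sum(w * y for w, (_, y) in zip(weights, window_points)) / total
    dx, dy = cx - iter_point[0], cy - iter_point[1]
    norm = math.hypot(dx, dy)
    new_dir = (dx / norm, dy / norm)  # new iteration direction
    new_point = (cx + step * new_dir[0], cy + step * new_dir[1])
    return new_point, new_dir, (cx, cy)
```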
(F) Based on the maintained access counts, whether the fitting covers all input data is evaluated in dimensions such as the number (corresponding to the target number above) and the proportion (corresponding to the target proportion above) of unvisited points (corresponding to the judgment involving the preset abnormal condition), to obtain an evaluation result. In an extreme scenario, a matching error may occur: for instance, two curves spaced tens of meters apart are matched together and fusion is attempted; in that case none of the points on one curve is visited, the evaluation returns a failure result, and re-matching of the two curves can be required (corresponding to outputting prompt information indicating a matching error to prompt a manual check of the matching of the N reference coordinate sequences, as described above).
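The coverage statistics themselves reduce to counting zero-visit points; a hedged sketch:

```python
def coverage_stats(visit_counts):
    """Return the number and proportion of input points never visited by
    any window (visit count 0), i.e. the target number and proportion."""
    total = len(visit_counts)
    unvisited = sum(1 for c in visit_counts.values() if c == 0)
    return unvisited, (unvisited / total if total else 0.0)
```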
(G) A post-processing operation is performed; resampling may be done during post-processing, for example in an equidistant sampling manner, to ensure that any two adjacent points in the output fitted point sequence (corresponding to the target fitting coordinate sequence) are equally spaced.
In summary, with the embodiments of the present disclosure, non-parametric fitting can be performed on the N reference coordinate sequences during driving to obtain the target fitting coordinate sequence, and the accuracy, reliability, and integrity of that sequence can be better ensured. The target fitting coordinate sequence is then processed in a manner adapted to the object type of the observed object, which helps ensure the application effect of the data collected by the sensor device. In addition, with the embodiments of the present disclosure, the iteration start point can be found adaptively, unordered input can be handled, and the use of a window makes the fitting resistant to interference from outliers, so the method is highly robust and performs well in high-noise scenarios.
Exemplary apparatus
Fig. 12 is a schematic structural view of an apparatus for fusion processing of sensing data of a vehicle sensor device according to some exemplary embodiments of the present disclosure. The apparatus shown in fig. 12 includes a first determination module 1210, a second determination module 1220, a third determination module 1230, a fourth determination module 1240, and a processing module 1250.
A first determining module 1210 for determining at least two reference coordinate sequences associated with an observed object of a sensor device provided to a vehicle based on data collected by the sensor device;
A second determining module 1220, configured to determine an initial iteration coordinate and an initial iteration direction for the observed object based on the at least two reference coordinate sequences determined by the first determining module 1210;
A third determining module 1230, configured to determine a target iteration coordinate based on the at least two reference coordinate sequences determined by the first determining module 1210, the initial iteration coordinate determined by the second determining module 1220, and the initial iteration direction determined by the second determining module 1220;
A fourth determining module 1240, configured to determine a target fitting coordinate sequence associated with the observed object based on the initial iteration coordinate determined by the second determining module 1220 and the target iteration coordinate determined by the third determining module 1230;
The processing module 1250 is configured to process the target fitting coordinate sequence determined by the fourth determining module 1240 according to a processing manner adapted to the object type of the observed object.
In some alternative examples, as shown in fig. 13, the third determining module 1230 includes:
A first determining submodule 1310, configured to determine a first window area based on the initial iteration coordinate determined by the second determining module 1220, the initial iteration direction determined by the second determining module 1220, and the preset window feature;
a second determining submodule 1320, configured to determine a coordinate set composed of those coordinates in the at least two reference coordinate sequences determined by the first determining module 1210 whose represented position points are distributed in the first window area determined by the first determining submodule 1310;
A third determining submodule 1330, configured to determine the iteration coordinates of the target based on the set of coordinates determined by the second determining submodule 1320.
In some alternative examples, third determination submodule 1330 includes:
the fusion unit is configured to fuse each coordinate in the coordinate set determined by the second determination submodule 1320 to obtain a fused coordinate;
a first determining unit, configured to determine a target iteration direction based on the fusion coordinate obtained by the fusion unit and the initial iteration coordinate determined by the second determining module 1220;
And a second determining unit, configured to determine the target iteration coordinate based on the fusion coordinate obtained by the fusion unit and the target iteration direction determined by the first determining unit.
In some alternative examples, the second determining unit includes:
The first determining subunit is used for determining a target position point which is separated from the position point represented by the fusion coordinate obtained by the fusion unit by a preset distance along the target iteration direction determined by the first determining unit;
and the second determining subunit is used for determining the target iteration coordinates based on the coordinates of the target position points determined by the first determining subunit.
In some alternative examples, the first determining unit includes:
A third determining subunit, configured to determine a first direction vector pointing from the position point represented by the initial iteration coordinate determined by the second determining module 1220 to the position point represented by the fusion coordinate obtained by the fusion unit;
And the fourth determining subunit is used for determining the target iteration direction based on the first direction vector determined by the third determining subunit.
In some alternative examples, the fusion unit includes:
A fifth determining subunit, configured to determine distances between the position points represented by the respective coordinates in the coordinate set determined by the second determining submodule 1320 and the position point represented by the initial iteration coordinate determined by the second determining module 1220;
A sixth determining subunit, configured to determine, respectively, a coordinate feature of each coordinate in the coordinate set determined by the second determining submodule 1320 in a preset dimension;
a seventh determining subunit, configured to determine weights corresponding to the respective coordinates in the coordinate set determined by the second determining submodule 1320 based on the distances determined by the fifth determining subunit and the coordinate features determined by the sixth determining subunit;
and a fusion subunit, configured to fuse the coordinates in the coordinate set determined by the second determining submodule 1320 based on the weights determined by the seventh determining subunit, to obtain the fusion coordinate.
In some alternative examples, as shown in fig. 14, the fourth determination module 1240 includes:
A fourth determining submodule 1410, configured to determine a first relation between the coordinate set determined by the second determining submodule 1320 and a preset iteration continuation condition;
A statistics submodule 1420, configured to count, based on the first relation determined by the fourth determining submodule 1410, the number and/or proportion of coordinates in the at least two reference coordinate sequences determined by the first determining module 1210 whose represented position points are distributed outside the first window area determined by the first determining submodule 1310, to obtain a statistical result;
a fifth determining submodule 1430, configured to determine a second relation between the statistical result obtained by the statistics submodule 1420 and a preset abnormal condition;
a sixth determination submodule 1440 is configured to determine a target fit coordinate sequence associated with the observed object based on the second relationship determined by the fifth determination submodule 1430.
In some alternative examples, fourth determination submodule 1410 includes:
A third determining unit, configured to determine a matching relationship between the set type of the coordinate set determined by the second determining submodule 1320 and a preset set type;
A fourth determining unit, configured to determine, for each coordinate in the at least two reference coordinate sequences determined by the first determining module 1210, a numerical relation between the number of first window areas in which the position point represented by the coordinate falls and a preset number;
And a fifth determining unit, configured to determine, based on the matching relation determined by the third determining unit and the numerical relation determined by the fourth determining unit, a first relation between the coordinate set determined by the second determining submodule 1320 and a preset iteration continuation condition.
In some alternative examples, the first determination submodule 1310 includes:
a sixth determining unit configured to determine a window shape and a window size based on a preset window feature;
a seventh determining unit, configured to determine a window expansion direction based on the initial iteration direction determined by the second determining module 1220;
An eighth determining unit, configured to determine the first window area based on the initial iteration coordinate determined by the second determining module 1220, the window shape determined by the sixth determining unit, the window size determined by the sixth determining unit, and the window expansion direction determined by the seventh determining unit.
In some alternative examples, as shown in fig. 15, the second determining module 1220 includes:
a seventh determining sub-module 1510, configured to determine, for each reference coordinate sequence determined by the first determining module 1210, a second direction vector pointing from the position point represented by the second coordinate in the reference coordinate sequence to the position point represented by the first coordinate;
an eighth determining submodule 1520, configured to determine, for each reference coordinate sequence determined by the first determining module 1210, a second window area based on a first coordinate and a second direction vector corresponding to the reference coordinate sequence and a preset window feature;
A ninth determining sub-module 1530 for determining an initial iteration coordinate and an initial iteration direction for the observed object based on the second window area determined by the eighth determining sub-module 1520 and the at least two reference coordinate sequences determined by the first determining module 1210.
In some alternative examples, the ninth determination submodule 1530 includes:
A ninth determining unit, configured to screen, from the at least two reference coordinate sequences determined by the first determining module 1210, the reference coordinate sequence whose corresponding second window area, determined by the eighth determining sub-module 1520, contains no position point represented by any coordinate of the at least two reference coordinate sequences;
A tenth determining unit, configured to determine an initial iteration coordinate for the observed object based on the first coordinate corresponding to the reference coordinate sequence screened by the ninth determining unit;
An eleventh determining unit, configured to determine an initial iteration direction for the observed object based on the inverse of the second direction vector corresponding to the reference coordinate sequence screened by the ninth determining unit.
In some alternative examples, as shown in fig. 16, the fourth determination module 1240 includes:
A tenth determination submodule 1610, configured to determine an intermediate fitting coordinate sequence associated with the observed object based on the initial iteration coordinate determined by the second determination module 1220 and the target iteration coordinate determined by the third determination module 1230;
the sampling submodule 1620 is configured to perform point sampling on the intermediate fitting coordinate sequence determined by the tenth determination submodule 1610 according to an equidistant sampling manner, so as to obtain a sampling point sequence;
An eleventh determining submodule 1630 is configured to determine a target fit coordinate sequence associated with the observed object based on the sample point sequence obtained by the sampling submodule 1620.
In the apparatus of the present disclosure, the various optional embodiments, implementations, and examples described above may be flexibly selected and combined as needed to achieve the corresponding functions and effects, and these combinations are not enumerated one by one here.
The beneficial technical effects corresponding to the exemplary embodiments of the present apparatus may refer to the corresponding beneficial technical effects of the foregoing exemplary method section, and will not be described herein.
Exemplary electronic device
Fig. 17 illustrates a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device 1700 includes one or more processors 1710 and a memory 1720.
The processor 1710 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in the electronic device 1700 to perform desired functions.
Memory 1720 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory can include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1710 may execute these program instructions to implement the methods of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device 1700 may further include: input devices 1730 and output devices 1740, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input device 1730 may include, for example, a keyboard, a mouse, and the like.
The output device 1740 may output various information to the outside, which may include, for example, a display, a speaker, a printer, and a communication network and a remote output apparatus connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 1700 that are relevant to the present disclosure are shown in fig. 17, with components such as buses, input/output interfaces, and the like omitted. In addition, electronic device 1700 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform steps in a method according to various embodiments of the present disclosure described in the "exemplary methods" section of the present description.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform steps in a method according to various embodiments of the present disclosure described in the above "exemplary method" section of the present disclosure.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, but the advantages, benefits, effects, etc. mentioned in this disclosure are merely examples and are not to be considered as necessarily possessed by the various embodiments of the present disclosure. The specific details disclosed herein are merely for purposes of example and understanding, and are not intended to limit the disclosure to the specific details described above.
Various modifications and alterations to this disclosure may be made by those skilled in the art without departing from the spirit and scope of the application. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (12)

1. A method of fusion processing of sensory data of a vehicle sensor device, comprising:
Determining at least two reference coordinate sequences associated with an observed object of a sensor device provided to a vehicle based on data acquired by the sensor device;
Determining initial iteration coordinates and initial iteration directions for the observed object based on at least two of the reference coordinate sequences;
determining a target iteration coordinate based on at least two reference coordinate sequences, the initial iteration coordinate and the initial iteration direction;
determining a target fit coordinate sequence associated with the observed object based on the initial iteration coordinate and the target iteration coordinate;
and processing the target fitting coordinate sequence according to a processing mode matched with the object type of the observed object.
2. The method of claim 1, wherein the determining target iteration coordinates based on at least two of the reference coordinate sequences, the initial iteration coordinates, and the initial iteration direction comprises:
Determining a first window area based on the initial iteration coordinates, the initial iteration direction and preset window features;
determining a coordinate set composed of coordinates in the at least two reference coordinate sequences whose represented position points are distributed in the first window area;
and determining the iteration coordinates of the target based on the coordinate set.
3. The method of claim 2, wherein the determining target iteration coordinates based on the set of coordinates comprises:
fusing all coordinates in the coordinate set to obtain fused coordinates;
determining a target iteration direction based on the fusion coordinates and the initial iteration coordinates;
and determining target iteration coordinates based on the fusion coordinates and the target iteration direction.
4. The method of claim 3, wherein the determining target iteration coordinates based on the fusion coordinates and the target iteration direction comprises:
Along the target iteration direction, determining a target position point which is separated from the position point represented by the fusion coordinate by a preset distance;
and determining target iteration coordinates based on the coordinates of the target position points.
5. The method of any of claims 2-4, wherein the determining a target fit coordinate sequence associated with the observed object comprises:
Determining a first relation between the coordinate set and a preset iteration continuation condition;
Based on the first relation, counting the number and/or proportion of coordinates in the at least two reference coordinate sequences whose represented position points are distributed outside the first window area, to obtain a statistical result;
determining a second relation between the statistical result and a preset abnormal condition;
a target fit coordinate sequence associated with the observed object is determined based on the second relationship.
6. The method of claim 5, wherein the determining a first relation between the set of coordinates and a preset iteration continuation condition comprises:
determining a matching relation between the set type of the coordinate set and a preset set type;
For each coordinate in at least two reference coordinate sequences, determining a numerical relation between the number of first window areas in which the position point represented by the coordinate falls and a preset number;
And determining a first relation between the coordinate set and a preset iteration continuation condition based on the matching relation and the numerical relation.
7. The method of any of claims 2-4, wherein the determining a first window region based on the initial iteration coordinates, the initial iteration direction, and a preset window feature comprises:
determining a window shape and a window size based on preset window features;
determining a window expansion direction based on the initial iteration direction;
a first window region is determined based on the initial iteration coordinates, the window shape, the window size, and the window expansion direction.
8. The method of any of claims 1-4, wherein the determining initial iteration coordinates and initial iteration directions for the observed object based on at least two of the reference coordinate sequences comprises:
Determining, for each of the reference coordinate sequences, a second direction vector pointing from the position point represented by the second coordinate in the reference coordinate sequence to the position point represented by the first coordinate;
Determining a second window area according to the first coordinate and the second direction vector corresponding to each reference coordinate sequence and preset window characteristics;
an initial iteration coordinate and an initial iteration direction for the observed object are determined based on the second window region and at least two of the reference coordinate sequences.
9. The method of any of claims 1-4, wherein the determining a target fit coordinate sequence associated with the observed object based on the initial iteration coordinates and the target iteration coordinates comprises:
determining an intermediate fit coordinate sequence associated with the observed object based on the initial iteration coordinate and the target iteration coordinate;
According to an equidistant sampling mode, carrying out point sampling on the intermediate fitting coordinate sequence to obtain a sampling point sequence;
A target fit coordinate sequence associated with the observed object is determined based on the sequence of sampling points.
10. An apparatus for fusion processing of sensory data of a vehicle sensor device, comprising:
A first determination module for determining at least two reference coordinate sequences associated with an observation object of a sensor device provided to a vehicle based on data acquired by the sensor device;
The second determining module is used for determining initial iteration coordinates and initial iteration directions for the observed object based on at least two reference coordinate sequences determined by the first determining module;
The third determining module is used for determining a target iteration coordinate based on at least two reference coordinate sequences determined by the first determining module, the initial iteration coordinate determined by the second determining module and the initial iteration direction determined by the second determining module;
A fourth determining module, configured to determine a target fitting coordinate sequence associated with the observed object based on the initial iteration coordinate determined by the second determining module and the target iteration coordinate determined by the third determining module;
And the processing module is used for processing the target fitting coordinate sequence determined by the fourth determining module according to a processing mode matched with the object type of the observed object.
11. A computer-readable storage medium storing a computer program for executing the method of fusion processing of sensory data of a vehicle sensor device according to any one of the preceding claims 1 to 9.
12. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method for fusion processing of sensory data of a vehicle sensor device according to any one of the preceding claims 1-9.
CN202410264593.6A 2024-03-07 2024-03-07 Method and device for fusion processing of sensing data of vehicle sensor equipment Pending CN118133222A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410264593.6A CN118133222A (en) 2024-03-07 2024-03-07 Method and device for fusion processing of sensing data of vehicle sensor equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410264593.6A CN118133222A (en) 2024-03-07 2024-03-07 Method and device for fusion processing of sensing data of vehicle sensor equipment

Publications (1)

Publication Number Publication Date
CN118133222A true CN118133222A (en) 2024-06-04

Family

ID=91229881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410264593.6A Pending CN118133222A (en) 2024-03-07 2024-03-07 Method and device for fusion processing of sensing data of vehicle sensor equipment

Country Status (1)

Country Link
CN (1) CN118133222A (en)

Similar Documents

Publication Publication Date Title
JP7086111B2 (en) Feature extraction method based on deep learning used for LIDAR positioning of autonomous vehicles
US20230144209A1 (en) Lane line detection method and related device
CN111771207B (en) Enhanced vehicle tracking
CN109829351B (en) Method and device for detecting lane information and computer readable storage medium
CN113264066B (en) Obstacle track prediction method and device, automatic driving vehicle and road side equipment
CN114126944B (en) Rolling time domain state estimator
Sless et al. Road scene understanding by occupancy grid learning from sparse radar clusters using semantic segmentation
CN112907678B (en) Vehicle-mounted camera external parameter attitude dynamic estimation method and device and computer equipment
WO2022056770A1 (en) Path planning method and path planning apparatus
JP2021523443A (en) Association of lidar data and image data
US20210389133A1 (en) Systems and methods for deriving path-prior data using collected trajectories
WO2022217630A1 (en) Vehicle speed determination method and apparatus, device, and medium
CN114080629A (en) Object detection in point clouds
CN112651535A (en) Local path planning method and device, storage medium, electronic equipment and vehicle
CN117141520B (en) Real-time track planning method, device and equipment
CN113895460A (en) Pedestrian trajectory prediction method, device and storage medium
US20210192777A1 (en) Method, device and storage medium for positioning object
Sakic et al. Camera-LIDAR object detection and distance estimation with application in collision avoidance system
CN111707258A (en) External vehicle monitoring method, device, equipment and storage medium
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
US20230072966A1 (en) Systems and methods for providing and using confidence estimations for semantic labeling
US20210366274A1 (en) Method and device for predicting the trajectory of a traffic participant, and sensor system
CN118033622A (en) Target tracking method, device, equipment and computer readable storage medium
JP2024012160A (en) Method, apparatus, electronic device and medium for target state estimation
CN118133222A (en) Method and device for fusion processing of sensing data of vehicle sensor equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination