WO2023123325A1 - State estimation method and device - Google Patents

State estimation method and device

Info

Publication number
WO2023123325A1
Authority
WO
WIPO (PCT)
Prior art keywords
state
data
time
time period
target
Prior art date
Application number
PCT/CN2021/143595
Other languages
English (en)
French (fr)
Inventor
洪峰
张德明
黄成凯
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to PCT/CN2021/143595
Publication of WO2023123325A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 - Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/93 - Radar or analogous systems specially adapted for anti-collision purposes
    • G01S13/931 - Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion

Definitions

  • the present application relates to the field of intelligent driving, in particular to a state estimation method and device.
  • smart terminals such as smart transportation equipment, smart home equipment, and robots are gradually entering people's daily lives.
  • Sensors play a very important role in smart terminals.
  • Various sensors installed on the smart terminal, such as millimeter-wave radar, lidar, cameras, and ultrasonic radar, sense the surrounding environment while the smart terminal moves, collect data, identify moving objects, recognize static scenes such as lane lines and signs, and perform path planning in combination with navigator and map data. Sensors can detect possible dangers in advance and assist in, or even autonomously take, necessary avoidance measures, effectively increasing the safety and comfort of smart terminals.
  • Intelligent driving technology includes perception, decision-making, control and other stages.
  • the perception module is the "eye" of the intelligent vehicle.
  • the perception module receives the surrounding environment information and uses machine learning techniques to understand the environment it is in.
  • the perception module can estimate the state of other traffic participants according to the output of each sensor, and realize the tracking of other traffic participants.
  • the decision-making module uses the information output by the perception module to predict the behavior of traffic participants, so as to make behavior decisions for the own vehicle.
  • the control module calculates the lateral acceleration and longitudinal acceleration of the vehicle according to the output of the decision-making module, and controls the travel of the own vehicle.
  • the perception module performs state estimation on each traffic participant, and the state estimation result is obtained by processing the data collected by each sensor in the vehicle.
  • cost-effective sensors are usually provided in the vehicle.
  • cost-effective sensors may be less accurate in sensing their surroundings.
  • High-precision sensors can be installed on the top of the vehicle.
  • the data collected by the high-precision sensor is processed, and the processing result can be used to evaluate the accuracy of the state estimation result determined by the perception module in the vehicle.
  • the way of processing the data collected by the high-precision sensor affects the accuracy of the processing result, thereby affecting the accuracy of the evaluation result of the perception module.
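  • As an illustration of such an evaluation, once reference state estimates have been derived from the high-precision sensor, the in-vehicle perception module's output can be scored against them with a simple per-time-step error metric. The sketch below (an RMSE over aligned position estimates) is an assumption about what such an evaluation might look like, not a method specified by this application.

```python
import numpy as np

def rmse(perception_states, reference_states):
    """Root-mean-square error between the vehicle perception module's estimates
    and the reference estimates derived from the high-precision sensor,
    assuming both are arrays aligned per time step."""
    perception_states = np.asarray(perception_states, dtype=float)
    reference_states = np.asarray(reference_states, dtype=float)
    return float(np.sqrt(np.mean((perception_states - reference_states) ** 2)))

# Example: position estimates (m) from the in-vehicle perception module versus
# the reference trajectory recovered from the roof-mounted high-precision sensor.
print(rmse([10.1, 10.6, 11.2], [10.0, 10.5, 11.0]))
```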
  • the present application provides a state estimation method and device, which can improve the accuracy of state estimation results.
  • a state estimation method is provided, including: acquiring first state data of a target within a first time period, where the first state data is associated with first collected state data, the first collected state data includes data of the target collected by a first sensor during the first time period, and the data of the target is at least associated with a first state of the target during the first time period; performing state estimation in a first time sequence according to the first state data, to obtain first estimated state data corresponding to a second time period; performing state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determining, according to the first state estimation data and the second state estimation data, third state estimation data for estimating the first state.
  • two state estimations opposite in time sequence are performed on the first state data of the target, and the third state estimation data for estimating the state of the target is determined from the results of the two estimations. Part or all of the errors in the two estimation results can cancel each other, thereby improving the accuracy of the third state estimation data; a sketch of this scheme follows.
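  • The sketch below is a minimal illustration of this forward-backward idea: a standard Kalman filter is run over the same measurement window in both time orders, and the two per-time-step estimates are fused by inverse-covariance weighting (a simplified two-filter smoother). The constant-velocity model, all parameter values, and the fusion rule are assumptions chosen for the example, not the concrete implementation of this application.

```python
import numpy as np

# Minimal sketch: forward and backward Kalman passes over the same window,
# fused per time step by inverse-covariance weighting. Model and parameters
# are illustrative assumptions.

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

def kalman_pass(measurements, x0, P0, transition):
    """Run one filtering pass and return per-step state estimates and covariances."""
    x, P = x0.copy(), P0.copy()
    xs, Ps = [], []
    for z in measurements:
        # predict
        x = transition @ x
        P = transition @ P @ transition.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        xs.append(x.copy())
        Ps.append(P.copy())
    return np.array(xs), np.array(Ps)

def fuse(xf, Pf, xb, Pb):
    """Inverse-covariance-weighted fusion of forward and backward estimates."""
    fused = []
    for x1, P1, x2, P2 in zip(xf, Pf, xb, Pb):
        W = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
        fused.append(W @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2))
    return np.array(fused)

# "first state data": noisy position observations of the target over the window
z = 2.0 * np.arange(50) * dt + 0.5 * np.random.randn(50)

x0, P0 = np.zeros(2), 10.0 * np.eye(2)
x_fwd, P_fwd = kalman_pass(z, x0, P0, F)                       # first time sequence
x_bwd, P_bwd = kalman_pass(z[::-1], x0, P0, np.linalg.inv(F))  # opposite time sequence
x_bwd, P_bwd = x_bwd[::-1], P_bwd[::-1]                        # re-align to forward time

x_smoothed = fuse(x_fwd, P_fwd, x_bwd, P_bwd)  # plays the role of the "third state estimation data"
```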
  • the first time starting point of the second time period is an initial time point
  • the method further includes: determining initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; and determining the initial time point according to the first state data and the initial state data, where at the initial time point the difference between the first state and the initial estimated state is smaller than a preset value.
  • when state estimation of the target is carried out, the accuracy of the target's initial state affects the accuracy of the state estimation result.
  • if the difference between the first state and the initial estimated state is large, at least one of the first state and the initial estimated state differs greatly from the true state of the target.
  • the time point at which the difference between the first state and the initial estimated state is less than the preset value is taken as the initial time point. At the initial time point, the first state and the initial estimated state of the target can both be considered convergent, with only a small difference from the true state of the target. Therefore, performing state estimation with the initial time point as the first time starting point of the second time period makes the first estimated state data more accurate; a sketch of this selection follows.
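  • A minimal way to realize this selection, assuming the first state and the initial estimated state are 1-D sequences aligned per time step, is sketched below; the threshold value and the scalar comparison are illustrative assumptions.

```python
import numpy as np

def find_initial_time_point(first_state, initial_estimate, preset_value=0.2):
    """Return the first index at which the observed first state and the initial
    estimated state differ by less than the preset value, i.e. both can be
    considered to have converged toward the target's true state.
    Assumption: both inputs are 1-D arrays aligned per time step."""
    diff = np.abs(np.asarray(first_state) - np.asarray(initial_estimate))
    converged = np.flatnonzero(diff < preset_value)
    if converged.size == 0:
        raise ValueError("no time point satisfies the convergence condition")
    return int(converged[0])

# Example (reusing the names from the previous sketch): the forward pass over
# the window could then start from this index.
# t0 = find_initial_time_point(z, x_fwd[:, 0])
```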
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the state indicated by the first data selected from the first estimated state data is more accurate. Therefore, compared with using the first state, using the first data in the first estimated state data as the input data for state estimation at the second time starting point of the third time period according to the second time sequence makes the second state estimation data more accurate.
  • the second time starting point of the third time period is the last time point of the first time period along the first time sequence.
  • the length of the time period in which the second time period corresponding to the first estimated state data coincides with the third time period corresponding to the second state estimation data affects the accuracy of the third state estimation data. Prolonging the overlapping time period of the second time period and the third time period as much as possible can make the accuracy of the third state estimation data higher.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; and/or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the third estimated state data is determined based on the first estimated state data and the second estimated state data.
  • the time range covered by the third state estimation data may exceed the first time period.
  • fourth state estimation data is determined according to second state data corresponding to a fourth time period, where the second state data is the first state data or the third state estimation data, the fourth time period includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, the time interval does not belong to the fourth time period, and the fourth state estimation data is used to estimate a second state of the target in the time interval.
  • according to the second state data of the target in the fourth time period, the second state of the target in the time interval is determined, so that the state of the target over the continuous time region formed by the fourth time period and the time interval is obtained and the state estimation result of the target is more complete.
  • the determining the fourth state estimation data according to the second state data corresponding to the fourth time period includes: determining at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determining the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • in this way, the difference between the state of the target in the time interval and its state in the fourth time period is taken into account, so that the state of the target in the time interval represented by the determined fourth state estimation data is more consistent with the state of the target in the fourth time period, making the fourth state estimation data more reasonable and accurate; a sketch of this gap filling follows.
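  • The sketch below illustrates one way such a loss could be evaluated: candidate supplementary state sets are generated between the boundary states of the two sub-time periods, and the candidate minimizing a loss that grows with its difference parameter (plus a smoothness term) is kept. The specific loss form, the weights, and the candidate-generation scheme are assumptions for illustration only, not the patent's definition.

```python
import numpy as np

def fill_gap(start_state, end_state, n_missing, n_candidates=50, smooth_w=1.0, diff_w=2.0):
    """Pick, among candidate supplementary state sets for the time interval,
    the one minimizing a loss that is positively correlated with the
    difference parameter (how far the candidate deviates from the states
    implied by the second state data at the gap boundaries)."""
    rng = np.random.default_rng(0)
    # baseline: straight-line interpolation between the boundary (start/end) states
    base = np.linspace(start_state, end_state, n_missing + 2)[1:-1]
    best, best_loss = None, np.inf
    for _ in range(n_candidates):
        candidate = base + 0.1 * rng.standard_normal(base.shape)   # perturbed candidate set
        seq = np.concatenate(([start_state], candidate, [end_state]))
        smoothness = np.sum(np.diff(seq, n=2) ** 2)                # penalize jerky fills
        difference = np.sum((candidate - base) ** 2)               # "difference parameter"
        loss = smooth_w * smoothness + diff_w * difference         # positively correlated with it
        if loss < best_loss:
            best, best_loss = candidate, loss
    return best

# Example: estimate the target's state in a 5-sample gap between two sub-time periods.
filled = fill_gap(start_state=3.0, end_state=4.5, n_missing=5)
```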
  • a state estimation method is provided, including: acquiring first state data of a target within a first time period, where the first state data is associated with first collected state data, the first collected state data includes data of the target collected by a first sensor during the first time period, and the data of the target is at least associated with a first state of the target during the first time period; determining initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; determining an initial time point according to the first state data and the initial state data, where the difference between the first state and the initial estimated state at the initial time point is less than a preset value; and performing state estimation of the target according to the first state data, with the initial time point as a first time starting point.
  • if the difference between the first state and the initial estimated state is large, at least one of the first state and the initial estimated state differs greatly from the true state of the target. The time point at which the difference between the first state and the initial estimated state is less than the preset value is taken as the initial time point; at that point the first state and the initial estimated state of the target can both be considered convergent, with only a small difference from the true state of the target.
  • the performing state estimation of the target according to the first state data with the initial time point as the first time starting point includes: performing state estimation in the first time sequence according to the first state data, to obtain first estimated state data corresponding to a second time period, where the first time starting point is the starting point of the second time period along the first time sequence; performing state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determining third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the second time starting point of the third time period is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; and/or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the method further includes: determining fourth state estimation data according to second state data corresponding to a fourth time period, where the second state data is the first state data or the third state estimation data, the fourth time period includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, the time interval does not belong to the fourth time period, and the fourth state estimation data is used to estimate the second state of the target in the time interval.
  • the determining the fourth state estimation data according to the second state data corresponding to the fourth time period includes: determining at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determining the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • a state estimation method is provided, including: acquiring second state data, where the second state data is associated with first collected state data, the first collected state data includes data of a target collected by a first sensor within a first time period, the data of the target is at least associated with a first state of the target within the first time period, the fourth time period corresponding to the second state data includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, and the time interval does not belong to the fourth time period; and determining, according to the second state data, fourth state estimation data used to estimate a second state of the target in the time interval.
  • the second state of the target in the time interval is determined, thereby making the state data obtained based on the first sensor more complete and improving the accuracy of the state data obtained based on the first sensor.
  • the determining the fourth state estimation data according to the second state data includes: determining at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determining the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • the second state data is third state estimation data, and the acquiring the second state data includes: performing state estimation in a first time sequence according to first state data, to obtain first estimated state data corresponding to a second time period, where the first state data is associated with the first collected state data; performing state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determining the third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first time starting point of the second time period is an initial time point
  • the method further includes: determining initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; and determining the initial time point according to the first state data and the initial state data, where at the initial time point the difference between the first state and the initial estimated state is smaller than a preset value.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the second time starting point is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the second state data is third state estimation data, and the acquiring the second state data includes: determining initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period, and the first state data is associated with the first collected state data; determining an initial time point according to the first state data and the initial state data, at which the difference between the first state and the initial estimated state is smaller than a preset value; and performing state estimation of the target according to the first state data with the initial time point as the first time starting point, to determine the third state estimation data, where the third state estimation data is used to estimate the first state.
  • a state estimation device is provided, including an acquisition module and a processing module; the acquisition module is configured to acquire first state data of a target within a first time period, where the first state data is associated with first collected state data, the first collected state data includes data of the target collected by a first sensor within the first time period, and the data of the target is at least associated with a first state of the target within the first time period; the processing module is configured to perform state estimation in a first time sequence according to the first state data, to obtain first estimated state data corresponding to a second time period; the processing module is further configured to perform state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and the processing module is further configured to determine, according to the first state estimation data and the second state estimation data, third state estimation data used to estimate the first state.
  • the first time starting point of the second time period is an initial time point
  • the processing module is further configured to: determine initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; and determine the initial time point according to the first state data and the initial state data, where at the initial time point the difference between the first state and the initial estimated state is less than a preset value.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the second time starting point is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; and/or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the processing module is further configured to determine fourth state estimation data according to second state data corresponding to a fourth time period, where the second state data is the first state data or the third state estimation data, the fourth time period includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, the time interval does not belong to the fourth time period, and the fourth state estimation data is used to estimate the second state of the target in the time interval.
  • the processing module is specifically configured to: determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • a state estimation device is provided, including an acquisition module and a processing module; the acquisition module is configured to acquire first state data of a target within a first time period, where the first state data is associated with first collected state data, the first collected state data includes data of the target collected by a first sensor during the first time period, and the data of the target is at least associated with a first state of the target within the first time period; the processing module is configured to determine initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; the processing module is further configured to determine an initial time point according to the first state data and the initial state data, where at the initial time point the difference between the first state and the initial estimated state is smaller than a preset value; and the processing module is further configured to perform state estimation of the target according to the first state data, with the initial time point as a first time starting point.
  • the processing module is specifically configured to: perform state estimation in the first time sequence according to the first state data, to obtain first estimated state data corresponding to a second time period, where the first time starting point is the starting point of the second time period along the first time sequence; perform state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determine third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the second time starting point is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; and/or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the processing module is further configured to determine fourth state estimation data according to second state data corresponding to a fourth time period, where the second state data is the first state data or the third state estimation data, the fourth time period includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, the time interval does not belong to the fourth time period, and the fourth state estimation data is used to estimate the second state of the target in the time interval.
  • the processing module is specifically configured to: determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • a state estimation device is provided, including an acquisition module and a processing module; the acquisition module is configured to acquire second state data, where the second state data is associated with first collected state data, the first collected state data includes data of the target collected by a first sensor within a first time period, the data of the target is at least associated with a first state of the target within the first time period, the fourth time period corresponding to the second state data includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, and the time interval does not belong to the fourth time period; and the processing module is configured to determine, according to the second state data, fourth state estimation data used to estimate a second state of the target in the time interval.
  • the processing module is specifically configured to: determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • the second state data is third state estimation data, and the acquiring module is specifically configured to: perform state estimation in a first time sequence according to first state data, to obtain first estimated state data corresponding to a second time period, where the first state data is associated with the first collected state data; perform state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determine the third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first time starting point of the second time period is an initial time point
  • the acquiring module is specifically configured to: determine the initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; and determine the initial time point according to the first state data and the initial state data, where at the initial time point the difference between the first state and the initial estimated state is less than a preset value.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the second time starting point of the third time period is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; and/or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the second state data is third state estimation data, and the acquiring module is specifically configured to: determine initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period, and the first state data is associated with the first collected state data; determine an initial time point according to the first state data and the initial state data, at which the difference between the first state and the initial estimated state is smaller than a preset value; and perform state estimation of the target according to the first state data with the initial time point as the first time starting point, to determine the third state estimation data, where the third state estimation data is used to estimate the first state.
  • a state estimation device is provided, including a memory and a processor, where the memory is used to store program instructions, and when the program instructions are executed in the processor, the processor is configured to: acquire first state data of a target within a first time period, where the first state data is associated with first collected state data, the first collected state data includes data of the target collected by a first sensor during the first time period, and the data of the target is at least associated with a first state of the target within the first time period; perform state estimation in a first time sequence according to the first state data, to obtain first estimated state data corresponding to a second time period; perform state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determine third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first time starting point of the second time period is an initial time point
  • the processor is further configured to: determine initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; and determine the initial time point according to the first state data and the initial state data, where at the initial time point the difference between the first state and the initial estimated state is less than a preset value.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the second time starting point is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the processor is further configured to determine fourth state estimation data according to second state data corresponding to a fourth time period, where the second state data is the first state data or the third state estimation data, the fourth time period includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, the time interval does not belong to the fourth time period, and the fourth state estimation data is used to estimate the second state of the target in the time interval.
  • the processor is specifically configured to: determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • a state estimation device is provided, including a memory and a processor, where the memory is used to store program instructions, and when the program instructions are executed in the processor, the processor is configured to: acquire first state data of a target within a first time period, where the first state data is associated with first collected state data, the first collected state data includes data of the target collected by the first sensor during the first time period, and the data of the target is at least associated with a first state of the target within the first time period; determine initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; determine an initial time point according to the first state data and the initial state data, where at the initial time point the difference between the first state and the initial estimated state is smaller than a preset value; and perform state estimation of the target according to the first state data, with the initial time point as a first time starting point.
  • the processor is specifically configured to: perform state estimation in the first time sequence according to the first state data, to obtain first estimated state data corresponding to a second time period, where the first time starting point is the starting point of the second time period along the first time sequence; perform state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determine third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the second time starting point is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; and/or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the processor is further configured to: determine fourth state estimation data according to second state data corresponding to a fourth time period, where the second state data is the first state data or the third state estimation data, the fourth time period includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, the time interval does not belong to the fourth time period, and the fourth state estimation data is used to estimate the second state of the target in the time interval.
  • the processor is specifically configured to: determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • a state estimation device is provided, including a memory and a processor, where the memory is used to store program instructions, and when the program instructions are executed in the processor, the processor is configured to: acquire second state data, where the second state data is associated with first collected state data, the first collected state data includes data of the target collected by the first sensor during the first time period, the data of the target is at least associated with the first state of the target within the first time period, the fourth time period corresponding to the second state data includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, and the time interval does not belong to the fourth time period; and determine, according to the second state data, fourth state estimation data used to estimate a second state of the target in the time interval.
  • the processor is specifically configured to: determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data; and determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • the processor is specifically configured to: perform state estimation in a first time sequence according to the first state data, to obtain first estimated state data corresponding to a second time period, where the first state data is associated with the first collected state data; perform state estimation in a second time sequence opposite to the first time sequence according to the first state data, to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determine the third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first time starting point of the second time period is an initial time point
  • the processor is specifically configured to: determine initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period; and determine the initial time point according to the first state data and the initial state data, where at the initial time point the difference between the first state and the initial estimated state is less than a preset value.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation at the second time starting point of the third time period according to the second time sequence.
  • the second time starting point of the third time period is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to a state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; and/or, the second state estimation data includes third data, the third data corresponds to a state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the second state data is third state estimation data, and the processor is specifically configured to: determine initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period, and the first state data is associated with the first collected state data; determine an initial time point according to the first state data and the initial state data, at which the difference between the first state and the initial estimated state is smaller than a preset value; and perform state estimation of the target according to the first state data with the initial time point as the first time starting point, to determine the third state estimation data, where the third state estimation data is used to estimate the first state.
  • a computer program storage medium is provided, having program instructions; when the program instructions are executed in a computer device, the computer device is configured to implement the method in any one of the first aspect to the third aspect or in any implementation manner of the first aspect to the third aspect.
  • a computer program product is provided, including program instructions; when the program instructions are executed, the method in any one of the first aspect to the third aspect or in any implementation manner of the first aspect to the third aspect is executed.
  • a chip is provided, including at least one processor; when program instructions are executed in the at least one processor, the method in any one of the first aspect to the third aspect or in any implementation manner of the first aspect to the third aspect is executed.
  • Fig. 1 is a functional block diagram of a vehicle to which the embodiment of the present application is applicable.
  • Fig. 2 is a schematic flowchart of a state estimation method.
  • Fig. 3 is a schematic flowchart of a state estimation method provided by an embodiment of the present application.
  • Fig. 4 is a schematic flowchart of another state estimation method provided by an embodiment of the present application.
  • Fig. 5 is a schematic flow chart of another state estimation method provided by an embodiment of the present application.
  • Fig. 6 is a schematic flow chart of another state estimation method provided by an embodiment of the present application.
  • Fig. 7 is a schematic flowchart of a method for determining first state data of a target provided by an embodiment of the present application.
  • Fig. 8 is a schematic flowchart of a forward state estimation method provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a reverse state estimation method provided by an embodiment of the present application.
  • Fig. 10 is a schematic diagram of first state data, first estimated state data, second estimated state data, and third estimated state data provided by an embodiment of the present application.
  • Fig. 11 is a schematic diagram of another first state data, first estimated state data, second estimated state data, and third estimated state data provided by the embodiment of the present application.
  • Fig. 12 is a schematic flowchart of a method for updating third estimated state data provided by an embodiment of the present application.
  • Fig. 13 is a schematic structural diagram of a state estimation device provided by an embodiment of the present application.
  • Fig. 14 is a schematic structural diagram of a state estimation device provided by another embodiment of the present application.
  • Fig. 1 is a schematic structural diagram of a vehicle to which the embodiment of the present application is applicable.
  • Various subsystems may be included in the vehicle 100 , such as a sensing system 120 and a computing platform 150 .
  • vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
  • each subsystem and element of the vehicle 100 may be interconnected by wire or wirelessly.
  • the sensing system 120 may include several sensors that sense information about the environment around the vehicle 100 .
  • the sensing system 120 may include one or more of a positioning system (which may be a global positioning system (GPS), a BeiDou system, or another positioning system), an inertial measurement unit (IMU), a lidar, a millimeter-wave radar, an ultrasonic radar, a laser rangefinder, a camera device, and the like.
  • Sensing system 120 may also include sensors that monitor internal systems of the vehicle (e.g., an interior air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding properties (position, shape, orientation, velocity, etc.). Such detection and recognition is an important guarantee for the safe driving of the vehicle.
  • a positioning system may be used to estimate the geographic location where vehicle 100 is located.
  • IMU can be used to sense a vehicle's position and orientation changes based on inertial acceleration.
  • IMU 122 may be a combination of an accelerometer and a gyroscope.
  • Radar can utilize radio signals to sense objects within the vehicle's surrounding environment.
  • radar may be used to sense the velocity, acceleration, heading, etc. of the objects.
  • the radar can be lidar, millimeter wave radar, ultrasonic radar, etc.
  • Laser range finders can use laser light to sense objects in the environment in which the vehicle is located.
  • a laser range finder may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
  • Cameras may be used to capture multiple images of the vehicle's surroundings.
  • the camera may be a still camera or a video camera.
  • the computing platform 150 may include processors 151 to 15n (n is a positive integer).
  • a processor is a circuit with signal processing capabilities.
  • in one implementation, the processor may be a circuit with instruction reading and execution capabilities, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU, which can be understood as a kind of microprocessor), or a digital signal processor (DSP); in another implementation, the processor may achieve a certain function through the logical relationship of a hardware circuit, where the logical relationship of the hardware circuit is fixed or reconfigurable, for example a hardware circuit implemented as an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as an FPGA.
  • the process of the processor loading the configuration file to realize the configuration of the hardware circuit can be understood as the process of the processor loading instructions to realize the functions of some or all of the above units.
  • it can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), a deep learning processing unit (DPU), etc.
  • computing platform 150 may also include a memory for storing instructions. Part or all of the processors 151 to 15n can call instructions in the memory and execute the instructions to realize corresponding functions.
  • the memory may also store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by the computing platform 150 during operation of the vehicle in autonomous, semi-autonomous, and/or manual modes.
  • a processor in computing platform 150 may be located remotely from the vehicle and be in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the necessary steps to perform a single maneuver.
  • one or more of these components described above may be installed separately from or associated with the vehicle 100 .
  • the components described above may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as limiting the embodiment of the present application.
  • the vehicle 100 may be an autonomous vehicle traveling on the road, which can recognize objects in its surroundings to determine adjustments to the current speed.
  • Objects may be other vehicles, traffic control devices, or other types of objects.
  • each identified object may be considered independently, and the object's respective characteristics, such as its current speed, acceleration, and distance to the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
  • the vehicle 100 or a computing device associated with the vehicle 100 may predict the behavior of the identified object based on the identified characteristics of the object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
  • the aforementioned vehicles may be cars, trucks, motorcycles, buses, boats, airplanes, helicopters, lawn mowers, recreational vehicles, playground vehicles, construction equipment, trams, golf carts, trains, trolleys, etc., which is not particularly limited in the embodiments of the present application.
  • Intelligent driving technology includes perception, decision-making, control and other stages.
  • the perception module is the "eye" of the intelligent vehicle.
  • the perception module receives the surrounding environment information and understands the environment in which the vehicle is located through machine learning technology.
  • the decision-making module uses the information output by the perception module to predict the behavior of traffic participants, so as to make behavior decisions for the own vehicle.
  • the control module calculates the lateral acceleration and longitudinal acceleration of the vehicle according to the output of the decision-making module, and controls the driving of the own vehicle.
  • the perception module may include various sensors in the sensing system 120 .
  • the output result of the sensing module may include the output of each sensor in the sensing system 120 .
  • the perception module may also include all or part of the processors in the computing platform 150 .
  • the output result of the perception module may include data obtained by processing the output of each sensor. There may be errors in the output results of the perception module, and the output results of the vehicle perception module can be optimized and adjusted to improve accuracy.
  • Fig. 2 is a schematic flowchart of a state estimation method.
  • the method 400 includes S410 to S430 and can be executed by the computing platform 150 .
  • first state information of the target vehicle collected by the sensing system 120 is acquired.
  • the first state information may be used to represent the position, speed, acceleration, etc. of the target vehicle.
  • the target vehicles may be other vehicles around the vehicle 100 .
  • the first status information may be understood as collected data, which may be obtained by fusing data collected by sensors such as radar 123 , laser rangefinder 124 , and camera 125 in the sensing system 120 .
  • the fusion of data collected by each sensor can also be understood as information fusion, data fusion, sensor information fusion or multi-sensor information fusion, which is used to associate, correlate and synthesize data and information obtained from single and multiple information sources .
  • online state estimation is performed according to the first state information to determine second state information.
  • the first state information can be corrected to determine the corrected second state information of the target vehicle.
  • the behavior of the target vehicle may be predicted according to the second state information, so as to make a behavior decision for the own vehicle, that is, the vehicle 100 .
  • the lateral acceleration and longitudinal acceleration of the vehicle can be calculated to control the driving of the own vehicle.
  • the accuracy of the corrected second state information of the target vehicle obtained through the method 400 is still low.
  • the second status information can be verified.
  • a high-precision sensor may be added to the vehicle during the test.
  • the state estimation of the target vehicle is carried out by using the data measured by the high-precision sensors added in the vehicle to obtain the verification state information. Whether the second state information is accurate can be determined according to the verification state information.
  • the second state information can be verified.
  • the way in which the state of the target vehicle is estimated from the data measured by the high-precision sensor affects the accuracy of the verification state information.
  • embodiments of the present application provide a method and device for estimating a state of a target.
  • Fig. 3 is a schematic flowchart of a state estimation method provided by an embodiment of the present application.
  • the method 500 includes steps S510 to S540.
  • the first state data is associated with the first collected state data; the first collected state data includes the data of the target collected by the first sensor within the first time period, and the data of the target is at least associated with the first state of the target within the first time period.
  • the first collected state data includes the data of the target collected by the first sensor during the first time period.
  • the first state data may be determined according to the first collection state data.
  • the first collected status data may be raw data collected by the first sensor.
  • the first collection state data collected by the first sensor may be received, and the first state data may be determined according to the first collection state data.
  • the first sensor may include a camera.
  • the data collected by the camera can be an image.
  • the device for executing the method 500 may receive the image collected by the camera within the first time period, process the image within the first time period, obtain the position of the target recorded in the image, and determine the speed, acceleration, jerk, etc. of the target.
  • the first collection state data may include images collected by the camera within the first time period, and the first state data may include state information of the target obtained by processing the images collected by the camera within the first time period.
  • the state information of the target includes one or more of the position, velocity, acceleration, jerk, etc. of the target within the first time period.
  • the first state data may include state information of the target determined according to images collected by the camera within the first time period.
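  • As an illustration of the kind of processing described above, the following sketch derives the velocity, acceleration, and jerk of a target from a time-stamped sequence of positions extracted from camera images; the finite-difference scheme and all names are assumptions chosen for illustration, not taken from the present application.

      # Illustrative sketch: derive velocity, acceleration, and jerk of a target
      # from a time-stamped position sequence obtained by processing camera images.
      def derive_motion_states(timestamps, positions):
          """timestamps: list of times in seconds; positions: list of (x, y) tuples in metres."""
          def diff(values, times):
              # first-order finite difference between consecutive samples
              return [
                  tuple((b[k] - a[k]) / (tb - ta) for k in range(len(a)))
                  for (a, ta), (b, tb) in zip(zip(values, times), zip(values[1:], times[1:]))
              ]
          velocities = diff(positions, timestamps)          # N-1 samples
          accelerations = diff(velocities, timestamps[1:])  # N-2 samples
          jerks = diff(accelerations, timestamps[2:])       # N-3 samples
          return velocities, accelerations, jerks

      # Example: a target moving along x with initial speed 1.0 m/s and constant acceleration 1.0 m/s^2
      ts = [0.0, 0.1, 0.2, 0.3, 0.4]
      xs = [(0.0, 0.0), (0.105, 0.0), (0.22, 0.0), (0.345, 0.0), (0.48, 0.0)]
      v, a, j = derive_motion_states(ts, xs)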
  • the first state data may be received at S510.
  • the first sensor may include one or more of radars such as laser radar, millimeter wave radar, and ultrasonic radar.
  • the data collected by radar can be point cloud data.
  • the radar can process the point cloud data collected in the first time period to obtain the status information of the target.
  • the apparatus for performing the method 500 may receive state information processed by the radar.
  • the first state data may include state information of the target determined according to the point cloud data within the first time period.
  • the device for executing the method 500 may also receive the point cloud data collected by the radar in the first time period, and process the point cloud data in the first time period to obtain the status information of the target.
  • the first sensor may include a camera.
  • the processor can process the images collected by the camera within the first time period to obtain state information of the target.
  • the state information of the object obtained by processing by the processor may be received.
  • the first sensor may comprise one or more sensors.
  • the state information of the target corresponding to the sensor may be used as the first state data to represent the first state of the target within the first time period.
  • the data output by some types of sensors among the multiple sensors may be raw data (such as image or point cloud data), and the data output by some sensors may be obtained after processing the raw data Status information of the target.
  • the first state data may include state information of the target corresponding to each sensor, or the first state data may include output data of each sensor, that is, the first state data may include first collection state data.
  • the state information of the target corresponding to each sensor may be fused to obtain the first state data of the target within the first time period.
  • the first state data may be used to represent the first state of the object within the first time period.
  • a weighted average operation may be performed on the state of the target in the state information corresponding to each sensor at each time point.
  • the first status data includes weighted average calculation results at the multiple time points.
  • the time points in the status information of the targets corresponding to the sensors may not be exactly the same.
  • an interpolation algorithm may be used to determine the state of the target at multiple unified time points in the first time period, so as to determine the adjusted state information of each sensor.
  • the multiple uniform time points may have the same or different time intervals.
  • a weighted average operation may be performed on the states in the adjusted state information of each sensor at each unified time point.
  • the first status data includes weighted average calculation results at the multiple time points.
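  • A minimal sketch of the alignment and weighted-average fusion described above is given below; the linear interpolation, the unified time grid, and the equal weights are assumptions chosen for illustration.

      # Illustrative sketch: align per-sensor state sequences onto unified time points
      # by linear interpolation, then fuse them with a weighted average.
      def interpolate(times, values, t):
          """Linear interpolation of a scalar sequence at time t (t must lie within the sensor's range)."""
          for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
              if t0 <= t <= t1:
                  w = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                  return v0 + w * (v1 - v0)
          raise ValueError("t outside the sensor's time range")

      def fuse_sensors(sensor_tracks, unified_times, weights):
          """sensor_tracks: list of (times, positions) per sensor; returns fused positions."""
          fused = []
          for t in unified_times:
              aligned = [interpolate(ts, xs, t) for ts, xs in sensor_tracks]
              fused.append(sum(w * x for w, x in zip(weights, aligned)) / sum(weights))
          return fused

      # Example: camera and radar position tracks sampled at slightly different times
      camera = ([0.00, 0.10, 0.20], [0.0, 1.0, 2.1])
      radar = ([0.02, 0.12, 0.22], [0.1, 1.1, 2.2])
      print(fuse_sensors([camera, radar], unified_times=[0.05, 0.15], weights=[0.5, 0.5]))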
  • state estimation is performed according to the second time sequence opposite to the first time sequence, and the second state estimation data corresponding to the third time period is obtained, and the second time period overlaps with the third time period.
  • the order of time can be understood as forward, and the order opposite to the order of time can be understood as reverse.
  • the first time sequence can be forward or reverse. If the first time sequence is forward, the second time sequence is reverse; conversely, if the first time sequence is reverse, the second time sequence is forward.
  • third state estimation data is determined according to the first state estimation data and the second state estimation data, and the third state estimation data is used for estimating the first state.
  • Every state estimation will produce an error, and the error is related to the time sequence of state estimation.
  • Through S510 to S540, two state estimations are performed in opposite time orders according to the first state data of the target, and the third state estimation data for estimating the state of the target is determined according to the results of the two state estimations. Part or all of the errors in the results of the two state estimations can cancel each other out, thereby improving the accuracy of the third state estimation data.
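  • As a hedged illustration of how two estimates with partly opposite error behaviour can be combined, the sketch below fuses a forward and a backward estimate at one time point by inverse-variance weighting; this is one common choice and is not necessarily the combination rule used in the embodiments.

      # Illustrative sketch: combine a forward estimate and a backward (reverse-order)
      # estimate at the same time point; inverse-variance weighting is assumed here.
      def fuse_estimates(x_fwd, var_fwd, x_bwd, var_bwd):
          """Return the fused state and variance for one scalar state component."""
          w_fwd = 1.0 / var_fwd
          w_bwd = 1.0 / var_bwd
          x_fused = (w_fwd * x_fwd + w_bwd * x_bwd) / (w_fwd + w_bwd)
          var_fused = 1.0 / (w_fwd + w_bwd)   # never larger than either input variance
          return x_fused, var_fused

      # Example: the forward pass is confident late in the window, the backward pass early on
      print(fuse_estimates(x_fwd=10.2, var_fwd=0.5, x_bwd=9.8, var_bwd=0.1))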
  • when the state estimation of the target is carried out, the accuracy of the initial state of the target affects the accuracy of the state estimation result.
  • the first time starting point may be used as the first time point to perform state estimation, and then perform state estimation on each time point located after the first time point along the first time sequence, to obtain the first estimated state data.
  • the time period corresponding to the first estimated state data is the second time period. If the first time sequence is positive, as shown in FIG. 10 , the first time starting point of the second time period may be the time point corresponding to the leftmost data of the first estimated state data.
  • initial state data may be determined according to the first state data.
  • the initial state data is used to represent the initial estimated state of the target within the first time period.
  • an initial time point may be determined according to the first state data and the initial state data; at the initial time point, the difference between the first state and the initial estimated state is smaller than a preset value.
  • the first time starting point of the second time period may be an initial time point.
  • the first state data is associated with a first state of the object within a first time period.
  • the device for executing the method 500 may determine the first state of the object within the first time period according to the first state data.
  • if the difference between the first state and the initial estimated state is large, at least one of the first state and the initial estimated state has a large difference from the true state of the target. At the time point at which the difference between the first state and the initial estimated state is less than the preset value, that is, the initial time point, the first state and the initial estimated state of the target can both be considered convergent, and their difference from the true state of the target is small. Therefore, performing state estimation with the initial time point as the first time starting point of the second time period can make the accuracy of the first estimated state data higher.
  • the first status data may include one or more items of data. Specifically, reference may be made to the description of FIG. 4 .
  • the second starting point in time may be used as the first time point for performing state estimation, and then perform state estimation on each time point located after the second starting point in time along the second time sequence, to obtain the second estimated state data.
  • the time period corresponding to the second estimated state data is the third time period. If the first time sequence is forward and the second time sequence is reverse, as shown in FIG. 10 , the second time starting point of the third time period may be the time point corresponding to the rightmost data of the second estimated state data.
  • S530 may be performed after S520, and the first data in the first estimated state data is used as the input data for performing state estimation on the second time starting point of the third time period according to the second time sequence.
  • the first estimated state data obtained by performing state estimation according to the first time sequence may include data of the target at multiple time points.
  • the data at each time point in the first estimated state data may be a state estimation result at that time point, and are used to represent the state of the target at that time point.
  • Data at a time point in the first estimated state data may be used as the first data.
  • the data at the next time point is determined based on the data at the previous time point.
  • the first time point along the second time sequence can be understood as the time point when state estimation starts according to the second time sequence. There is no other time point before this time point according to the second time sequence, and the state estimation result at this time point can be understood as initial data for state estimation according to the second time sequence.
  • the first estimated state data tends to converge, that is, the estimation result obtained after performing state estimation for a period of time is more accurate.
  • the state indicated by the first data selected in the first estimated state data is therefore more accurate. Compared with using the first state, using the first data in the first estimated state data as the input data for state estimation on the second time starting point of the third time period according to the second time sequence makes the accuracy of the second estimated state data higher.
  • the length of the time period in which the second time period corresponding to the first estimated state data coincides with the third time period corresponding to the second state estimation data affects the accuracy of the third state estimation data.
  • the second time starting point of the third time period may be set as the last time point along the first time sequence of the first time period corresponding to the first state data. Prolonging the overlapping time period of the second time period and the third time period as much as possible can make the accuracy of the third state estimation data higher.
  • the method 500 can be performed based on data collected by the sensor system 120 in the vehicle 100 .
  • the sensing system 120 may include a first sensor for acquiring first collected state data.
  • the computing platform 150 is used to estimate the state of the target according to the first collected state data, and plan and control the driving of the vehicle according to the state estimation result of the target.
  • the computing platform 150 or other processing systems can be used to execute the method 500 to evaluate the accuracy of the state estimation results of the objects determined during the vehicle driving process.
  • Method 500 may also be performed based on data collected by a first sensor outside of vehicle 100 .
  • the first sensor may be arranged on the vehicle 100 , for example, on the top of the vehicle 100 . Compared to the sensors in the sensing system 120, the first sensor may have a higher accuracy.
  • the apparatus for executing the method 500 may be the computing platform 150, or other processors, for example, a processor in a server or other devices.
  • the sensing system 120 is used to collect data; the computing platform 150 is used to estimate the state of the target based on the data collected by the sensing system 120, and plan and control the driving of the vehicle according to the state estimation result of the target.
  • the first sensor may also collect data, that is, the first sensor acquires the first collection status data during the running of the vehicle.
  • the third state estimation data may exceed the first time period.
  • the first state estimation data includes second data, and/or the second state estimation data includes third data.
  • the second data corresponds to the state estimation result of the target within the first preset time period.
  • the first preset duration is a duration following the first time period along the first time sequence.
  • the third data corresponds to a state estimation result of the target within a second preset time period, and the second preset time period is a period of time following the first time period along the second time sequence.
  • after state estimation is performed over the first time period according to the first time sequence, the state estimation can be further performed for the first preset duration, so as to obtain the second data corresponding to the first preset duration.
  • the first estimated state data includes second data.
  • similarly, the state estimation can be further performed for the second preset duration, so as to obtain the third data corresponding to the second preset duration.
  • the second estimated state data includes third data.
  • the third estimated state data is determined based on the first estimated state data and the second estimated state data.
  • the third state estimation data may include the second data and/or the third data.
  • the third state estimation data may exceed the first time period.
  • the first sensor may not be able to sense the target within a period of time in the first time period. That is to say, the first time period corresponding to the first state data and the time period corresponding to the third state estimation data may not be continuous.
  • the device for executing the method 500 may determine fourth state estimation data according to the second state data corresponding to the fourth time period.
  • the second state data is the first state data or the third state estimation data.
  • the fourth time period includes the first sub-time period and the second sub-time period. There is a time interval between the first sub-time period and the second sub-time period, and the time interval does not belong to the fourth time period.
  • the fourth state estimation data is used to estimate the second state of the object at the time interval.
  • according to the second state data of the target in the fourth time period, the second state of the target in the time interval is determined, so that the state of the target in the continuous time region formed by the fourth time period and the time interval can be determined, and the obtained state estimation result of the target is more complete.
  • time interval does not belong to the fourth time period, and it may be that all or part of the time interval does not belong to the fourth time period.
  • the second state data may be the first state data.
  • the first sub-time period may be located before the second sub-time period.
  • the first estimated state data may include the state estimation result of the target within the first sub-time period, and may also include the state estimation result of the target within the first preset duration after the first sub-time period along the first time sequence. The second estimated state data obtained through S530 includes the state estimation result of the target within the second sub-time period, and may also include the state estimation result of the target within the second preset duration after the second sub-time period along the second time sequence. Therefore, when the duration of the time interval is less than the sum of the first preset duration and the second preset duration, the third state estimation data determined in S540 can be used to estimate the second state of the target in the time interval.
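  • The coverage condition above can be checked directly, as in the small sketch below; the names and durations used are illustrative assumptions.

      # Illustrative check: the forward extension (first preset duration) and the backward
      # extension (second preset duration) cover the sensing gap only if the gap is no
      # longer than their sum.
      def gap_is_covered(gap_duration, first_preset_duration, second_preset_duration):
          return gap_duration <= first_preset_duration + second_preset_duration

      # Example: a 1.5 s gap, 1.0 s forward extension, 0.8 s backward extension
      print(gap_is_covered(1.5, 1.0, 0.8))   # -> True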
  • the apparatus performing method 500 may determine at least one set of supplemental status data.
  • the start data and end data included in each supplementary status data set in the at least one supplementary status data set are determined according to the second status data, and each supplementary status data set corresponds to a difference parameter.
  • the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to the supplementary state data set and the state corresponding to the second state data.
  • the device executing method 500 may also determine fourth state estimation data according to the loss function of each supplementary state data set in the at least one supplementary state data set, wherein the loss function of each supplementary state data set includes the supplementary state The difference parameter of the data set, and is positively correlated with the difference parameter of each supplementary state data set.
  • the second state data is the first state data or the third state estimation data.
  • the loss function of each supplementary state data set includes a difference parameter of the supplementary state data set, that is, the calculation result of the loss function is obtained according to the difference parameter.
  • the loss function is positively correlated with the difference parameter, and it can also be understood that the calculation result of the loss function is positively correlated with the difference parameter.
  • Each supplementary state data set can be understood as a trajectory, and at least one supplementary state data set is a trajectory bundle.
  • the first time period may include a third sub-time period and a fourth sub-time period, and along the first time sequence, the third sub-time period may be located before the fourth sub-time period.
  • a fifth sub-time period exists between the third sub-time period and the fourth sub-time period.
  • the first time period may not include the fifth sub-time period.
  • if the first estimated state data includes the state estimation result of the target within the first preset duration after the third sub-time period along the first time sequence, the second estimated state data includes the state estimation result of the target within the second preset duration after the fourth sub-time period along the second time sequence, and the second state data is the third state estimation data, then when the length of the fifth sub-time period exceeds the sum of the first preset duration and the second preset duration, there is a time interval in the third state estimation data. In this case, a target trajectory can be determined in the trajectory bundle such that the loss function corresponding to the target trajectory is the smallest.
  • the fourth state estimation data is used to represent the target trajectory.
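  • The sketch below illustrates the trajectory-bundle idea under simplifying assumptions: candidate trajectories are constant-acceleration segments starting from the start data, the loss is taken to be the difference parameter itself, and all names are illustrative; neither the candidate generator nor the loss form is prescribed by the present application.

      # Illustrative sketch: fill a time interval with the candidate trajectory (from a
      # "trajectory bundle") whose loss is smallest.
      def make_candidate(x0, v0, accel, duration, steps):
          """Constant-acceleration trajectory of (position, velocity) samples."""
          dt = duration / steps
          traj = []
          for i in range(steps + 1):
              t = i * dt
              traj.append((x0 + v0 * t + 0.5 * accel * t * t, v0 + accel * t))
          return traj

      def difference_parameter(traj, x_end, v_end):
          """Difference between the candidate's final state and the known end state."""
          x_last, v_last = traj[-1]
          return abs(x_last - x_end) + abs(v_last - v_end)

      def pick_target_trajectory(x0, v0, x_end, v_end, duration, steps=10):
          # the trajectory bundle: one candidate per assumed constant acceleration
          candidates = [make_candidate(x0, v0, a, duration, steps)
                        for a in (-2.0, -1.0, 0.0, 1.0, 2.0)]
          # the loss is positively correlated with the difference parameter (here equal to it)
          losses = [difference_parameter(c, x_end, v_end) for c in candidates]
          return candidates[losses.index(min(losses))]

      # Example: start at x = 0 m, v = 10 m/s; end 1 s later at x = 10.5 m, v = 11 m/s
      best = pick_target_trajectory(0.0, 10.0, 10.5, 11.0, duration=1.0)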
  • Fig. 4 is a schematic flowchart of a state estimation method provided by an embodiment of the present application.
  • the state estimation method 700 includes S710 to S740.
  • the first state data is associated with the first collected state data; the first collected state data includes the data of the target collected by the first sensor within the first time period, and the data of the target is at least associated with the first state of the target within the first time period.
  • the first collected state data includes the data of the target collected by the first sensor during the first time period.
  • the first state data may be determined according to the first collection state data.
  • the first collected state data may be raw data collected by the first sensor.
  • the first collection state data collected by the first sensor may be received, and the first state data may be determined according to the first collection state data.
  • first status data may be received at S710. Specifically, reference may be made to the description of FIG. 3 .
  • the first sensor may comprise one or more sensors.
  • the state information of the target corresponding to the sensor may be used as the first state data to represent the first state of the target within the first time period.
  • the data output by some types of sensors among the multiple sensors may be raw data (such as image or point cloud data), and the data output by some sensors may be obtained after processing the raw data Status information of the target.
  • the first state data may include state information of the target corresponding to each sensor, or the first state data may include output data of each sensor.
  • the state information of the target corresponding to each sensor may be fused to obtain the first state data of the target within the first time period.
  • the first state data may be used to represent the first state of the object within the first time period.
  • the device for executing the method 700 may determine the first state of the target within the first time period according to the acquired first state data.
  • the state estimation of the target can be performed to determine the initial state data.
  • an initial time point is determined according to the first state data and the initial state data, at which a difference between the first state and the initial estimated state is smaller than a preset value.
  • if the difference between the first state and the initial estimated state is large, at least one of the first state and the initial estimated state has a large difference from the true state of the target. At the time point at which the difference between the first state and the initial estimated state is less than the preset value, that is, the initial time point, the first state and the initial estimated state of the target can both be considered convergent, and their difference from the true state of the target is small.
  • the time point at which the difference between the first state and the initial estimated state is smaller than a preset value is used as the initial time point for state estimation of the target, which can improve the accuracy of state estimation results.
  • the first state data may include one or more items of data, wherein one item of data may include the first collected state data, that is, the raw data collected by the first sensor, and another item of data may include the state information of the target obtained by processing the raw data collected by the first sensor.
  • initial state estimation may be performed according to the raw data of the first sensor to determine the initial state data.
  • the state information of the target may be determined according to the raw data of the first sensor, and at S730, the initial state data may be compared with the state information of the target to determine an initial time point.
  • an initial state estimation may be performed according to the state information of the target to determine the initial state data.
  • the state information of the target may be compared with the initial state data to determine an initial time point.
  • when the first state data includes both the raw data of the first sensor and the state information of the target, an initial state estimation may be performed according to the raw data of the first sensor or the state information of the target to determine the initial state data.
  • the state information or original data of the target in the first state data may be compared with the initial state data to determine an initial time point.
  • the target state information may be re-determined according to the raw data of the first sensor, and at S730, the re-determined target state information is compared with the initial state data to determine the initial time point.
  • the state estimation can be performed according to the first time sequence according to the first state data, and the first estimated state data corresponding to the second time period can be obtained, and the starting point of the first time is the second time The starting point of the segment along the first time sequence.
  • the state estimation may also be performed according to the first state data in a second time sequence opposite to the first time sequence, to obtain second state estimation data corresponding to a third time period, and the second time period is the same as the The above-mentioned third time period overlaps.
  • third state estimation data may be determined according to the first state estimation data and the second state estimation data. Third state estimation data is used to estimate the first state.
  • Every state estimation will produce an error, and the error is related to the time sequence of state estimation.
  • part or all of the errors in the results of the two state estimations can cancel each other out, so that the accuracy of the third state estimation data can be improved.
  • the first data may be used as input data for performing state estimation on the second time starting point of the third time period according to the second time sequence.
  • the first estimated state data includes first data.
  • the first data in the first estimated state data is used as the input data for state estimation of the second time starting point of the third time period according to the second time sequence, that is, the first data is used as the state estimation according to the second time sequence Estimated starting state.
  • the state indicated by the first estimated state data is more accurate. Therefore, using the first data as the initial state for state estimation according to the second time sequence can make the accuracy of the second estimated state data higher.
  • the second time starting point of the third time period may be set as the last time point along the first time sequence of the first time period corresponding to the first state data. Prolonging the overlapping time period of the second time period and the third time period as much as possible can make the accuracy of the third state estimation data higher.
  • the third state estimation data may exceed the first time period, so as to make a complete evaluation of the target estimation result determined by the vehicle 100 based on the data collected by the sensor system 120 in the vehicle 100 .
  • the first state estimation data includes second data, and/or the second state estimation data includes third data.
  • the second data corresponds to the state estimation result of the target within the first preset time period.
  • the first preset duration is a duration following the first time period along the first time sequence.
  • the third data corresponds to a state estimation result of the target within a second preset time period, and the second preset time period is a period of time following the first time period along the second time sequence.
  • the first sensor may not be able to sense the target within a period of time in the first time period. That is to say, the first time period corresponding to the first state data and the time period corresponding to the third state estimation data may not be continuous.
  • the device for performing the method 500 may determine fourth state estimation data according to the second state data corresponding to the fourth time period.
  • the second state data is the first state data or the third state estimation data.
  • the fourth time period includes the first sub-time period and the second sub-time period. There is a time interval between the first sub-time period and the second sub-time period, and the time interval does not belong to the fourth time period.
  • the fourth state estimation data is used to estimate the second state of the object at the time interval.
  • according to the second state data of the target in the fourth time period, the second state of the target in the time interval is determined, so that the state of the target in the continuous time region formed by the fourth time period and the time interval can be determined, and the obtained state estimation result of the target is more complete.
  • a complete evaluation of the target estimation result determined by the vehicle 100 based on the data collected by the sensor system 120 in the vehicle 100 can be performed.
  • an apparatus performing method 500 may determine at least one set of supplemental status data.
  • the start data and end data included in each supplementary status data set in the at least one supplementary status data set are determined according to the second status data, and each supplementary status data set corresponds to a difference parameter.
  • the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to the supplementary state data set and the state corresponding to the second state data.
  • the device performing method 500 may also determine fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, wherein the loss function of each supplementary state data set includes the supplementary state data The difference parameter of the set and is positively correlated with the difference parameter of each supplementary state data set.
  • the difference between the state of the target in the time interval and the state of the target in the fourth time period is considered, so the state of the target in the time interval represented by the determined fourth state estimation data is more consistent with the state of the target in the fourth time period, making the fourth state estimation data more reasonable.
  • Fig. 5 is a schematic flowchart of a state estimation method provided by an embodiment of the present application.
  • the sensing system 120 is used to collect data; the computing platform 150 is used to estimate the state of the target based on the data collected by the sensing system 120, and plan and control the driving of the vehicle according to the state estimation result of the target.
  • a first sensor may be provided on top of the vehicle 100 .
  • the first sensor may have a higher accuracy than the sensors in the sensing system 120 .
  • the first sensor can also collect information.
  • the state estimation of the target is performed based on the information collected by the first sensor outside the vehicle 100 , and the obtained state data can be used to evaluate the accuracy of the state estimation result determined by the computing platform 150 based on the data collected by the sensing system 120 .
  • the time when the first sensor can sense the target is not completely consistent with the time when the sensing system 120 can sense the target. Therefore, the state data obtained based on the information collected by the first sensor cannot fully evaluate the state estimation result determined by the computing platform 150 .
  • the state estimation method 800 includes S810 to S820.
  • the second state data is associated with the first collected state data; the first collected state data includes the data of the target collected by the first sensor within the first time period, and the data of the target is at least associated with the first state of the target within the first time period.
  • the fourth time period corresponding to the second state data includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, and the time interval does not belong to the fourth time period.
  • the time interval does not belong to the fourth time period. It may be that all or part of the time interval does not belong to the fourth time period.
  • the second state of the target in the time interval is determined, so that the state data obtained based on the first sensor is more complete.
  • When the target just appears within the collection range of the sensing system 120, there may be a large deviation between the state estimation result determined by the computing platform 150 and the actual state of the target. However, due to occlusion by surrounding objects, the time during which the first sensor can perceive the target is not completely consistent with the time during which the sensing system 120 can perceive the target, and it may not be possible to evaluate the state estimation result determined by the computing platform 150 at the moment the target appears.
  • At S820 at least one supplementary state data set may be determined, the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, and each The supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to represent the difference between the state corresponding to each supplementary state data set and the state corresponding to the second state data.
  • the fourth state estimation data may be determined according to a loss function of each of the supplementary state data sets in the at least one supplementary state data set, wherein the loss function of each of the supplementary state data sets includes the The difference parameter of each supplementary state data set, and is positively correlated with the difference parameter of each supplementary state data set.
  • the second state data is the first state data or the third state estimation data.
  • the loss function of each supplementary state data set includes a difference parameter of the supplementary state data set, that is, the calculation result of the loss function is obtained according to the difference parameter.
  • the loss function is positively correlated with the difference parameter, and it can also be understood that the calculation result of the loss function is positively correlated with the difference parameter.
  • Each supplementary state data set can be understood as a trajectory, and at least one supplementary state data set is a trajectory bundle.
  • the first time period may include a third sub-time period and a fourth sub-time period, and along the first time sequence, the third sub-time period may be located before the fourth sub-time period.
  • a fifth sub-time period exists between the third sub-time period and the fourth sub-time period.
  • the first time period may not include the fifth sub-time period.
  • if the first estimated state data includes the state estimation result of the target within the first preset duration after the third sub-time period along the first time sequence, the second estimated state data includes the state estimation result of the target within the second preset duration after the fourth sub-time period along the second time sequence, and the second state data is the third state estimation data, then when the length of the fifth sub-time period exceeds the sum of the first preset duration and the second preset duration, there is a time interval in the third state estimation data.
  • a trajectory bundle may be determined and a target trajectory may be determined in the trajectory bundle, and the loss function corresponding to the target trajectory is the smallest.
  • the fourth state estimation data is used to represent the target trajectory.
  • the second status data may be the first status data.
  • the second state data may also be third state estimation data.
  • state estimation may be performed in a first time sequence according to the first state data to obtain first estimated state data corresponding to a second time period, the first state data and the first collection state Data is associated.
  • state estimation may also be performed according to the first state data in a second time sequence opposite to the first time sequence, to obtain second state estimation data corresponding to a third time period, the first The second time period overlaps with the third time period.
  • the third state estimation data may be determined according to the first state estimation data and the second state estimation data.
  • the third state estimation data is used to estimate the first state.
  • Every state estimation will produce an error, and the error is related to the time sequence of state estimation.
  • part or all of the errors in the results of the two state estimations can cancel each other out, so that the accuracy of the third state estimation data can be improved.
  • when the state estimation of the target is carried out, the accuracy of the initial state of the target affects the accuracy of the state estimation result.
  • initial state data may be determined according to the first state data, and the initial state data is used to represent the initial estimated state of the target within the first time period .
  • the initial time point may be determined according to the first state data and the initial state data, and the difference between the first state and the initial estimated state at the initial time point is smaller than a preset value.
  • the first data in the first estimated state data may be used as input data for performing state estimation on the second time starting point of the third time period according to the second time sequence.
  • the first data in the first estimated state data is used as the input data for performing state estimation on the second time starting point of the third time period according to the second time sequence, so that the accuracy of the second estimated state data is higher.
  • the second time starting point of the third time period may be set as the last time point along the first time sequence of the first time period corresponding to the first state data. Prolonging the overlapping time period of the second time period and the third time period as much as possible can make the accuracy of the third state estimation data higher.
  • the third state estimation data may exceed the first time period, so as to make a complete evaluation of the target estimation result determined by the vehicle 100 based on the data collected by the sensor system 120 in the vehicle 100 .
  • the first state estimation data includes second data, and/or the second state estimation data includes third data.
  • the second data corresponds to the state estimation result of the target within the first preset time period.
  • the first preset duration is a duration following the first time period along the first time sequence.
  • the third data corresponds to a state estimation result of the target within a second preset time period, and the second preset time period is a period of time following the first time period along the second time sequence.
  • the second state data may be the third state estimation data.
  • initial state data may be determined according to the first state data, the initial state data is used to represent the initial estimated state of the target within the first time period, and the first state data is associated with the first collected state data.
  • an initial time point may be determined according to the first state data and the initial state data, and at the initial time point, the difference between the first state and the initially estimated state is smaller than a preset value.
  • the state estimation of the target may be performed with the initial time point as the first time starting point, so as to determine the third state estimation data.
  • Third state estimation data is used to estimate the first state.
  • Taking the time point at which the difference between the first state and the initial estimated state is smaller than a preset value as the initial time point for state estimation of the target can improve the accuracy of the state estimation result, that is, the third state estimation data.
  • the accuracy of the fourth state estimation data is higher.
  • Fig. 6 is a schematic flowchart of a state estimation method provided by an embodiment of the present application.
  • the state estimation method 600 includes S610 to S640.
  • the method 600 can be executed by an industrial computer.
  • An industrial computer, also known as an industrial control computer, can be used to detect and control production processes, electromechanical equipment, and process equipment.
  • An industrial computer can be a state estimation device.
  • the first state data of the target may include the collection state of the target at each collection time point.
  • the collection state of the target at a certain collection time point is used to represent the movement state of the target determined according to the data collected by the sensor at the collection time point. That is to say, the first state data of the target may be determined according to the data collected by the first sensor.
  • the first sensor may periodically collect data, and the time point at which the sensor collects data is the collection time point.
  • the first sensor may be disposed on the vehicle 100 , for example, may be disposed on the top of the vehicle 100 .
  • the motion state of the target at a certain point in time may include the position, velocity, acceleration, etc. of the target at the point in time.
  • the industrial computer can perform S611 to S618, as shown in FIG. 7 .
  • the vehicle database may include multiple vehicle types and features, dimensions, and confidence levels corresponding to each vehicle type.
  • the size corresponding to each vehicle type may include one or more of the length, width, and height of the vehicle.
  • the size corresponding to at least one vehicle type is determined according to the actual measurement of the type of vehicle or the acquisition of parameters of the type of vehicle, and the confidence level of the size corresponding to the vehicle type is 1.
  • the feature of the target may be obtained by the industrial computer performing feature extraction on the data collected by the first sensor on the vehicle. For example, feature extraction may be performed on an image of the target captured by a camera to obtain features of the target.
  • the vehicle type that maximizes the matching degree C is determined as the vehicle type corresponding to the target.
  • the vehicle database is updated.
  • the vehicle type corresponding to the target and the features, dimensions and confidence levels corresponding to the vehicle type may be added to the vehicle database to update the vehicle database.
  • the characteristics of the vehicle type corresponding to the target can be determined according to the acquired vehicle characteristics.
  • the size of the vehicle type to which the target belongs can be determined according to the perception result of the vehicle.
  • the vehicle features of the target may be acquired multiple times, and the features of the vehicle type corresponding to the target are determined according to the vehicle features of the target acquired multiple times.
  • the size of the vehicle may be measured multiple times, and the size of the vehicle type to which the target belongs is determined according to the multiple measurement results.
  • the confidence level corresponding to the vehicle type may be determined according to the degree of difference between the vehicle characteristics of the target acquired multiple times and/or the degree of difference between the multiple measurement results of the vehicle size.
  • the confidence level corresponding to the vehicle type can be negatively correlated with the degree of difference between the vehicle characteristics of the target acquired multiple times, and the confidence level corresponding to the vehicle type can be negatively correlated with the degree of difference between the multiple measurement results of the vehicle size .
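  • The sketch below illustrates the matching and updating flow under stated assumptions: the matching degree C is approximated by a simple feature-distance score, and the confidence is made negatively correlated with the spread of repeated size measurements; the database entries, score, and update rule are illustrative only and not prescribed by the present application.

      # Illustrative sketch: match a detected target against a vehicle database and
      # update the database entry for the selected vehicle type.
      vehicle_db = {
          "sedan": {"feature": [0.9, 0.1, 0.3], "size": (4.8, 1.8, 1.5), "confidence": 1.0},
          "truck": {"feature": [0.2, 0.8, 0.7], "size": (9.0, 2.5, 3.2), "confidence": 0.8},
      }

      def matching_degree(target_feature, db_entry):
          # assumed score: negative L1 distance between feature vectors (larger is better)
          return -sum(abs(a - b) for a, b in zip(target_feature, db_entry["feature"]))

      def classify(target_feature):
          # the vehicle type that maximizes the matching degree is chosen
          return max(vehicle_db, key=lambda t: matching_degree(target_feature, vehicle_db[t]))

      def update_db(vehicle_type, size_measurements):
          """Average repeated size measurements; confidence shrinks as their spread grows."""
          n = len(size_measurements)
          mean = tuple(sum(m[k] for m in size_measurements) / n for k in range(3))
          spread = max(max(m[k] for m in size_measurements) - min(m[k] for m in size_measurements)
                       for k in range(3))
          vehicle_db[vehicle_type]["size"] = mean
          vehicle_db[vehicle_type]["confidence"] = 1.0 / (1.0 + spread)   # negatively correlated

      vtype = classify([0.85, 0.15, 0.35])            # -> "sedan"
      update_db(vtype, [(4.7, 1.8, 1.5), (4.9, 1.8, 1.5)])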
  • the centroid of the target is determined according to the vehicle type corresponding to the target and the vehicle database.
  • the contour of the target is filled.
  • the outline of the target can be understood as the cuboid area where the target is located.
  • the contour of the target is completed based on the vertex of the corner closest to the ego vehicle.
  • the contour of the target is completed based on the midpoint of the long side closest to the ego vehicle.
  • a vehicle in the vehicle database can be represented as a cuboid, and the size of the vehicle can represent the length, width, and height of the cuboid. Therefore, the centroid of the vehicle can be understood as the centroid of the cuboid.
  • the first state data of the target is determined according to the data collected by the sensor, and the first state data is used to represent the motion state of the target's centroid at each collection time point, such as position, velocity, and acceleration.
  • the motion state of the target is represented by the motion state of the closest point to the vehicle, which may lead to inaccurate determination of the motion state of the target.
  • the motion state of the target is represented by a fixed point in the target, such as the motion state of the centroid of the target, so that the first state data can reflect the motion state of the target more accurately.
  • a certain fixed point in the target may also be the left-front vertex, right-front vertex, left midpoint, right midpoint, etc. of the target. That is to say, in S616, a certain fixed point in the target can also be determined according to the vehicle type corresponding to the target and the vehicle database, and the first state data of the target determined in S617 can be used to represent the motion state of that fixed point in the target.
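  • The following sketch illustrates completing the contour from the nearest corner and reading off the centroid, assuming a bird's-eye view, a known heading, and a particular corner convention; all of these are assumptions for illustration.

      # Illustrative sketch: complete the target's contour from its nearest corner and the
      # length/width of its vehicle type, then take the centroid as the reference point.
      import math

      def centroid_from_nearest_corner(corner_xy, heading_rad, length, width):
          """corner_xy: observed vertex of the corner closest to the ego vehicle.
          The centroid lies half a length along the heading and half a width across it,
          measured from that corner (the lateral sign convention depends on which corner
          is nearest and is an assumption here)."""
          cx = corner_xy[0] + 0.5 * length * math.cos(heading_rad) - 0.5 * width * math.sin(heading_rad)
          cy = corner_xy[1] + 0.5 * length * math.sin(heading_rad) + 0.5 * width * math.cos(heading_rad)
          return cx, cy

      # Example: nearest corner observed at (10, 3), heading along +x, sedan-sized target (4.8 m x 1.8 m)
      print(centroid_from_nearest_corner((10.0, 3.0), 0.0, 4.8, 1.8))   # -> (12.4, 3.9)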
  • Targets can be vehicles or other traffic participants such as pedestrians.
  • Data filtering is a data processing technique that removes noise and restores real data. Since the observation data includes the influence of noise and interference in the system, the accuracy of the data used for initial state estimation can be improved through data filtering.
  • a filtering algorithm may be used to process the first state data.
  • the first state data may be processed by using a linear Kalman filter algorithm or the like.
  • the Kalman filter algorithm relies on a dynamic model of the target and can be understood as an autoregressive data processing algorithm.
  • the Kalman filter algorithm uses the state transition equation to dynamically describe the discrete-time system and describe the target motion behavior.
  • the result of performing data filtering on the first state data may be used as the initial state data.
  • curve fitting is performed on the relationship between the position of the target and time in the result of data filtering on the first state data.
  • the velocity and acceleration of the target at each acquisition time point are determined according to the fitted curve.
  • the initial state data may include the initial estimated state of the target at each acquisition time point determined according to the fitted curve.
  • the initial estimated state of the target includes the position, velocity, acceleration, etc. of the target.
  • in the result of data filtering, the relationship of the position of the target over time is relatively accurate. Therefore, curve fitting is performed on the relationship between the position of the target and time in the data filtering result of the first state data, and the initial state data is determined according to the fitted curve.
  • the initial state data includes the position, velocity, acceleration, etc. of the target over time, determined from the fitted curve.
  • the initial state data may include an initial estimated state of the target at each of the various acquisition time points.
  • the initial estimated state of the target includes position, velocity, acceleration, etc.
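  • A minimal sketch of the curve-fitting step is given below, assuming a quadratic polynomial fit of the filtered positions against time; the polynomial order and the use of NumPy are assumptions for illustration.

      # Illustrative sketch: fit filtered target positions against time with a polynomial,
      # then read velocity and acceleration at each acquisition time point from the fitted curve.
      import numpy as np

      def initial_states_from_curve(times, filtered_positions):
          coeffs = np.polyfit(times, filtered_positions, 2)   # quadratic fit of position(t)
          pos_poly = np.poly1d(coeffs)
          vel_poly = pos_poly.deriv(1)                        # velocity = dx/dt
          acc_poly = pos_poly.deriv(2)                        # acceleration = d2x/dt2
          return [(pos_poly(t), vel_poly(t), acc_poly(t)) for t in times]

      # Example: noisy positions of a target with roughly constant acceleration
      ts = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
      xs = np.array([0.02, 1.04, 2.11, 3.22, 4.41])
      print(initial_states_from_curve(ts, xs))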
  • an initial time point is determined, where the initial time point is a time point that minimizes the difference between the first state data and the initial state data.
  • a weighted summation may be performed on the difference in position, the difference in speed, and the difference in acceleration between the first state data and the initial state data at each collection time point.
  • the weights corresponding to position differences, the weights corresponding to speed differences, and the weights corresponding to acceleration differences may be the same or different. Weights can be preset.
  • the difference between the first state data and the initial state data at a certain acquisition time point can be expressed as: diff = λ1·|X_ab − X_f| + λ2·|v_ab − v_f| + λ3·|a_ab − a_f|, where X_ab, v_ab, a_ab are respectively the position, velocity, and acceleration in the first state data, X_f, v_f, a_f are respectively the position, velocity, and acceleration in the initial state data, and λ1, λ2, λ3 are the weights corresponding to position, velocity, and acceleration respectively.
  • the first state data or the initial state data at the initial time point may be used as the initial state of the target at the initial time point.
  • the first state data can also be used to correct the position, velocity, and acceleration of the initial time point in the initial state data, so as to obtain the initial state of the target at the initial time point.
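To make the selection of the initial time point concrete, the sketch below computes the weighted difference between the first state data and the initial state data at every acquisition time point and returns the time point with the smallest difference. The use of absolute differences, the default weights, and the data layout are assumptions for illustration only.

```python
import numpy as np

def find_initial_time_point(first_state, initial_state, w_pos=1.0, w_vel=1.0, w_acc=1.0):
    """Pick the acquisition time point where the weighted difference between
    the first state data and the initial state data is smallest.

    Each argument is a dict with 1-D arrays "pos", "vel", "acc" of equal length
    (here simplified to scalar position/velocity/acceleration per time point).
    """
    diff = (w_pos * np.abs(first_state["pos"] - initial_state["pos"])
            + w_vel * np.abs(first_state["vel"] - initial_state["vel"])
            + w_acc * np.abs(first_state["acc"] - initial_state["acc"]))
    return int(np.argmin(diff)), diff

if __name__ == "__main__":
    first = {"pos": np.array([0.0, 1.2, 2.1, 3.0, 4.2]),
             "vel": np.array([1.0, 1.1, 1.0, 1.0, 1.2]),
             "acc": np.array([0.0, 0.1, 0.0, 0.0, 0.2])}
    init = {"pos": np.array([0.1, 1.0, 2.1, 3.0, 4.0]),
            "vel": np.array([0.9, 1.0, 1.0, 1.0, 1.0]),
            "acc": np.array([0.1, 0.0, 0.0, 0.0, 0.0])}
    k, diff = find_initial_time_point(first, init)
    print("initial time point index:", k)   # index with the smallest weighted difference
```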
  • at S640, forward and reverse state estimation is performed on the target according to the first state data to determine the third estimated state data of the target, where the initial time point is the starting time point of the forward tracking or the reverse tracking.
  • the industrial computer can perform S641 to S643.
  • performing forward state estimation on the target can also be understood as performing state estimation on the state of the target in chronological order.
  • the chronological order may also be referred to as the first chronological order.
  • the industrial computer can use the Kalman filter algorithm to estimate the forward state of the target, as shown in Figure 8.
  • the initial time point is taken as time point t0, and the initial state is taken as the forward estimated state X of the target at time point t0.
  • the forward predicted state of the target at time point t1 can be expressed as F(ΔT)·X, where "·" denotes multiplication, ΔT is the duration between time point t0 and time point t1, time point t1 may be the time point next to time point t0, and F(ΔT) is the state transition matrix used for state estimation in chronological order.
  • the forward estimated state of the target at time point t1 is determined.
  • the forward estimated state of the target at time point t1 may be a weighted average of the state of the target at time point t1 in the first state data and the forward predicted state of the target at time point t1.
  • the industrial computer can repeat S6412 to S6414.
  • when S6412 is performed again, the time point next to time point t1 is taken as the new time point t1.
  • the repetition of S6412 to S6414 may be stopped when time point t1 is the last time point in chronological order in the first state data. Alternatively, after time point t1 reaches the last time point in chronological order in the first state data, S6412 to S6414 may also be repeated for a preset number of times.
  • in this case, the forward predicted state of the target at time point t1 can be taken as the forward estimated state of the target at time point t1.
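The forward pass S6411 to S6414 can be pictured with the following simplified sketch: a constant-velocity transition matrix F(ΔT) predicts the state at the next time point, and the prediction is blended with the measurement from the first state data using a fixed weight standing in for the Kalman gain. The constant-velocity model, the fixed blending weight, and the state layout [x, y, vx, vy] are assumptions, not the filter actually specified by this application.

```python
import numpy as np

def transition_matrix(dt):
    """Constant-velocity state transition matrix F(dt) for state [x, y, vx, vy]."""
    return np.array([[1, 0, dt, 0],
                     [0, 1, 0, dt],
                     [0, 0, 1,  0],
                     [0, 0, 0,  1]], dtype=float)

def forward_estimate(times, measurements, x0, alpha=0.5):
    """Forward pass: predict with F(dt), then blend prediction and measurement.

    times        : 1-D array of acquisition time points
    measurements : (N, 4) array of measured [x, y, vx, vy] from the first state data
    x0           : state at the initial time point (start of forward tracking)
    alpha        : weight of the measurement in the weighted average
    """
    estimates = [np.asarray(x0, dtype=float)]
    for k in range(1, len(times)):
        F = transition_matrix(times[k] - times[k - 1])
        predicted = F @ estimates[-1]                       # forward predicted state at t1
        fused = alpha * measurements[k] + (1 - alpha) * predicted
        estimates.append(fused)                             # forward estimated state at t1
    return np.stack(estimates)

if __name__ == "__main__":
    t = np.arange(0.0, 1.0, 0.1)
    truth = np.stack([2.0 * t, np.zeros_like(t), np.full_like(t, 2.0), np.zeros_like(t)], axis=1)
    meas = truth + np.random.default_rng(0).normal(0, 0.05, truth.shape)
    est = forward_estimate(t, meas, meas[0])
    print(est[-1])    # close to [1.8, 0.0, 2.0, 0.0]
```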
  • at S642, the last collection time point in the first state data is taken as the starting time point, and reverse state estimation is performed on the target according to the first state data of the target at each time point, so as to determine the second estimated state data of the target.
  • Performing reverse state estimation on the target can also be understood as performing state estimation on the state of the target according to a second time sequence opposite to the time sequence.
  • the Kalman filter algorithm can be used to estimate the reverse state of the target, as shown in Figure 9.
  • the i-th collection time point from the end of the first state data is taken as time point t0
  • the forward estimated state of the target at time point t0 is taken as the reverse estimated state at time point t0
  • i is a preset value
  • the last collection time point in the first state data may be taken as the time point t0.
  • taking a time point near the end of the first state data as the starting time point of the state estimation process in reverse time order increases, as much as possible, the number of time points covered by the reverse state estimation.
  • the forward estimated state of the target in the first estimated state data generally converges and is relatively accurate. Taking the forward estimated state of the target at any one of the last several time points in the first state data as the initial state of performing S642 can make the second estimated state data determined in S642 more accurate.
  • time point t1 is the previous time point of the time point t0.
  • F(- ⁇ T) is the state transition matrix used in the reverse state estimation process of the target.
  • the time point t1 is taken as the time point t0.
  • the industrial computer can repeat S6422 to S6424.
  • the repetition of S6422 to S6424 may be stopped when the time point t0 is the last time point along the second time sequence (ie, the first time point along the time sequence) in the first state data.
  • S6422 and S6424 may also be repeated a preset number of times.
  • in this case, the reverse predicted state of the target at time point t1 can be used as the reverse estimated state of the target at time point t1.
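The reverse pass S6421 to S6424 mirrors the forward pass but steps against the time order using F(-ΔT), starting from the forward estimated state near the end of the first state data. The sketch below uses the same simplified constant-velocity model and blending weight as the forward sketch; both are assumptions.

```python
import numpy as np

def cv_transition(dt):
    """Constant-velocity transition matrix for state [x, y, vx, vy]."""
    return np.array([[1, 0, dt, 0],
                     [0, 1, 0, dt],
                     [0, 0, 1,  0],
                     [0, 0, 0,  1]], dtype=float)

def backward_estimate(times, measurements, x_end, alpha=0.5):
    """Reverse pass: step backwards in time with F(-dt) and blend with measurements.

    x_end is the starting state of the reverse tracking, e.g. the forward
    estimated state at the last (or i-th from last) collection time point.
    """
    estimates = [np.asarray(x_end, dtype=float)]
    for k in range(len(times) - 2, -1, -1):
        dt = times[k + 1] - times[k]
        predicted = cv_transition(-dt) @ estimates[-1]   # reverse predicted state
        fused = alpha * measurements[k] + (1 - alpha) * predicted
        estimates.append(fused)                          # reverse estimated state
    return np.stack(estimates[::-1])                     # return in chronological order

if __name__ == "__main__":
    t = np.arange(0.0, 1.0, 0.1)
    truth = np.stack([2.0 * t, np.zeros_like(t), np.full_like(t, 2.0), np.zeros_like(t)], axis=1)
    meas = truth + np.random.default_rng(1).normal(0, 0.05, truth.shape)
    est = backward_estimate(t, meas, meas[-1])
    print(est[0])    # close to [0.0, 0.0, 2.0, 0.0]
```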
  • FIG. 10 shows the time relationship among the first state data, the first estimated state data, the second estimated state data, and the third estimated state data.
  • going to the right along the horizontal direction is the direction of increasing time, that is, the forward direction; going to the left along the horizontal direction is the direction opposite to the order of time, that is, the reverse direction.
  • the first estimated state data may include the forward estimated state of the target in the part of the time period corresponding to the first state data that is after the initial time point (that is, to the right of the initial time point), and the forward estimated state of the target within the first preset duration T1 after the time period corresponding to the first state data.
  • in S641, after time point t1 reaches the last time point in chronological order in the first state data, S6412 to S6414 are repeated for a preset number of times to obtain the forward estimated state of the target within the first preset duration T1.
  • the second estimated state data may include the reverse estimated state of the target within the time period corresponding to the first state data and within a second preset duration T2 before that time period. After time point t0 in S642 reaches the last time point in the first state data along the second time sequence, S6422 to S6424 are repeated for a preset number of times to obtain the reverse estimated state of the target within the second preset duration T2.
  • the preset number of times for repeating S6422 to S6424 may be the same as or different from the preset number of times for repeating S6412 to S6414. That is to say, the second preset duration T2 may be the same as or different from the first preset duration T1.
  • third estimated state data is determined according to the first estimated state data and the second estimated state data.
  • for this overlapping part, the states of the target at each time point in the first estimated state data and in the second estimated state data can be weighted and averaged to obtain the estimated state of the target at each time point in this part.
  • the third estimated state data may include estimated states of the target at multiple acquisition time points.
  • the third estimated state data may also include the estimated state of the target at the time points in the first state data before the initial time point and within the second preset duration before the time period corresponding to the first state data, as well as at the time points within the first preset duration after the time period corresponding to the first state data.
  • for the time points covered only by the forward state estimation, the estimated state of the target may be the forward estimated state of the target.
  • for the time points covered only by the reverse state estimation, the estimated state of the target may be the reverse estimated state of the target.
  • the estimated state of the target at a certain time point during the time period when the target is blocked can be the forward estimated state or the reverse estimated state at that time point, or a weighted average of the forward estimated state and the reverse estimated state.
  • the estimated state of the target in the time period T3 before the target is blocked is a forward estimated state determined by forward state estimation.
  • the estimated state of the target in the time period T4 after the occlusion of the target ends is a reverse estimated state determined by reverse state estimation.
  • the first preset duration T1 corresponding to the forward state estimation for the time period T3 before the target is blocked fully or partially overlaps in time with the second preset duration T2 corresponding to the reverse state estimation for the time period T4 after the occlusion ends. For each time point in the overlapping part, the weighted average of the forward estimated state and the reverse estimated state can be used as the estimated state of the target at this time point.
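Combining the two passes into the third estimated state data can then be as simple as the following sketch: where a time point is covered by both the forward and the reverse estimates, a weighted average is taken; where only one estimate exists, it is used directly. The equal default weights and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def fuse_estimates(forward, backward, w_forward=0.5):
    """Combine forward and reverse estimates into the third estimated state data.

    forward, backward : dicts mapping time point -> state vector (np.ndarray).
    Where both passes provide a state, the weighted average is used; otherwise
    the single available estimate is kept.
    """
    fused = {}
    for t in sorted(set(forward) | set(backward)):
        if t in forward and t in backward:
            fused[t] = w_forward * forward[t] + (1 - w_forward) * backward[t]
        elif t in forward:
            fused[t] = forward[t]
        else:
            fused[t] = backward[t]
    return fused

if __name__ == "__main__":
    fwd = {0.1: np.array([1.0, 2.0]), 0.2: np.array([1.5, 2.0]), 0.3: np.array([2.1, 2.0])}
    bwd = {0.2: np.array([1.7, 2.0]), 0.3: np.array([2.0, 2.0]), 0.4: np.array([2.5, 2.0])}
    for t, s in fuse_estimates(fwd, bwd).items():
        print(t, s)
```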
  • S650 may be performed.
  • S650 may include S651 to S653.
  • a Frenet coordinate system is established according to the vehicle trajectory reference line, and the road information and the start state and end state of the target are represented by the Frenet coordinate system.
  • the vehicle trajectory reference line can also be understood as the lane centerline.
  • the Frenet coordinate system can be understood as a Cartesian rectangular coordinate system with the vehicle trajectory reference line as the horizontal axis and the direction perpendicular to the lane centerline as the vertical axis.
  • the estimated state of the target at the beginning and end of the occlusion time period may be represented by a Cartesian coordinate system.
  • the state of the target can be expressed as (x, y, ⁇ , v, a) T in the Cartesian coordinate system, where x and y are used to represent the coordinates of the target in the Cartesian coordinate system, and ⁇ is used to represent the heading of the target Angle (that is, the direction of velocity), v is used to represent the speed of the target, and a is used to represent the acceleration of the target.
  • in the Frenet coordinate system, the state of the target can be expressed as a longitudinal state (s, ṡ, s̈) and a lateral state (l, l′, l″), where s represents the longitudinal coordinate of the target in the Frenet coordinate system; ṡ represents the differential of the longitudinal coordinate with respect to time (or can be understood as the derivative with respect to time), that is, the longitudinal velocity of the target; s̈ represents the second derivative of the longitudinal coordinate with respect to time, that is, the longitudinal acceleration; l represents the lateral coordinate of the target in the Frenet coordinate system; l′ represents the differential of the lateral coordinate with respect to s, which is used to represent the lateral velocity; and l″ represents the second derivative of the lateral coordinate with respect to s, which is used to represent the lateral acceleration.
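For the conversion into the Frenet coordinate system, the following simplified sketch projects a Cartesian position onto a piecewise-linear vehicle trajectory reference line and returns the longitudinal arc length s and the signed lateral offset l. A full conversion of velocity and acceleration additionally needs the heading and curvature of the reference line, which are omitted here; the function name and the polyline representation are assumptions.

```python
import numpy as np

def cartesian_to_frenet(point, reference_line):
    """Project a Cartesian point onto a piecewise-linear reference line.

    point          : (2,) array, the target position (x, y)
    reference_line : (M, 2) array of reference-line points ordered along the lane
    Returns (s, l): arc length along the line and signed lateral offset.
    """
    point = np.asarray(point, dtype=float)
    ref = np.asarray(reference_line, dtype=float)
    seg_vec = ref[1:] - ref[:-1]
    seg_len = np.linalg.norm(seg_vec, axis=1)        # assumed non-degenerate segments
    cum_s = np.concatenate([[0.0], np.cumsum(seg_len)])

    best = (np.inf, 0.0, 0.0)                        # (distance, s, l)
    for i, (p0, v, L) in enumerate(zip(ref[:-1], seg_vec, seg_len)):
        t = np.clip(np.dot(point - p0, v) / (L * L), 0.0, 1.0)
        foot = p0 + t * v                            # closest point on this segment
        d = np.linalg.norm(point - foot)
        if d < best[0]:
            # sign of the lateral offset: left of the driving direction is positive
            cross = v[0] * (point - p0)[1] - v[1] * (point - p0)[0]
            sign = 1.0 if cross >= 0 else -1.0
            best = (d, cum_s[i] + t * L, sign * d)
    return best[1], best[2]

if __name__ == "__main__":
    centerline = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
    print(cartesian_to_frenet([12.0, 1.5], centerline))   # approximately (12.0, 1.5)
```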
  • Other road information can also be represented by the Frenet coordinate system, such as the area where other traffic participants except the target are located.
  • Other traffic participants other than the target may include the own vehicle, other vehicles, pedestrians, obstacles, etc.
  • the space occupancy map can be used to represent the area where other traffic participants except the target are located in the Frenet coordinate system. Different time points may correspond to different space occupancy maps.
  • a space occupancy map may also be called an occupancy grid map (OGM).
  • the space occupancy map can include multiple grids, each grid being used to represent a position in the surrounding environment of the target; an occupied grid indicates that there is another traffic participant at the position corresponding to the grid, whereas an unoccupied grid indicates that there is no other traffic participant at the position corresponding to the grid.
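A space occupancy map for one time point can be sketched as a boolean grid over the Frenet (s, l) plane, as below. The grid resolution, extents, and the rectangular footprint used to mark other traffic participants are assumptions for illustration.

```python
import numpy as np

class OccupancyGrid:
    """Occupancy grid map over the Frenet (s, l) plane for one time point."""

    def __init__(self, s_max=100.0, l_max=8.0, cell=1.0):
        self.cell = cell
        self.grid = np.zeros((int(s_max / cell), int(2 * l_max / cell)), dtype=bool)
        self.l_offset = l_max                       # shift so l = -l_max maps to column 0

    def mark_occupied(self, s_range, l_range):
        """Mark all cells covered by another traffic participant as occupied."""
        s0, s1 = (int(v / self.cell) for v in s_range)
        c0, c1 = (int((v + self.l_offset) / self.cell) for v in l_range)
        self.grid[s0:s1 + 1, c0:c1 + 1] = True

    def is_occupied(self, s, l):
        return bool(self.grid[int(s / self.cell), int((l + self.l_offset) / self.cell)])

if __name__ == "__main__":
    ogm = OccupancyGrid()
    ogm.mark_occupied(s_range=(30.0, 35.0), l_range=(-1.0, 1.0))   # e.g. another vehicle
    print(ogm.is_occupied(32.0, 0.0), ogm.is_occupied(50.0, 0.0))  # True False
```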
  • a trajectory bundle is generated based on lattice sampling.
  • a trajectory bundle includes multiple trajectories, and each trajectory can be expressed as a corresponding relationship between the position and time of the target.
  • the initial state of each trajectory is the start state of the target represented by Frenet coordinates at the beginning of the occlusion period
  • the end state of each trajectory is the end state of the target represented by Frenet coordinates at the end of the occlusion period.
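The lattice-based generation of a trajectory bundle can be sketched as follows: the start and end states in Frenet coordinates fix the boundary conditions, quintic polynomials connect them, and a small lattice of perturbations produces multiple candidates. Treating the lateral coordinate as a function of time, perturbing only the lateral end offset, and the quintic form are simplifying assumptions; a real sampler would typically vary more dimensions.

```python
import numpy as np

def quintic(p0, v0, a0, p1, v1, a1, T):
    """Coefficients (lowest order first) of a quintic polynomial matching
    position/velocity/acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    return np.linalg.solve(A, np.array([p0, v0, a0, p1, v1, a1], dtype=float))

def trajectory_bundle(start, end, T, dt=0.1, lateral_offsets=(-0.5, 0.0, 0.5)):
    """Generate candidate (s(t), l(t)) trajectories between start and end states.

    start, end : dicts with keys s, s_dot, s_ddot, l, l_prime, l_pprime
    """
    t = np.arange(0.0, T + dt, dt)
    cs = quintic(start["s"], start["s_dot"], start["s_ddot"],
                 end["s"], end["s_dot"], end["s_ddot"], T)
    bundle = []
    for dl in lateral_offsets:
        cl = quintic(start["l"], start["l_prime"], start["l_pprime"],
                     end["l"] + dl, end["l_prime"], end["l_pprime"], T)
        s_t = sum(c * t**i for i, c in enumerate(cs))
        l_t = sum(c * t**i for i, c in enumerate(cl))
        bundle.append(np.stack([t, s_t, l_t], axis=1))   # columns: time, s, l
    return bundle

if __name__ == "__main__":
    start = dict(s=0.0, s_dot=10.0, s_ddot=0.0, l=0.0, l_prime=0.0, l_pprime=0.0)
    end = dict(s=30.0, s_dot=10.0, s_ddot=0.0, l=0.0, l_prime=0.0, l_pprime=0.0)
    print(len(trajectory_bundle(start, end, T=3.0)), "candidate trajectories")
```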
  • the cost value of each trajectory can be calculated.
  • the loss function cost can be expressed as the sum of multiple sub-functions.
  • the multiple sub-functions include one or more of: a sub-function cost_lon used to represent the longitudinal loss, a sub-function cost_lat used to represent the lateral loss, a sub-function cost_safe used to represent the collision safety loss, a sub-function cost_jerk used to represent the jerk loss, a sub-function cost_cen used to represent the centripetal acceleration loss, a sub-function cost_latcom used to represent the lateral acceleration loss, a sub-function cost_comfort used to represent the comfort loss, and a sub-function cost_his used to represent the driving history state loss.
  • the sub-function cost_lon used to represent the longitudinal loss can be expressed as a combination of a speed loss term cost_speed and a distance loss term cost_dist, where a and b are preset coefficients.
  • cost_speed is used to represent the speed loss of the trajectory and is negatively correlated with the average speed of the trajectory; cost_dist is used to represent the lateral distance loss of the trajectory and is positively correlated with the total lateral distance of the trajectory.
  • in this expression, the symbol "+" denotes addition, and a fraction denotes dividing the numerator by the denominator; the calculation result of the fraction may be expressed in the form of an integer, a fraction, or a decimal, for example with 1 or 2 decimal places.
  • the sub-function cost_lat used to represent the lateral loss can be expressed as cost_lat = Σ_{t=0}^{T} s_latoffset(t), where T is the number of time points in the trajectory, s_latoffset(t) is used to represent the offset between the trajectory and the road centerline at time t, and Σ_{t=0}^{T} F(t) denotes the accumulation of the values of F(t) for t from 0 to T.
  • the sub-function cost_safe used to represent the collision safety loss can be expressed as cost_safe = Σ_{t=0}^{T} Σ_{i=1}^{N} cost_c(i, t), where N is used to represent the number of objects and cost_c(i, t) is used to represent the loss of the trajectory colliding with the i-th object at time point t.
  • a set of contiguous occupied grids can be understood as one object.
  • the loss cost_c(i, t) can be 0 or a preset value.
  • if, at the position corresponding to time point t in the trajectory, the contour of the target intersects the area where the i-th object is located, the target traveling along this trajectory collides with the i-th object at time point t, and cost_c(i, t) is the preset value; if the contour of the target does not intersect the area where the i-th object is located, the target traveling along this trajectory does not collide with the i-th object at time point t, and cost_c(i, t) is 0.
  • the preset value can be set to be much larger than the values of the other sub-functions, so that the value of the loss function cost of a trajectory that makes the target collide with any object is much larger than the value of the loss function cost of a trajectory that does not make the target collide with any object.
  • the sub-function cost_jerk used to represent the jerk loss can be expressed, for example, as cost_jerk = Σ_{t=0}^{T} d_jerk(t)/d_jerk_upper, where d_jerk(t) is used to represent the jerk when driving according to the trajectory at time t, and d_jerk_upper is a preset jerk, which can represent the maximum value of the jerk.
  • Jerk is the derivative of acceleration with respect to time.
  • the sub-function cost_cen used to represent the centripetal acceleration loss can be expressed as a function of the centripetal acceleration when driving according to the trajectory at each time point, where k is a preset coefficient and v is used to represent the speed when driving according to the trajectory at time t.
  • the sub-function cost_latcom used to represent the lateral acceleration loss can be expressed, analogously, as a function of the lateral acceleration when driving according to the trajectory at each time point.
  • the sub-function cost_comfort used to represent the comfort loss can be expressed as cost_comfort = cost_jerk + cost_cen + cost_latcom.
  • the sub-function cost_his used to represent the driving history state loss can be expressed as an accumulation, over the time points of the trajectory, of the difference between state(t) and state_0, for example Σ_{t=0}^{T} |state(t) − state_0|.
  • state(t) is used to represent the state characteristic of driving according to the trajectory at time t, and can be determined according to at least one of the speed, acceleration, jerk, etc. of the target driving according to the trajectory at time t.
  • state_0 is used to represent a state characteristic determined from the historical states of the target; state_0 may be determined according to the estimated state of the target at each acquisition time point.
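The sub-functions can then be combined into a single loss value per candidate trajectory. The sketch below uses simple stand-ins for the sub-functions described above (cost_lon, cost_lat, cost_safe, cost_jerk, cost_his); the concrete formulas, the weights, and the hypothetical occupancy callback are assumptions, not the exact expressions of this application.

```python
import numpy as np

def trajectory_cost(traj, occupied, state_hist, d_jerk_upper=5.0, collision_penalty=1e6):
    """Loss of one candidate trajectory: sum of longitudinal, lateral, safety,
    jerk and history sub-costs (simplified stand-ins for the patent's sub-functions).

    traj       : (N, 3) array with columns time, s, l (constant time step)
    occupied   : callable (s, l, t) -> bool, backed by the occupancy grid maps
    state_hist : reference speed derived from the target's historical states
    """
    t, s, l = traj[:, 0], traj[:, 1], traj[:, 2]
    dt = t[1] - t[0]
    v = np.gradient(s, dt)
    a = np.gradient(v, dt)
    jerk = np.gradient(a, dt)

    cost_lon = 1.0 / (np.mean(v) + 1e-6)              # slower trajectories cost more
    cost_lat = np.sum(np.abs(l))                       # offset from the centerline
    cost_safe = sum(collision_penalty for si, li, ti in zip(s, l, t) if occupied(si, li, ti))
    cost_jerk = np.sum(np.abs(jerk) / d_jerk_upper)
    cost_his = np.sum(np.abs(v - state_hist))          # deviation from historical speed
    return cost_lon + cost_lat + cost_safe + cost_jerk + cost_his

if __name__ == "__main__":
    t = np.arange(0.0, 3.0, 0.1)
    traj = np.stack([t, 10.0 * t, np.zeros_like(t)], axis=1)
    free = lambda s, l, t: False                       # no occupied cells in this toy case
    print(trajectory_cost(traj, free, state_hist=10.0))
```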
  • the third estimated state data is updated according to the target trajectory.
  • the updated third estimated state data may be determined according to the third estimated state data determined in S640 and the target trajectory.
  • the estimated state of the target during the time period when the target is blocked is the state of the target in the case of driving along the target track.
  • the target trajectory is used to indicate the estimated state of the target at various supplementary time points in the time interval. Therefore, according to the target trajectory, the third estimated state data can be updated, and the updated third estimated state data includes the estimated state of the target indicated by the target trajectory at each supplementary time point.
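Selecting the target trajectory and updating the third estimated state data can be sketched as follows, reusing the cost function and trajectory bundle sketched above: the candidate with the smallest loss is chosen, and its states at the supplementary time points inside the occluded interval are added to the third estimated state data. The data layout is an assumption.

```python
def update_with_target_trajectory(third_estimated, bundle, cost_fn):
    """Pick the minimum-cost trajectory and add its states at the supplementary
    time points (the occluded interval) to the third estimated state data.

    third_estimated : dict time point -> state, covering the un-occluded parts
    bundle          : list of candidate trajectories, each an (N, 3) array (t, s, l)
    cost_fn         : callable returning the loss value of one candidate
    """
    target_traj = min(bundle, key=cost_fn)
    updated = dict(third_estimated)
    for t, s, l in target_traj:
        if t not in updated:                  # only the supplementary time points
            updated[t] = (s, l)               # time points are assumed to align exactly
    return updated, target_traj
```

In practice cost_fn would be a closure binding the occupancy grids and the historical state features, for example `lambda tr: trajectory_cost(tr, ogm_lookup, state_hist)`, where `ogm_lookup` and `state_hist` are hypothetical names for the inputs described above.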
  • the loss function cost can be expressed as the sum of multiple sub-functions, wherein the sub-function cost_his representing the driving history state loss is used to represent the difference between the state characteristics of the trajectory and the historical state characteristics of the target.
  • Fig. 13 is a schematic structural diagram of a state estimation device provided by an embodiment of the present application.
  • the state estimation device 2000 includes an acquisition module 2010 and a processing module 2020 .
  • the obtaining module 2010 is configured to obtain the first state data of the target within the first time period, where the first state data is associated with the first collected state data, the first collected state data includes data of the target collected by the first sensor during the first time period, and the data of the target is at least associated with the first state of the target within the first time period.
  • the processing module 2020 is configured to perform state estimation according to the first time sequence according to the first state data, and obtain the first estimated state data corresponding to the second time period.
  • the processing module 2020 is further configured to, according to the first state data, perform state estimation according to a second time sequence opposite to the first time sequence, to obtain second state estimation data corresponding to a third time period.
  • the second time period overlaps with the third time period.
  • the processing module 2020 is further configured to determine third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first time starting point of the second time period is an initial time point.
  • the processing module 2020 is further configured to, according to the first state data, determine initial state data, where the initial state data is used to represent the initial estimated state of the target within the first time period;
  • the processing module 2020 is further configured to determine, according to the first state data and the initial state data, the initial time point, where the difference between the first state and the initial estimated state at the initial time point is less than a preset value.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation on a second time starting point of the third time period according to the second time sequence.
  • the second time starting point of the third time period is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to the state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; or, the second state estimation data includes third data, the third data corresponds to the state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the processing module 2020 is further configured to determine fourth state estimation data according to second state data corresponding to a fourth time period, where the second state data is the first state data or the third state estimation data, the fourth time period includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, the time interval does not belong to the fourth time period, and the fourth state estimation data is used to estimate the second state of the target in the time interval.
  • the processing module 2020 is specifically configured to determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to indicate the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data.
  • the processing module 2020 is further configured to determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • the acquisition module 2010 is configured to acquire the first state data of the target within the first time period, where the first state data is associated with the first collected state data, the first collected state data includes data of the target collected by the first sensor during the first time period, and the data of the target is associated with at least the first state of the target during the first time period.
  • the processing module 2020 is configured to determine initial state data according to the first state data, where the initial state data is used to represent an initial estimated state of the target within the first time period.
  • the processing module 2020 is further configured to, according to the first state data and the initial state data, determine an initial time point, at which the difference between the first state and the initial estimated state is less than a preset value.
  • the processing module 2020 is further configured to, according to the first state data, use the initial time point as a first time starting point to perform state estimation of the target.
  • the processing module 2020 is specifically configured to perform state estimation according to the first time sequence according to the first state data, to obtain the first estimated state data corresponding to the second time period, and the first time starting point is the The starting point of the second time period along the first time sequence.
  • the processing module 2020 is further configured to, according to the first state data, perform state estimation according to a second time sequence opposite to the first time sequence, to obtain second state estimation data corresponding to a third time period.
  • the second time period overlaps with the third time period.
  • the processing module 2020 is further configured to determine third state estimation data according to the first state estimation data and the second state estimation data, and the third state estimation data is used to estimate the first state.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation on a second time starting point of the third time period according to the second time sequence.
  • the second time starting point of the third time period is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to the state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; or, the second state estimation data includes third data, the third data corresponds to the state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the processing module 2020 is further configured to determine fourth state estimation data according to second state data corresponding to a fourth time period, where the second state data is the first state data or the third state estimation data, the fourth time period includes a first sub-time period and a second sub-time period, there is a time interval between the first sub-time period and the second sub-time period, the time interval does not belong to the fourth time period, and the fourth state estimation data is used to estimate the second state of the target in the time interval.
  • the processing module 2020 is specifically configured to determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to indicate the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data.
  • the processing module 2020 is further configured to determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • the acquiring module 2010 is configured to acquire second state data, where the second state data is associated with the first collected state data, the first collected state data includes data of the target collected by the first sensor during the first time period, the data of the target is at least associated with the first state of the target within the first time period, the fourth time period corresponding to the second state data includes a first sub-time period and a second sub-time period, a time interval exists between the first sub-time period and the second sub-time period, and the time interval does not belong to the fourth time period.
  • the processing module 2020 is configured to determine fourth state estimation data according to the second state data, where the fourth state estimation data is used to estimate a second state of the target in the time interval.
  • the processing module 2020 is specifically configured to determine at least one supplementary state data set, where the start data and end data included in each supplementary state data set in the at least one supplementary state data set are determined according to the second state data, each supplementary state data set corresponds to a difference parameter, and the difference parameter of each supplementary state data set is used to indicate the difference between the state corresponding to that supplementary state data set and the state corresponding to the second state data.
  • the processing module 2020 is further configured to determine the fourth state estimation data according to a loss function of each supplementary state data set in the at least one supplementary state data set, where the loss function of each supplementary state data set includes the difference parameter of that supplementary state data set and is positively correlated with it.
  • the processing module 2020 is specifically configured to perform state estimation according to the first time sequence according to the first state data, to obtain first estimated state data corresponding to a second time period, where the first state data is associated with the first collected state data.
  • the processing module 2020 is further configured to, according to the first state data, perform state estimation according to a second time sequence opposite to the first time sequence, to obtain second state estimation data corresponding to a third time period.
  • the second time period overlaps with the third time period.
  • the processing module 2020 is further configured to determine the third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.
  • the first time starting point of the second time period is an initial time point.
  • the processing module 2020 is further configured to determine initial state data according to the first state data, where the initial state data is used to represent an initial estimated state of the target within the first time period.
  • the processing module 2020 is further configured to determine, according to the first state data and the initial state data, the initial time point, where the difference between the first state and the initial estimated state at the initial time point is less than a preset value.
  • the first estimated state data includes first data, and the first data is input data for performing state estimation on a second time starting point of the third time period according to the second time sequence.
  • the second time starting point of the third time period is the last time point of the first time period along the first time sequence.
  • the first state estimation data includes second data, the second data corresponds to the state estimation result of the target within a first preset duration, and the first preset duration is a period of time after the first time period along the first time sequence; or, the second state estimation data includes third data, the third data corresponds to the state estimation result of the target within a second preset duration, and the second preset duration is a period of time after the first time period along the second time sequence.
  • the second state data is third state estimation data.
  • the processing module 2020 is further configured to determine initial state data according to the first state data, where the initial state data is used to represent the initial estimated state of the target within the first time period, and the first state data is associated with the first collected state data.
  • the processing module 2020 is further configured to, according to the first state data and the initial state data, determine an initial time point, at which the difference between the first state and the initial estimated state is less than a preset value.
  • the processing module 2020 is further configured to, according to the first state data, use the initial time point as the first time starting point to perform state estimation of the target, so as to determine the third state estimation data, where the third state estimation data is used to estimate the first state.
  • Fig. 14 is a schematic structural diagram of a state estimation apparatus 3000 provided by an embodiment of the present application.
  • the state estimation device 3000 may include at least one processor 3010 and a memory 3020; the memory 3020 may be used to store program instructions, and when the program instructions are executed by the at least one processor 3010, the state estimation device 3000 implements the steps, methods, operations, or functions performed by the state estimation devices described above.
  • the processor is a circuit with signal processing capabilities.
  • in one implementation, the processor may be a circuit with instruction reading and execution capabilities, such as a CPU, a microprocessor, a GPU (which can be understood as a kind of microprocessor), or a DSP; in another implementation, the processor can realize certain functions through the logic relationships of a hardware circuit, where the logic relationships of the hardware circuit are fixed or reconfigurable, for example a hardware circuit implemented by an ASIC or a PLD, such as an FPGA.
  • the process of the processor loading a configuration file to configure the hardware circuit can be understood as the process of the processor loading instructions to realize the functions of some or all of the above units.
  • the processor may also be a hardware circuit designed for artificial intelligence, which can be understood as a kind of ASIC, such as an NPU, a TPU, or a DPU.
  • each unit in the above device can be one or more processors (or processing circuits) configured to implement the above method, for example: CPU, GPU, NPU, TPU, DPU, microprocessor, DSP, ASIC, FPGA , or a combination of at least two of these processor forms.
  • another implementation form is a system-on-a-chip (SOC).
  • the SOC may include at least one processor for implementing any of the above methods or realizing the functions of each unit of the device.
  • the at least one processor may be of different types, such as including CPU and FPGA, CPU and artificial intelligence processor, CPUs and GPUs, etc.
  • the embodiment of the present application also provides an industrial computer, which includes the aforementioned state estimation device.
  • An embodiment of the present application further provides a computer program storage medium, wherein the computer program storage medium has program instructions, and when the program instructions are executed, the foregoing method is executed.
  • An embodiment of the present application further provides a chip, which is characterized in that the chip includes at least one processor, and when program instructions are executed on the at least one processor, the foregoing method is executed.
  • the division of units in the above device is only a division of logical functions, and may be fully or partially integrated into one physical entity or physically separated during actual implementation.
  • the units in the device can be implemented in the form of a processor calling software; for example, the device includes a processor, the processor is connected to a memory, instructions are stored in the memory, and the processor calls the instructions stored in the memory to implement any of the above methods Or realize the functions of each unit of the device, wherein the processor is, for example, a general-purpose processor, such as a CPU or a microprocessor, and the memory is a memory in the device or a memory outside the device.
  • the units in the device may be implemented in the form of hardware circuits, and part or all of the functions of the units may be realized through the design of the hardware circuits.
  • the hardware circuits may be understood as one or more processors. For example, in one implementation, the hardware circuit is an ASIC, and the functions of some or all of the above units are realized through the design of the logic relationships among the elements in the circuit; in another implementation, the hardware circuit can be implemented by a programmable logic device, taking the field programmable gate array (FPGA) as an example, which can include a large number of logic gate circuits whose connection relationships are configured through a configuration file, so as to realize the functions of some or all of the above units.
  • all the units of the above device can be realized in the form of software called by the processor, or in the form of hardware circuits, or some of the units in the form of software called by the processor and the remaining units in the form of hardware circuits.
  • "at least one” means one or more, and “multiple” means two or more.
  • “And/or” describes the association relationship of associated objects, indicating that there may be three kinds of relationships, for example, A and/or B may indicate that A exists alone, A and B exist simultaneously, or B exists alone. Among them, A and B can be singular or plural.
  • the character “/” generally indicates that the contextual objects are an “or” relationship.
  • “At least one of the following” and similar expressions refer to any combination of these items, including any combination of single items or plural items.
  • At least one of a, b and c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c can be single or multiple.
  • Prefixes such as “first” and “second” are used in the embodiments of this application only to distinguish different description objects, and have no limiting effect on the position, order, priority, quantity or content of the described objects.
  • the described object is "status data”
  • the ordinal number before “status data” in “first status data” and “second status data” does not limit the position or order or priority of "status data”
  • the described object is "time period”
  • the ordinal number before “time period” in “first time period” and “second time period” does not limit the position or order between "time period” or priority.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Abstract

A state estimation method and apparatus, capable of improving the accuracy of state estimation results. The state estimation method includes: acquiring first state data of a target within a first time period, where the first state data is associated with first collected state data, the first collected state data includes data of the target collected by a first sensor within the first time period, and the data of the target is associated with at least a first state of the target within the first time period; performing state estimation in a first time order according to the first state data to obtain first estimated state data corresponding to a second time period; performing state estimation according to the first state data in a second time order opposite to the first time order to obtain second state estimation data corresponding to a third time period, where the second time period overlaps with the third time period; and determining third state estimation data according to the first state estimation data and the second state estimation data, where the third state estimation data is used to estimate the first state.

Description

一种状态估计方法和装置 技术领域
本申请涉及智能驾驶领域,具体涉及一种状态估计方法和装置。
背景技术
随着社会的发展,智能运输设备、智能家居设备、机器人等智能终端正在逐步进入人们的日常生活中。传感器在智能终端上发挥着十分重要的作用。安装在智能终端上的各式各样的传感器,比如毫米波雷达,激光雷达,摄像头,超声波雷达等,在智能终端的运动过程中感知周围的环境,收集数据,进行移动物体的辨识,以及静止场景如车道线、标示牌的识别,并结合导航仪及地图数据进行路径规划。传感器可以预先察觉到可能发生的危险并辅助甚至自主采取必要的规避手段,有效增加了智能终端的安全性和舒适性。
智能驾驶技术包括感知、决策、控制等阶段。感知模块是智能车辆的“眼睛”,感知模块接收周围环境信息,通过机器学习技术了解认知所处的环境。感知模块可以根据各个传感器的输出,对其他交通参与者进行状态估计,实现对其他各个交通参与者的跟踪。决策模块利用感知模块输出的信息,对交通参与者的行为进行预测,从而对自车进行行为决策。控制模块根据决策模块的输出计算车辆的横向加速度和纵向加速度,控制自车通行。
感知模块对各个交通参与者进行状态估计的状态估计结果是对车辆中各个传感器采集的数据进行处理得到的。考虑到车辆的成本,通常在车辆中设置性价比高的传感器。但是,性价比高的传感器对周围环境进行感知的精度可能较低。
可以在车辆顶部设置高精度传感器。对高精度传感器采集的数据进行处理,处理结果可以用于对车辆中感知模块确定的状态估计结果的准确度进行评估。
对高精度传感器采集的数据进行处理的方式,影响着处理结果的准确度,从而影响对感知模块的评估结果的准确度。
发明内容
本申请提供一种状态估计方法和装置,能够提高状态估计结果的准确度。
第一方面,提供一种状态估计方法,包括:获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据;根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
根据目标的第一状态数据,进行在时间顺序上相反的两次状态估计,并根据该两次状 态估计的结果确定用于估计目标的状态的第三状态估计数据,该两次状态估计的结果中的误差中的部分或全部能够相互抵消,从而提高第三状态估计数据的准确度。
结合第一方面,在一些可能的实现方式中,所述第二时间段的第一时间起点为初始时间点,所述方法还包括:根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
对目标进行状态估计,目标的初始状态的准确度对状态估计的结果的准确度产生影响。在第一状态和初始估计状态差异较大的情况下,第一状态和初始估计状态中的至少一个与目标的真实状态存在较大差异。将第一状态和初始估计状态之间的差异小于预设值的时间点为初始时间点,可以认为在初始时间点目标的第一状态和初始估计状态均是收敛的,与目标的真实状态的差异较小。因此,将初始时间点作为第二时间段的第一时间起点进行状态估计,可以使得第一估计状态数据准确度更高。
结合第一方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
与目标在第一时间段内的第一状态相比,在第一估计状态数据中选取的第一数据表示的状态更为准确。因此,与第一状态相比,将第一估计状态数据中的第一数据作为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据,使得第二估计状态数据的准确度更高。
结合第一方面,在一些可能的实现方式中,所述第三时间段的第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
第一估计状态数据对应的第二时间段与第二状态估计数据对应的第三时间段重合的时间段的长度对第三状态估计数据的准确度产生影响。尽可能延长第二时间段与第三时间段重合时间长度,可以使得第三状态估计数据的准确度更高。
结合第一方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;和/或,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
第三状态估计数据是根据第一估计状态数据和第二估计状态数据确定的。从而,第三状态估计数据可以超出第一时间段。
结合第一方面,在一些可能的实现方式中,根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
根据目标在第四时间段的第二状态数据,确定目标在时间间隔的第二状态,从而可以确定目标在第四时间段以及时间间隔构成的连续时间区域内的状态,得到的目标的状态估 计结果更完整。
结合第一方面,在一些可能的实现方式中,所述根据对应于第四时间段的第二状态数据,确定所述第四状态估计数据,包括:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
在计算损失函数时,考虑目标在时间间隔的状态与第四时间段中的状态之间的差异,确定的第四状态估计数据表示的目标在时间间隔的状态更加符合目标在第四时间段的状态,使得第四状态估计数据更加合理和准确。
第二方面,提供一种状态估计方法,包括:获取目标在第一时间段内的第一状态数据,所述第一状态数据与所述第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值;根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计。
在第一状态和初始估计状态差异较大的情况下,第一状态和初始估计状态中的至少一个与目标的真实状态存在较大差异。将第一状态和初始估计状态之间的差异小于预设值的时间点为初始时间点,可以认为在初始时间点目标的第一状态和初始估计状态均是收敛的,与目标的真实状态的差异较小。通过将使得第一状态和所述初始估计状态之间的差异小于预设值的时间点作为对目标进行状态估计的初始时间点,可以提高状态估计结果的准确度。
结合第二方面,在一些可能的实现方式中,所述根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计,包括:根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一时间起点为所述第二时间段沿所述第一时间顺序的起点;根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述三状态估计数据用于估计所述第一状态。
结合第二方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
结合第二方面,在一些可能的实现方式中,所述第三时间段的第二时间起点为所述第一时间段沿第一时间顺序的最后一个时间点。
结合第二方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;和/或,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
结合第二方面,在一些可能的实现方式中,所述方法还包括:根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
结合第二方面,在一些可能的实现方式中,所述根据对应于第四时间段的第二状态数据,确定所述第四状态估计数据,包括:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
第三方面,提供一种状态估计方法,包括:获取第二状态数据,所述第二状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联,所述第二状态数据对应的第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段;根据所述第二状态数据,确定第四状态估计数据,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
根据目标在第四时间段的第二状态数据,确定目标在时间间隔的第二状态,从而使得基于第一传感器得到的状态数据更为完整,提高基于第一传感器得到的状态数据的准确度。
结合第三方面,在一些可能的实现方式中,所述根据所述第二状态数据,确定第四状态估计数据,包括:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
结合第三方面,在一些可能的实现方式中,所述第二状态数据为第三状态估计数据,所述获取第二状态数据,包括:根据所述第一状态数据,按照第一时间顺序进行状态估计, 得到对应于第二时间段的第一估计状态数据,所述第一状态数据与所述第一采集状态数据相关联;根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;根据所述第一状态估计数据和所述第二状态估计数据,确定所述第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
结合第三方面,在一些可能的实现方式中,所述第二时间段的第一时间起点为初始时间点,所述方法还包括:根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
结合第三方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
结合第三方面,在一些可能的实现方式中,所述第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
结合第三方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;或者,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
结合第三方面,在一些可能的实现方式中,所述第二状态数据为第三状态估计数据,所述获取第二状态数据,包括:根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态,所述第一状态数据与所述第一采集状态数据相关联;根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值;根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计,以确定所述第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
第四方面,提供一种状态估计装置,包括获取模块和处理模块;所述获取模块用于,获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;所述处理模块用于,根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据;所述处理模块还用于,根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;所述处理模块还用于,根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
结合第四方面,在一些可能的实现方式中,所述第二时间段的第一时间起点为初始时间点,所述处理模块还用于:根据所述第一状态数据,确定初始状态数据,所述初始状态 数据用于表示所述目标在所述第一时间段内的初始估计状态;根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
结合第四方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
结合第四方面,在一些可能的实现方式中,所述第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
结合第四方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;和/或,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
结合第四方面,在一些可能的实现方式中,所述处理模块还用于,根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
结合第四方面,在一些可能的实现方式中,所述处理模块具体用于:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
第五方面,提供.一种状态估计装置,其特征在于,包括:获取模块和处理模块;所述获取模块用于,获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;所述处理模块用于,根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;所述处理模块还用于,根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值;所述处理模块还用于,根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计。
结合第五方面,在一些可能的实现方式中,所述处理模块具体用于:根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一时间起点为所述第二时间段沿所述第一时间顺序的起点;根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段 的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述三状态估计数据用于估计所述第一状态。
结合第五方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
结合第五方面,在一些可能的实现方式中,所述第二时间起点为所述第一时间段沿第一时间顺序的最后一个时间点。
结合第五方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;和/或,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
结合第五方面,在一些可能的实现方式中,所述处理模块还用于,根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
结合第五方面,在一些可能的实现方式中,所述处理模块具体用于:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
第六方面,提供一种状态估计装置,其特征在于,包括:获取模块和处理模块;所述获取模块用于,获取第二状态数据,所述第二状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联,所述第二状态数据对应的第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段;所述处理模块用于,根据所述第二状态数据,确定第四状态估计数据,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
结合第六方面,在一些可能的实现方式中,所述处理模块具体用于:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合 中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
结合第六方面,在一些可能的实现方式中,所述第二状态数据为第三状态估计数据,所述获取模块具体用于:根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一状态数据与所述第一采集状态数据相关联;根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;根据所述第一状态估计数据和所述第二状态估计数据,确定所述第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
结合第六方面,在一些可能的实现方式中,所述第二时间段的第一时间起点为初始时间点,所述获取模块具体用于:根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
结合第六方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
结合第六方面,在一些可能的实现方式中,所述第三时间段的第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
结合第六方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;和/或,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
结合第六方面,在一些可能的实现方式中,所述第二状态数据为第三状态估计数据,所述获取模块具体用于:根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态,所述第一状态数据与所述第一采集状态数据相关联;根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值;根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计,以确定所述第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
第七方面,提供一种状态估计装置,包括存储器和处理器,存储器用于存储程序指令,当所述程序指令在所述处理器中执行时,所述处理器用于获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据;根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状 态估计数据,所述第二时间段与所述第三时间段存在重合;根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
结合第七方面,在一些可能的实现方式中,所述第二时间段的第一时间起点为初始时间点,所述处理器还用于:根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
结合第七方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
结合第七方面,在一些可能的实现方式中,所述第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
结合第七方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;或者,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
结合第七方面,在一些可能的实现方式中,所述处理器还用于,根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
结合第七方面,在一些可能的实现方式中,所述处理器具体用于:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
第八方面,提供一种状态估计装置,包括:存储器和处理器,所述存储器用于存储程序指令,当所述程序指令在所述处理器中执行时,所述处理器用于:获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值;根据所述第一状态数据,以所述初始时 间点为第一时间起点,进行所述目标的状态估计。
结合第八方面,在一些可能的实现方式中,所述处理器具体用于:根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一时间起点为所述第二时间段沿所述第一时间顺序的起点;根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述三状态估计数据用于估计所述第一状态。
结合第八方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
结合第八方面,在一些可能的实现方式中,所述第二时间起点为所述第一时间段沿第一时间顺序的最后一个时间点。
结合第八方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;和/或,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
结合第八方面,在一些可能的实现方式中,所述处理器还用于:根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
结合第八方面,在一些可能的实现方式中,所述处理器具体用于:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
第九方面,提供一种状态估计装置,包括:存储器和处理器,所述存储器用于存储程序指令,当所述程序指令在所述处理器中执行时,所述处理器用于:获取第二状态数据,所述第二状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联,所述第二状态数据对应的第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段;根据所述第二状态数据,确定第四状态估计数据,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
结合第九方面,在一些可能的实现方式中,所述处理器具体用于:确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
结合第九方面,在一些可能的实现方式中,所述处理器具体用于:根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一状态数据与所述第一采集状态数据相关联;根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;根据所述第一状态估计数据和所述第二状态估计数据,确定所述第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
结合第九方面,在一些可能的实现方式中,所述第二时间段的第一时间起点为初始时间点,所述处理器具体用于:根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
结合第九方面,在一些可能的实现方式中,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
结合第九方面,在一些可能的实现方式中,所述第三时间段的第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
结合第九方面,在一些可能的实现方式中,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;和/或,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
结合第九方面,在一些可能的实现方式中,所述第二状态数据为第三状态估计数据,所述处理器具体用于:根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态,所述第一状态数据与所述第一采集状态数据相关联;根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值;根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计,以确定所述第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
第十方面,提供一种计算机程序存储介质,所述计算机程序存储介质具有程序指令,当所述程序指令在计算机设备中执行时,所述计算机设备用于实现第一方面至第三方面中 任一方面或第一方面至第三方面任一种实现方式中的方法。
第十一方面,提供一种计算机程序产品,其特征在于,包括程序指令,当所述程序指令被执行时,使得第一方面至第三方面中任一方面或第一方面至第三方面任一种实现方式中的方法被执行。
第十二方面,提供一种芯片,其特征在于,所述芯片包括至少一个处理器,当程序指令在所述至少一个处理器中执行时,使得第一方面至第三方面中任一方面或第一方面至第三方面任一种实现方式中的方法被执行。
附图说明
图1是本申请实施例适用的一种车辆的功能框图。
图2是一种状态估计方法的示意性流程图。
图3是本申请实施例提供的一种状态估计方法的示意性流程图。
图4是本申请实施例提供的另一种状态估计方法的示意性流程图。
图5是本申请实施例提供的又一种状态估计方法的示意性流程图。
图6是本申请实施例提供的又一种状态估计方法的示意性流程图。
图7是本申请实施例提供的一种确定目标的第一状态数据的方法的示意性流程图。
图8是本申请实施例提供的一种正向的状态估计方法的示意性流程图。
图9是本申请实施例提供的一种反向的状态估计方法的示意性流程图。
图10是本申请实施例提供的一种第一状态数据、第一估计状态数据、第二估计状态数据、第三估计状态数据的示意图。
图11是本申请实施例提供的另一种第一状态数据、第一估计状态数据、第二估计状态数据、第三估计状态数据的示意图。
图12是本申请实施例提供的一种更新第三估计状态数据的方法的示意性流程图。
图13是本申请一个实施例提供的一种状态估计装置的示意性结构图。
图14是本申请另一个实施例提供的一种状态估计装置的示意性结构图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
图1是本申请实施例适用的一种车辆的示意性结构图。
车辆100中可以包括各种子系统,例如,传感系统120和计算平台150。
可选地,车辆100可以包括更多或更少的子系统,并且每个子系统可包括多个元件。另外,车辆100的每个子系统和元件可以通过有线或者无线互连。
传感系统120可以包括感测关于车辆100周边的环境的信息的若干个传感器。
例如,传感系统120可以包括定位系统,定位系统可以包括全球定位系统(global positioning system,GPS)、北斗系统或者其他定位系统、惯性测量单元(inertial measurement unit,MU)、激光雷达、毫米波雷达、超声波雷达、激光测距仪、摄像装置等中的一中或多种。传感系统120还可以包括被监视车辆的内部系统的传感器(例如,车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感器数据可用于检测对象及其相应特性(位置、形状、方向、速度等)。这种检测和识别是使得车辆能够安 全行驶的重要保证。
定位系统可以用于估计设置车辆100的地理位置。
IMU可以用于基于惯性加速度来感测车辆的位置和朝向变化。在一个实施例中,IMU122可以是加速度计和陀螺仪的组合。
雷达可以利用无线电信息来感测车辆的周边环境内的物体。在一些实施例中,除了感测物体以外,雷达还可用于感测物体的速度、加速度、前进方向等。雷达可以是激光雷达、毫米波雷达、超声波雷达等。
激光测距仪可以利用激光来感测车辆所位于的环境中的物体。在一些实施例中,激光测距仪可以包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。
摄像装置可以用于捕捉车辆的周边环境的多个图像。例如,摄像装置可以是静态相机或视频相机。
车辆100的部分或所有功能可以由计算平台150控制。计算平台150可以包括处理器151至15n(n为正整数)。处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如中央处理单元(central processing unit,CPU)、微处理器、图形处理器(graphics processing unit,GPU)(可以理解为一种微处理器)、或数字信号处理器(digital signal processor,DSP)等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为专用集成电路(application-specific integrated circuit,ASIC)或可编程逻辑器件(programmable logic device,PLD)实现的硬件电路,例如FPGA。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如神经网络处理单元(neural network processing unit,NPU)、张量处理单元(tensor processing unit,TPU)、深度学习处理单元(deep learning processing unit,DPU)等。此外,计算平台150还可以包括存储器,存储器用于存储指令。处理器151至15n中的部分或全部处理器可以调用存储器中的指令,执行指令,以实现相应的功能。除了指令以外,存储器还可存储数据,例如,道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在车辆自主、半自主和/或手动模式中操作期间被计算机系统150使用。
计算平台150中的处理器可以位于远离该车辆的位置,并与该车辆进行无线通信。在其它方面中,此处所描述的过程中的一些可以在布置于车辆内的处理器上执行,而其它过程则由远程处理器执行,包括执行单一操纵所需的必要步骤。
可选地,上述这些组件中的一个或多个可与车辆100分开安装或关联。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述车辆100中的组件有可能根据实际需要增添或者删除,图1不应理解为对本申请实施例的限制。
可选地,车辆100可以是在道路行进的自动驾驶汽车,可以识别其周围环境内的物体以确定对当前速度的调整。物体可以是其它车辆、交通控制设备、或者其它类型的物体。在一些示例中,可以独立地考虑每个识别出的物体,并基于各物体自身的特性,诸如其当前速度、加速度、与车辆的间距等,确定自动驾驶汽车所要调整的速度。
可选地,车辆100或者与车辆100相关联的计算设备(如图1的计算平台150)可以基于所识别的物体的特性和周围环境的状态(例如,交通、雨、道路上的冰等等)来预测所述识别的物体的行为。
上述车辆可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,本申请实施例不做特别的限定。
智能驾驶技术包括感知、决策、控制等阶段。感知模块是智能车辆的“眼睛”,感知模块接收周围环境信息,通过机器学习技术了解认知所处的环境。决策模块利用感知模块输出的信息,对交通参与者的行为进行预测,从而对自车进行行为决策。控制模块根据决策模块的输出计算车辆的横向加速度和纵向加速度,控制自车通行。
在车辆100中,感知模块可以包括传感系统120中的各个传感器。感知模块的输出结果可以包括传感系统120中的各个传感器的输出。感知模块还可以包括计算平台150中的全部或部分处理器。感知模块的输出结果可以包括对各个传感器的输出进行处理得到的数据。感知模块的输出结果可能存在误差,可以对车辆感知模块的输出结果进行优化调整,以提高准确度。
图2是一种状态估计方法的示意性流程图。
方法400包括S410至S430,可以由计算平台150执行。
在S410,获取传感系统120采集的目标车辆的第一状态信息。第一状态信息可以用于表示目标车辆的位置、速度、加速度等。
目标车辆可以是车辆100周围的其他车辆。
第一状态信息可以理解为采集数据,可以是对传感系统120中雷达123、激光测距仪124、相机125等传感器采集的数据进行融合得到的。
对各个传感器采集的数据进行的融合,也可以理解为信息融合、数据融合、传感器信息融合或多传感器信息融合,用于对从单个和多个信息源获取的数据和信息进行关联、相关和综合。
在S420,根据第一状态信息,进行在线状态估计,以确定第二状态信息。
通过在线状态估计,可以对第一状态信息进行修正,确定目标车辆修正后的第二状态信息。
在S420之后,可以根据第二状态信息,对目标车辆的行为进行预测,从而对自车即车辆100进行行为决策。根据对自车的行为决策,可以计算车辆的横向加速度和纵向加速度,控制自车行驶。
通过方法400得到的目标车辆修正后的第二状态信息的准确度依然较低。
可以对第二状态信息进行验证。
在车辆中,为了降低成本,可以设置性价比较高的传感器,这类传感器的精度和准确度较低。为了对第二状态信息进行验证,可以在测试时在车辆中增设高精度传感器。
利用车辆中增设的高精度传感器测量得到的数据对目标车辆进行状态估计,以得到验证状态信息。可以根据验证状态信息,确定第二状态信息是否准确。
通过增设高精度的传感器,可以对第二状态信息进行验证。但是,利用高精度传感器测量得到的数据对目标车辆进行状态估计的方式,会对验证状态信息的准确度产生影响。
为了提高验证状态信息的准确度,本申请实施例提供一种目标的状态估计方法和装置。
图3是本申请实施例提供的一种状态估计方法的示意性流程图。方法500包括步骤S510至S540。
在S510,获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联。
第一采集状态数据包括第一传感器在第一时间段采集的目标的数据。第一状态数据可以是根据第一采集状态数据确定的。或者,第一采集状态数据可以是第一传感器采集的原始数据。
在一些实施例中,在S510可以接收第一传感器采集的第一采集状态数据,并根据第一采集状态数据确定第一状态数据。
例如,第一传感器可以包括摄像头。摄像头采集的数据可以是图像。用于执行方法500的装置可以接收摄像头在第一时间段内采集的图像,对该第一时间段内的图像进行处理,得到图像中记录的目标的位置,并确定目标的速度、加速度、加加速度等。第一采集状态数据可以包括摄像头在第一时间段内采集的图像,第一状态数据可以包括对摄像头在第一时间段内采集的图像进行处理得到的目标的状态信息。
目标的状态信息包括目标在第一时间段内的位置、速度、加速度、加加速度等中的一个或多个。第一状态数据可以包括根据摄像头在第一时间段内采集的图像确定的目标的状态信息。
在另一些实施例中,在S510可以接收第一状态数据。
第一传感器可以包括激光雷达、毫米波雷达、超声波雷达等雷达中的一个或多个。雷达采集的数据可以是点云数据。雷达可以对第一时间段内采集的点云数据进行处理,以得到目标的状态信息。用于执行方法500的装置可以接收雷达处理得到的状态信息。第一状态数据可以包括根据第一时间段内的点云数据确定的目标的状态信息。
应当理解,用于执行方法500的装置也可以接收雷达第一时间段内采集的点云数据,并对该第一时间段内的点云数据进行处理,以得到目标的状态信息。
第一传感器可以包括摄像头。处理器可以对摄像头在第一时间段内采集的图像进行处理,得到目标的状态信息。在S510可以接收该处理器处理得到的目标的状态信息。
第一传感器可以包含一个或多个传感器。
在第一传感器包括一个传感器的情况下,该传感器对应的目标的状态信息可以作为第一状态数据,用于表示目标在第一时间段内的第一状态。
在第一传感器包括多个传感器的情况下,多个传感器中部分类型的传感器输出的数据可以是原始数据(例如图像或点云数据),部分传感器输出的数据可以是对原始数据处理之后得到的目标的状态信息。
在一些实施例中,第一状态数据可以包括各个传感器对应的目标的状态信息,或者第一状态数据可以包括各个传感器的输出数据,即第一状态数据可以包括第一采集状态数据。
在另一些实施例中,在S510可以对各个传感器对应的目标的状态信息进行融合,以得到目标在第一时间段内的第一状态数据。第一状态数据可以用于表示目标在第一时间段内的第一状态。
例如,可以对每个时间点下各个传感器对应的状态信息中目标的状态进行加权平均运算。第一状态数据包括该多个时间点下的加权平均运算结果。
又例如,各个传感器对应的目标的状态信息中的时间点可能不完全相同。对于每个传感器对应的目标的状态信息,可以利用插值算法确定第一时间段中多个统一时间点下目标的状态,以确定该传感器调整后的状态信息。该多个统一时间点之间可以具有相同或不同的时间间隔。之后,可以对每个统一时间点下各个传感器调整后的状态信息中的状态进行加权平均运算。第一状态数据包括该多个时间点下的加权平均运算结果。
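作为对上述融合方式的补充说明,下面给出一个示意性的Python代码草图:先将各传感器的状态信息插值到统一时间点,再按权重做加权平均。其中函数名、线性插值方式以及权重之和为1等设定均为便于说明而做的假设,并非本申请限定的实现。

```python
import numpy as np

def fuse_sensor_states(sensor_tracks, weights, unified_times):
    """对多个传感器对应的目标状态信息进行时间对齐与加权平均(示意)。

    sensor_tracks:  列表,每个元素为 (times, states),states 形状为 (len(times), D),
                    D 为状态维度(例如位置、速度、加速度)。
    weights:        与各传感器一一对应的权重,假设其和为 1。
    unified_times:  第一时间段内的多个统一时间点。
    """
    dim = sensor_tracks[0][1].shape[1]
    fused = np.zeros((len(unified_times), dim))
    for (times, states), w in zip(sensor_tracks, weights):
        # 对每个状态分量按统一时间点做线性插值,得到该传感器调整后的状态信息
        aligned = np.stack([np.interp(unified_times, times, states[:, d])
                            for d in range(dim)], axis=1)
        fused += w * aligned  # 每个统一时间点下的加权平均
    return fused  # 可以作为第一状态数据
```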
在S520,根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据。
在S530,根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合。
时间的先后顺序可以理解为正向,与时间先后顺序相反的顺序可以理解为反向。第一时间顺序可以是正向,也可以是反向。
如果第一时间顺序为正向,则第二时间顺序为反向;反之,如果第一时间顺序为反向,则第二时间顺序为正向。
在S540,根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
每次进行状态估计都会产生误差,且误差与进行状态估计的时间顺序相关。通过S510至S540,根据目标的第一状态数据,进行在时间顺序上相反的两次状态估计,并根据该两次状态估计的结果确定用于估计目标的状态的第三状态估计数据,该两次状态估计的结果中的误差中的部分或全部能够相互抵消,从而提高第三状态估计数据的准确度。
对目标进行状态估计,目标的初始状态的准确度对状态估计的结果的准确度产生影响。
在S520,可以根据所述第一状态数据,将第一时间起点作为第一个时间点进行状态估计,之后,对沿第一时间顺序位于第一个时间点之后的各个时间点进行状态估计,以得到第一估计状态数据。第一估计状态数据对应的时间段即为第二时间段。如果第一时间顺序为正向,则如图10所示,第二时间段的第一时间起点可以是第一估计状态数据最左端的数据对应的时间点。
为了提高第一估计状态数据的准确度,在S520之前,可以根据所述第一状态数据,确定初始状态数据。初始状态数据用于表示目标在第一时间段内的初始估计状态。之后,可以根据第一状态数据和初始状态数据,确定初始时间点,在初始时间点第一状态和初始估计状态之间的差异小于预设值。第二时间段的第一时间起点可以是初始时间点。
第一状态数据与目标在第一时间段内的第一状态相关联。用于执行方法500的装置可以根据第一状态数据确定目标在第一时间段内的第一状态。
在第一状态和初始估计状态差异较大的情况下,第一状态和初始估计状态中的至少一个与目标的真实状态存在较大差异。将第一状态和初始估计状态之间的差异小于预设值的时间点作为初始时间点,可以认为在初始时间点目标的第一状态和初始估计状态均是收敛的,与目标的真实状态的差异较小。因此,将初始时间点作为第二时间段的第一时间起点进行状态估计,可以使得第一估计状态数据准确度更高。
应当理解,第一状态数据可以包括一项或多项数据。具体地,可以参见图4的说明。
在S530,可以根据所述第一状态数据,将第二时间起点作为第一个进行状态估计的时间点,之后,对沿第二时间顺序位于第二时间起点之后的各个时间点进行状态估计,以得到第二估计状态数据。第二估计状态数据对应的时间段即为第三时间段。如果第一时间顺序为正向,第二时间顺序为反向,则如图10所示,第三时间段的第二时间起点可以是第二估计状态数据最右端的数据对应的时间点。
为了提高第二估计状态数据的准确度,可以在S520之后进行S530,并且将第一估计状态数据中的第一数据作为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据,进行按照第二时间顺序进行状态估计。
按照第一时间顺序进行状态估计得到的第一估计状态数据,可以包括目标在多个时间点的数据。第一估计状态数据中每个时间点的数据是该时间点的状态估计结果,用于表示目标在该时间点的状态。可以将第一估计状态数据中一个时间点的数据作为第一数据。
进行状态估计的过程中,后一个时间点的数据是根据前一个时间点的数据确定的。沿第二时间顺序的第一个时间点可以理解为开始按照第二时间顺序进行状态估计的时间点。按照第二时间顺序在该时间点之前不存在其他时间点,该时间点的状态估计结果可以理解为按照第二时间顺序进行状态估计的初始数据。
在S520按照第一时间顺序进行一段时间的状态估计之后,第一估计状态数据趋于收敛,即进行一段时间的状态估计之后得到的估计结果准确度更高。与目标在第一时间段内的第一状态相比,在第一估计状态数据中选取的第一数据表示的状态更为准确。因此,与第一状态相比,将第一估计状态数据中的第一数据作为按照第二时间顺序对第三时间段的第二时间起点进行状态估计的输入数据,使得第二估计状态数据的准确度更高。
第一估计状态数据对应的第二时间段与第二状态估计数据对应的第三时间段重合的时间段的长度对第三状态估计数据的准确度产生影响。
可以将第三时间段的第二时间起点设置为第一状态数据对应的第一时间段沿第一时间顺序的最后一个时间点。尽可能延长第二时间段与第三时间段重合时间长度,可以使得第三状态估计数据的准确度更高。
方法500可以基于车辆100中传感系统120采集的数据进行。在车辆行驶过程中,传感系统120可以包括第一传感器,用于获取第一采集状态数据。计算平台150用于根据第一采集状态数据对目标进行状态估计,并根据目标的状态估计结果对车辆行驶进行规划和控制。计算平台150或其他处理系统可以用于执行方法500,对车辆行驶过程中确定的目标的状态估计结果的准确度进行评估。
方法500也可以基于车辆100之外的第一传感器采集的数据进行。第一传感器可以设置在车辆100上,例如设置在车辆100的顶部。与传感系统120中的传感器相比,第一传感器可以具有更高的精度。用于执行方法500的装置可以是计算平台150,也可以是其他处理器,例如,可以是服务器或其他设备中的处理器。
在车辆行驶过程中,传感系统120用于采集数据;计算平台150用于根据传感系统120采集的数据对目标进行状态估计,并根据目标的状态估计结果对车辆行驶进行规划和控制。在车辆行驶过程中,第一传感器也可以采集数据,即第一传感器在车辆行驶过程中进行第一采集状态数据的获取。
在车辆100顶部上设置第一传感器的情况下,由于周围物体的遮挡,第一传感器能够感知目标的时间与传感系统120能够感知目标的时间不完全一致。为了能够准确对车辆行驶过程计算平台150确定的目标的状态估计结果进行评估,第三状态估计数据可以超出第一时间段。
第一状态估计数据包括第二数据,和/或,第二状态估计数据包括第三数据。其中,第二数据对应第一预设时长内对目标的状态估计结果。第一预设时长为沿第一时间顺序在第一时间段之后的一段时长。第三数据对应第二预设时长内对目标的状态估计结果,第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
也就是说,在S520,按照第一时间顺序的状态估计,在进行到第一时间段中沿第一时间顺序的最后一个时间点之后,可以再进行第一预设时长的状态估计,以得到对应于第一预设时长的第二数据。第一估计状态数据包括第二数据。
类似地,在S530,按照第二时间顺序的状态估计,在进行到第一时间段中沿第二时间顺序的最后一个时间点之后,可以再进行第二预设时长的状态估计,以得到对应于第二预设时长的第三数据。第二估计状态数据包括第三数据。
第三状态估计数据是根据第一估计状态数据和第二估计状态数据确定的。例如,第三状态估计数据可以包括第二数据和/或第三数据。从而,第三状态估计数据可以超出第一时间段。
在采用方法500对目标进行数据采集的过程中,由于其他物体的遮挡,第一传感器可能在第一时间段中的一段时间间隔内无法感知目标。也就是说,第一状态数据对应的第一时间段、第三状态估计数据对应的时间段可能不是连续的。
用于执行方法500的装置,可以根据对应于第四时间段的第二状态数据,确定第四状态估计数据。第二状态数据为所述第一状态数据或者所述第三状态估计数据。第四时间段包括第一子时间段和第二子时间段。第一子时间段与所述第二子时间段之间存在时间间隔,时间间隔不属于第四时间段。第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
根据目标在第四时间段的第二状态数据,确定目标在时间间隔的第二状态,从而可以确定目标在第四时间段以及时间间隔构成的连续时间区域内的状态,得到的目标的状态估计结果更完整。
应当理解,时间间隔不属于第四时间段,可以是该时间间隔中的全部或部分不属于第四时间段。
在一些实施例中,第二状态数据可以是第一状态数据。沿第一时间顺序,第一子时间段可以位于第二子时间段之前。通过S520得到第一估计状态数据,第一估计状态数据可以包括第一子时间段内目标的状态估计结果,还可以包括沿第一时间顺序在第一子时间段之后的第一预设时长内目标的状态估计结果。通过S530得到第二估计状态数据,第二估计状态数据包括第二子时间段内目标的状态估计结果,还可以包括沿第二时间顺序在第二子时间段之后的第二预设时长内目标的状态估计结果。从而,在时间间隔的时长小于第一预设时长与第二预设时长之和的情况下,通过S540确定的第三状态估计数据可以用于估计目标在时间间隔中的第二状态。
在另一些实施例中,执行方法500的装置可以确定至少一个补充状态数据集合。该至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据第二状态数据确定的,且每个补充状态数据集合对应一个差异参数。每个补充状态数据集合的差异参数用于表示该补充状态数据集合对应的状态与第二状态数据对应的状态之间的差异。
之后,执行方法500的装置还可以根据该至少一个补充状态数据集合中每个补充状态数据集合的损失函数,确定第四状态估计数据,其中,每个补充状态数据集合的损失函数包含该补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
第二状态数据为第一状态数据或者第三状态估计数据。
每个补充状态数据集合的损失函数包括该补充状态数据集合的差异参数,即该损失函数计算结果是根据该差异参数得到的。损失函数与差异参数正相关,也可以理解为损失函数的计算结果与差异参数正相关。
每个补充状态数据集合可以理解为一个轨迹,至少一个补充状态数据集合即为轨迹束。在轨迹束中确定使得损失函数计算结果最小的轨迹,作为第四状态估计数据,每个轨迹的损失函数与该轨迹的差异参数正相关,每个轨迹的差异参数用于表示该轨迹的状态与第二状态数据的状态之间的差异。也就是说,在计算损失函数时,考虑目标在时间间隔的状态与第四时间段中的状态之间的差异,确定的第四状态估计数据表示的目标在时间间隔的状态更加符合目标在第四时间段的状态,使得第四状态估计数据更加合理。
第一时间段可以包括第三子时间段和第四子时间段,沿第一时间顺序,第三子时间段可以位于第四子时间段之前。第三子时间段和第四子时间段之间存在第五子时间段。第一时间段可以不包括第五子时间段。
在第一估计状态数据包括沿第一时间顺序在第三子时间段之后的第一预设时长内目标的状态估计结果,第二估计状态数据包括沿第二时间顺序在第四子时间段之后的第二预设时长内目标的状态估计结果,且第二状态数据为第三状态估计数据的情况下,如果第五子时间段的时长超过第一预设时长与第二预设时长之和,则第三状态估计数据存在时间间隔。此时,可以通过在轨迹束中确定一个目标轨迹,目标轨迹对应的损失函数最小。第四状态估计数据用于表示目标轨迹。
图4是本申请实施例提供的一种状态估计方法的示意性流程图。状态估计方法700包括S710至S740。
在S710,获取目标在第一时间段内的第一状态数据,所述第一状态数据与所述第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联。
第一采集状态数据包括第一传感器在第一时间段采集的目标的数据。第一状态数据可以是根据第一采集状态数据确定的。第一采集状态数据可以是第一传感器采集的原始数据。
在S710可以接收第一传感器采集的第一采集状态数据,并根据第一采集状态数据确定第一状态数据。或者,在S710可以接收第一状态数据。具体地,可以参见图3的说明。
第一传感器可以包含一个或多个传感器。
在第一传感器包括一个传感器的情况下,该传感器对应的目标的状态信息可以作为第一状态数据,用于表示目标在第一时间段内的第一状态。
在第一传感器包括多个传感器的情况下,多个传感器中部分类型的传感器输出的数据可以是原始数据(例如图像或点云数据),部分传感器输出的数据可以是对原始数据处理之后得到的目标的状态信息。
在一些实施例中,第一状态数据可以包括各个传感器对应的目标的状态信息,或者第一状态数据可以包括各个传感器的输出数据。
在另一些实施例中,在S710可以对各个传感器对应的目标的状态信息进行融合,以得到目标在第一时间段内的第一状态数据。第一状态数据可以用于表示目标在第一时间段内的第一状态。
用于执行方法700的装置根据获取的第一状态数据,可以确定目标在第一时间段内的第一状态。
在S720,根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态。
可以根据第一状态数据,对目标进行状态估计,以确定初始状态数据。
在S730,根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
在S740,根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计。
在第一状态和初始估计状态差异较大的情况下,第一状态和初始估计状态中的至少一个与目标的真实状态存在较大差异。将第一状态和初始估计状态之间的差异小于预设值的时间点作为初始时间点,可以认为在初始时间点目标的第一状态和初始估计状态均是收敛的,与目标的真实状态的差异较小。
通过S710至S740,将使得第一状态和所述初始估计状态之间的差异小于预设值的时间点作为对目标进行状态估计的初始时间点,可以提高状态估计结果的准确度。
应当理解,第一状态数据可以包括一项或多项数据,其中,一项数据可以包括采集的第一采集状态数据即第一传感器采集的原始数据,另一项数据可以包括对第一传感器的原始数据处理得到的目标的状态信息。
如果第一状态数据仅包括第一传感器的原始数据,在S720可以根据第一传感器的原始数据进行初始状态估计,以确定初始状态数据。在S730之前可以根据第一传感器的原始数据确定目标的状态信息,在S730可以将初始状态数据与目标的状态信息进行比较,以确定初始时间点。
如果第一状态数据仅包括目标的状态信息,在S720可以根据目标的状态信息进行初始状态估计,以确定初始状态数据。在S730,可以将目标的状态信息与初始状态数据进行比较,以确定初始时间点。
如果第一状态数据包括第一传感器的原始数据和目标的状态信息,在S720可以根据第一传感器的原始数据或目标的状态信息进行初始状态估计,以确定初始状态数据。在S730,可以将第一状态数据中的目标的状态信息或原始数据与初始状态数据进行比较,以确定初始时间点。或者,在S730之前可以根据第一传感器的原始数据进行目标的状态信息的重新确定,并在S730将重新确定的目标的状态信息与初始状态数据进行比较,以确定初始时间点。
为了进一步提高状态估计结果的准确度,在S740可以根据第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,第一时间起点为第二时间段沿第一时间顺序的起点。并且,在S740还可以根据第一状态数据,按照与第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合。之后,可以根据第一状态估计数据和第二状态估计数据,确定第三状态估计数据。第三状态估计数据用于估计所述第一状态。
每次进行状态估计都会产生误差,且误差与进行状态估计的时间顺序相关。进行在时间顺序上相反的两次状态估计,该两次状态估计的结果中的误差中的部分或全部能够相互抵消,从而能够提高第三状态估计数据的准确度。
为了提高第二估计状态数据的准确度,可以将第一数据作为按照第二时间顺序对第三时间段的第二时间起点进行状态估计的输入数据。第一估计状态数据包括第一数据。
将第一估计状态数据中的第一数据作为按照所述第二时间顺序对第三时间段的第二时间起点进行状态估计的输入数据,也就是将第一数据作为按照第二时间顺序进行状态估计的起始状态。与第一状态数据相比,第一估计状态数据指示的状态更为准确。从而,将第一数据作为按照第二时间顺序进行状态估计的起始状态,可以使得第二估计状态数据的准确度更高。
可以将第三时间段的第二时间起点设置为第一状态数据对应的第一时间段沿第一时间顺序的最后一个时间点。尽可能延长第二时间段与第三时间段重合时间长度,可以使得第三状态估计数据的准确度更高。
基于车辆100之外的第一传感器采集的数据进行方法700的情况下,为了对车辆100在行驶过程中基于车辆100中的传感系统120采集的数据确定的目标估计结果进行评价,考虑到周围物体的遮挡,第三状态估计数据可以超出第一时间段,从而对车辆100基于车辆100中的传感系统120采集的数据确定的目标估计结果进行完整的评价。
第一状态估计数据包括第二数据,和/或,第二状态估计数据包括第三数据。其中,第二数据对应第一预设时长内对目标的状态估计结果。第一预设时长为沿第一时间顺序在第一时间段之后的一段时长。第三数据对应第二预设时长内对目标的状态估计结果,第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
在采用方法700对目标进行数据采集的过程中,由于其他物体的遮挡,第一传感器可能在第一时间段中的一段时间间隔内无法感知目标。也就是说,第一状态数据对应的第一时间段、第三状态估计数据对应的时间段可能不是连续的。
用于执行方法700的装置,可以根据对应于第四时间段的第二状态数据,确定第四状态估计数据。第二状态数据为所述第一状态数据或者所述第三状态估计数据。第四时间段包括第一子时间段和第二子时间段。第一子时间段与所述第二子时间段之间存在时间间隔,时间间隔不属于第四时间段。第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
根据目标在第四时间段的第二状态数据,确定目标在时间间隔的第二状态,从而可以确定目标在第四时间段以及时间间隔构成的连续时间区域内的状态,得到的目标的状态估计结果更完整。从而,能够对车辆100基于车辆100中的传感系统120采集的数据确定的目标估计结果进行完整的评价。
具体地,执行方法700的装置可以确定至少一个补充状态数据集合。该至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据第二状态数据确定的,且每个补充状态数据集合对应一个差异参数。每个补充状态数据集合的差异参数用于表示该补充状态数据集合对应的状态与第二状态数据对应的状态之间的差异。
执行方法700的装置还可以根据该至少一个补充状态数据集合中每个补充状态数据集合的损失函数,确定第四状态估计数据,其中,每个补充状态数据集合的损失函数包含该补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
在计算损失函数时,考虑目标在时间间隔的状态与第四时间段中的状态之间的差异,确定的第四状态估计数据表示的目标在时间间隔的状态更加符合目标在第四时间段的状态,使得第四状态估计数据更加合理。
图5是本申请实施例提供的一种状态估计方法的示意性流程图。
在车辆行驶过程中,传感系统120用于采集数据;计算平台150用于根据传感系统120采集的数据对目标进行状态估计,并根据目标的状态估计结果对车辆行驶进行规划和控制。
在车辆100的顶部可以设置第一传感器。与传感系统120中的传感器相比,第一传感器可以具有更高的精度。在车辆行驶过程中,第一传感器也可以采集信息。基于车辆100之外的第一传感器采集的信息对目标进行状态估计,得到的状态数据可以用于对计算平台150基于传感系统120采集的数据确定的状态估计结果进行准确度的评估。
由于周围物体的遮挡,第一传感器能够感知目标的时间与传感系统120能够感知目标的时间不完全一致。因此,基于第一传感器采集的信息得到的状态数据无法对计算平台150确定的状态估计结果进行完整的评估。
为了解决上述问题,本申请实施例提供了一种状态估计方法。状态估计方法800包括S810至S820。
在S810,获取第二状态数据,所述第二状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联,所述第二状态数据对应的第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段。
时间间隔不属于第四时间段,可以是该时间间隔中的全部或部分不属于第四时间段。
在S820,根据所述第二状态数据,确定第四状态估计数据,所述第四状态估计数据 用于估计所述目标在所述时间间隔的第二状态。
通过S810至S820,根据目标在第四时间段的第二状态数据,确定目标在时间间隔的第二状态,从而使得基于第一传感器得到的状态数据更为完整。
在目标刚刚出现在传感系统120的采集范围内时,计算平台150确定的状态估计结果与目标的实际状态可能存在较大偏差。但是由于周围物体的遮挡,第一传感器能够感知目标的时间与传感系统120能够感知目标的时间不完全一致,可能无法对目标出现时计算平台150确定的状态估计结果进行评估。
通过S810至S820,在周围物体对目标形成一段时间的遮挡的情况下,能够对目标被遮挡的这段时间中目标的状态进行估计,从而对目标再次出现时计算平台150确定的状态估计结果进行评估,使得评估结果更为准确。
在S820可以确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异。
之后,可以根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
第二状态数据为第一状态数据或者第三状态估计数据。
每个补充状态数据集合的损失函数包括该补充状态数据集合的差异参数,即该损失函数计算结果是根据该差异参数得到的。损失函数与差异参数正相关,也可以理解为损失函数的计算结果与差异参数正相关。
每个补充状态数据集合可以理解为一个轨迹,至少一个补充状态数据集合即为轨迹束。在轨迹束中确定使得损失函数计算结果最小的轨迹,作为第四状态估计数据,每个轨迹的损失函数与该轨迹的差异参数正相关,每个轨迹的差异参数用于表示该轨迹的状态与第二状态数据的状态之间的差异。也就是说,在计算损失函数时,考虑目标在时间间隔的状态与第四时间段中的状态之间的差异,确定的第四状态估计数据表示的目标在时间间隔的状态更加符合目标在第四时间段的状态,使得第四状态估计数据更加合理和准确。
第一时间段可以包括第三子时间段和第四子时间段,沿第一时间顺序,第三子时间段可以位于第四子时间段之前。第三子时间段和第四子时间段之间存在第五子时间段。第一时间段可以不包括第五子时间段。
在第一估计状态数据包括沿第一时间顺序在第三子时间段之后的第一预设时长内目标的状态估计结果,第二估计状态数据包括沿第二时间顺序在第四子时间段之后的第二预设时长内目标的状态估计结果,且第二状态数据为第三状态估计数据的情况下,如果第五子时间段的时长超过第一预设时长与第二预设时长之和,则第三状态估计数据存在时间间隔。此时,在S820可以确定轨迹束并在轨迹束中确定一个目标轨迹,目标轨迹对应的损失函数最小。第四状态估计数据用于表示目标轨迹。
第二状态数据可以是第一状态数据。
或者,第二状态数据也可以是第三状态估计数据。
具体地,在S810可以根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一状态数据与所述第一采集状态数据相关联。
并且,在S810,还可以根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合。
在第一状态估计数据和所述第二状态估计数据确定之后,可以根据第一状态估计数据和第二状态估计数据,确定第三状态估计数据。第三状态估计数据用于估计第一状态。
每次进行状态估计都会产生误差,且误差与进行状态估计的时间顺序相关。进行在时间顺序上相反的两次状态估计,该两次状态估计的结果中的误差中的部分或全部能够相互抵消,从而能够提高第三状态估计数据的准确度。
对目标进行状态估计,目标的初始状态的准确度对状态估计的结果的准确度产生影响。
为了提高第一估计状态数据的准确度,在S810可以根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态。之后,可以根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
将使得第一状态和所述初始估计状态之间的差异小于预设值的时间点作为对目标进行状态估计的初始时间点,可以提高状态估计结果的准确度。
为了提高第二估计状态数据的准确度,可以将第一估计状态数据中的第一数据作为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
将第一估计状态数据中的第一数据作为按照第二时间顺序对第三时间段的第二时间起点进行状态估计的输入数据,使得第二估计状态数据的准确度更高。
可以将第三时间段的第二时间起点设置为第一状态数据对应的第一时间段沿第一时间顺序的最后一个时间点。尽可能延长第二时间段与第三时间段重合时间长度,可以使得第三状态估计数据的准确度更高。
基于车辆100之外的第一传感器采集的数据进行方法800的情况下,为了对车辆100在行驶过程中基于车辆100中的传感系统120采集的数据确定的目标估计结果进行评价,考虑到周围物体的遮挡,第三状态估计数据可以超出第一时间段,从而对车辆100基于车辆100中的传感系统120采集的数据确定的目标估计结果进行完整的评价。
第一状态估计数据包括第二数据,和/或,第二状态估计数据包括第三数据。其中,第二数据对应第一预设时长内对目标的状态估计结果。第一预设时长为沿第一时间顺序在第一时间段之后的一段时长。第三数据对应第二预设时长内对目标的状态估计结果,第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
为了提高第四状态估计数据的准确度,第二状态数据可以是第三状态估计数据。
在S810可以根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态,所述第一状态数据与所述第一采集状态数据相关联。之后,可以根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。初始时间点 确定后,可以根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计,以确定所述第三状态估计数据。第三状态估计数据用于估计所述第一状态。
将使得第一状态和所述初始估计状态之间的差异小于预设值的时间点作为对目标进行状态估计的初始时间点,可以提高状态估计结果即第三状态估计数据的准确度。
将第三状态估计数据作为第二状态数据,可以使得第四状态估计数据的准确度更高。
图6是本申请实施例提供的一种状态估计方法的示意性流程图。状态估计方法600包括S610至S640。方法600可以由工控机执行。
工控机,也可以称为工业控制计算机,可以用于对生产过程及机电设备、工艺装备进行检测与控制。工控机可以是一种状态估计装置。
在S610,获取目标的第一状态数据。
目标的第一状态数据可以包括目标在各个采集时间点的采集状态。目标在某个采集时间点的采集状态用于表示根据传感器采集的数据确定的目标在该采集时间点的运动状态。也就是说,目标的第一状态数据可以是根据第一传感器采集的数据确定的。第一传感器可以周期性进行数据的采集,传感器采集数据的时间点即为采集时间点。第一传感器可以设置在车辆100上,例如可以设置在车辆100的顶部。目标在某个时间点的运动状态可以包括目标在该时间点的位置、速度、加速度等。
在目标为车辆的情况下,工控机可以进行S611至S617,如图7所示。
在S611,获取车辆数据库。
车辆数据库可以包括多个车辆类型以及每个车辆类型对应的特征、尺寸、置信度。每个车辆类型对应的尺寸可以包括车辆的长、宽、高等中的一个或多个。在车辆数据库中,至少一个车辆类型对应的尺寸是根据对该类型的车辆进行实际测量或获取该类型的车辆的参数确定的,该车辆类型对应的尺寸的置信度为1。
在S612,获取目标的特征。
目标的特征可以是工控机对第一传感器对车辆采集的数据进行特征提取得到的。例如,可以对摄像头采集得到的该目标的图像进行特征提取,以得到该目标的特征。
在S613,判断目标与车辆数据库中每个车辆类型的匹配度C与预设匹配度之间的大小关系。
目标与车辆数据库中某种车辆类型的匹配度C可以表示为:C=A×B,其中,A用于表示目标的特征与该车辆类型的特征之间的相似程度,B用于表示车辆数据库中该车辆类型的置信度。
在存在至少一个车辆类型对应的匹配度C大于或等于预设匹配度的情况下,进行S614。
在S614,将使得匹配度C最大的车辆类型确定为该目标对应的车辆类型。
在不存在匹配度C大于或等于预设匹配度的车辆类型的情况下,进行S615。
在S615,更新车辆数据库。
可以在车辆数据库中添加目标对应的车辆类型以及该车辆类型对应的特征、尺寸和置信度,以更新车辆数据库。
目标对应的车辆类型的特征可以根据获取的车辆特征确定。目标所属的车辆类型的尺寸可以根据对该车辆的感知结果确定。
例如,可以多次获取目标的车辆特征,根据该多次获取目标的车辆特征确定目标对应的车辆类型的特征。可以对车辆的尺寸进行多次测量,并根据该多次的测量结果确定目标所属的车辆类型的尺寸。
该车辆类型对应的置信度可以是根据多次获取目标的车辆特征之间的差异程度和/或车辆尺寸的多次测量结果之间的差异程度确定的。例如,该车辆类型对应的置信度可以与多次获取目标的车辆特征之间的差异程度负相关,该车辆类型对应的置信度可以与车辆尺寸的多次测量结果之间的差异程度成负相关。
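下面用一段示意性的Python代码说明匹配度C=A×B的计算以及S613至S615的分支判断。其中用余弦相似度作为特征相似程度A、车辆数据库的字段名称以及预设匹配度的取值均为便于说明而做的假设。

```python
import numpy as np

def feature_similarity(f1, f2):
    # 特征相似程度 A,这里以余弦相似度举例(假设特征为向量)
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-9))

def match_vehicle_type(target_feature, vehicle_db, preset_match=0.8):
    """vehicle_db: [{'type': ..., 'feature': ..., 'confidence': ...}, ...](字段名为假设)。
    返回匹配度 C = A × B 最大且不小于预设匹配度的车辆类型;否则返回 None,表示需进行S615更新车辆数据库。"""
    best_type, best_c = None, -1.0
    for entry in vehicle_db:
        a = feature_similarity(target_feature, entry['feature'])  # A:特征相似程度
        c = a * entry['confidence']                               # C = A × B,B 为置信度
        if c > best_c:
            best_type, best_c = entry['type'], c
    if best_c >= preset_match:
        return best_type      # 对应 S614
    return None               # 对应 S615
```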
在S616,根据目标对应的车辆类型,以及车辆数据库,确定目标的形心。
根据目标与自车的相对位置关系,以及目标与自车行驶方向是否相同,将目标的轮廓补齐。目标的轮廓可以理解为目标所在位置的长方体区域。
具体地,当目标位于自车的左前、右前、左后或右后方向时,以与自车最近的角的顶点为基准补齐目标的轮廓。当目标车辆位于自车的正前、正后、左侧或右侧时,以与目标车辆距离自车最近的长边中点为基准补齐目标的轮廓。
在车辆数据库中,车辆可以表示为长方体,车辆的尺寸可以表示为长方体的长度、宽度和高度。因此,车辆的形心可以理解为长方体的形心。
在S617,根据传感器采集的数据,确定目标的第一状态数据,第一状态数据用于表示目标的形心在各个采集时间点的位置、速度、加速度等运动状态。
由于目标与自车位置关系的变化,以与车辆最近的点的运动状态表示目标的运动状态,可能导致确定的目标的运动状态不准确。以目标中某个固定的点,如目标的形心的运动状态表示目标的运动状态,使得第一状态数据能够更加准确的反映目标的运动状态。
应当理解,目标中某个固定的点也可以是目标的左前方顶点、右前方顶点、左侧中点、右侧中点等。也就是说,在S616也可以根据目标对应的车辆类型以及车辆数据库,确定目标中某个固定的点,在S617确定的目标的第一状态数据可以用于表示目标中该固定的点的运动状态。
在S620,根据目标的第一状态数据进行初始状态估计,确定目标的初始状态数据。
目标可以是车辆或其他交通参与者如行人等。
数据滤波是去除噪声还原真实数据的一种数据处理技术。由于观测数据中包括系统中的噪声和干扰的影响,所以通过数据滤波可以提升用于进行初始状态估计的数据的准确性。
进行初始状态估计,可以利用滤波算法对第一状态数据进行处理。例如,可以利用线性卡尔曼(kalman)滤波算法等对第一状态数据进行处理。卡尔曼滤波算法又称为目标的动态模型,可以理解为一种自回归数据处理算法。卡尔曼滤波算法利用状态转移方程,对离散时间系统进行动态描述,描述目标运动行为。
对第一状态数据进行数据滤波后的结果可以作为初始状态数据。
或者,可以对第一状态数据进行数据滤波后的结果中目标的位置随时间的关系进行曲线拟合。根据拟合得到的曲线确定目标在各个采集时间点的速度和加速度。初始状态数据可以包括根据拟合得到的曲线确定的目标在各个采集时间点的初始估计状态。目标的初始估计状态包括目标的位置、速度、加速度等。第一状态数据的数据滤波结果中,与目标的速度、加速度随时间的关系相比,目标的位置随时间的关系较为准确。对第一状态数据的数据滤波结果中目标的位置随时间的关系进行曲线拟合,根据拟合的曲线,确定初始状态数据。初始状态数据包括根据拟合的曲线确定的目标的位置、速度、加速度等中的每一个与时间的关系。例如,初始状态数据可以包括目标在各个采集时间点中每个采集时间点的初始估计状态。目标的初始估计状态包括位置、速度、加速度等。
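下面给出对位置随时间的关系进行曲线拟合、并由拟合曲线求取各采集时间点速度与加速度的一个Python示意;采用多项式拟合及其阶数只是举例,且这里只演示单个坐标分量,均为便于说明的假设。

```python
import numpy as np

def initial_state_from_positions(times, positions, degree=3):
    """times: 各采集时间点;positions: 数据滤波后目标在单个坐标分量上的位置(示意)。
    返回形状为 (T, 3) 的初始状态数据,三列分别为拟合得到的位置、速度、加速度。"""
    coeff = np.polyfit(times, positions, degree)   # 位置-时间关系的多项式拟合
    vel_coeff = np.polyder(coeff, 1)               # 一阶导数:速度
    acc_coeff = np.polyder(coeff, 2)               # 二阶导数:加速度
    return np.stack([np.polyval(coeff, times),
                     np.polyval(vel_coeff, times),
                     np.polyval(acc_coeff, times)], axis=1)
```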
在S630,确定初始时间点,初始时间点是使得第一状态数据和初始状态数据的差异最小的时间点。
可以对第一状态数据与初始状态数据中目标在各个采集时间点的位置的差异、速度的差异、加速度的差异进行加权求和的计算。位置的差异对应的权重、速度的差异对应的权重、加速度的差异对应的权重可以相同或不同。权重可以是预设的。
第一状态数据和初始状态数据中在某一采集时间点的差异可以表示为:
$\omega_1\sum\left|X_{ab}-X_f\right|+\omega_2\sum\left|v_{ab}-v_f\right|+\omega_3\sum\left|a_{ab}-a_f\right|$
其中,$X_{ab}$、$v_{ab}$、$a_{ab}$分别为第一状态数据中的位置、速度、加速度,$X_f$、$v_f$、$a_f$分别为初始状态数据中的位置、速度、加速度,$\omega_1$、$\omega_2$、$\omega_3$分别为位置、速度、加速度对应的权重。
可以将初始时间点的第一状态数据或初始状态数据作为目标在初始时间点的初始状态。或者,也可以利用第一状态数据对初始状态数据中初始时间点的位置、速度、加速度进行修正,以得到目标在初始时间点的初始状态。
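初始时间点的选取可以用如下Python代码示意:对每个采集时间点计算上述加权差异,并取差异最小的时间点。各权重的取值仅为示例假设。

```python
import numpy as np

def select_initial_time(first_state, initial_state, w=(1.0, 1.0, 1.0)):
    """first_state / initial_state: 形状为 (T, 3) 的数组,三列分别为位置、速度、加速度。
    返回使加权差异最小的采集时间点索引,即初始时间点(示意)。"""
    diff = (w[0] * np.abs(first_state[:, 0] - initial_state[:, 0])
            + w[1] * np.abs(first_state[:, 1] - initial_state[:, 1])
            + w[2] * np.abs(first_state[:, 2] - initial_state[:, 2]))
    return int(np.argmin(diff))
```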
在S640,根据第一状态数据,对目标进行正向和反向的状态估计,以确定目标的第三估计状态数据,初始时间点为正向跟踪或反向跟踪的起始时间点。
以初始时间点为正向跟踪的起始时间点为例进行说明。具体地,工控机可以进行S641至S643。
S641,以初始时间点为起始时间点,根据第一状态数据,对目标进行正向的状态估计,以确定第一估计状态数据。
对目标进行正向的状态估计,也可以理解为按照时间先后顺序对目标的状态进行状态估计。时间先后顺序也可以称为第一时间顺序。
工控机可以利用卡尔曼滤波算法对目标进行正向状态估计,如图8所示。
在S6411,将初始时间点作为时间点t0,将初始状态作为目标在时间点t0的正向估计状态X。
在S6412,根据时间点t0的正向估计状态X,确定目标在时间点t1的正向预测状态F(ΔT)×X。
其中,ΔT为时间点t0与时间点t1之间的时长,符号“×”表示相乘。时间点t1可以是时间点t0的下一个时间点。F(ΔT)为按照时间顺序进行状态估计情况下使用的状态转移矩阵。
在S6413,根据第一状态数据中目标在时间点t1的状态和目标在时间点t1的正向预测状态,确定目标在时间点t1的正向估计状态。
例如,目标在时间点t1的正向估计状态可以是第一状态数据中目标在时间点t1的状态和目标在时间点t1的正向预测状态的加权平均值。
在S6414,将时间点t1作为时间点t0。
之后,工控机可以重复进行S6412至S6414。时间点t1之后的下一个时间点即为再次进行S6412时的时间点t1。
可以在时间点t1为第一状态数据中按照时间先后顺序的最后一个时间点时,停止对S6412至S6414的重复。或者,在时间点t1为第一状态数据中按照时间先后顺序的最后一个时间点之后,还可以对S6412至S6414进行预设次数的重复。
在时间点t1为第一状态数据中按照时间先后顺序的最后一个时间点之后,在重复进行S6412至S6414的过程中,在S6413可以将目标在时间点t1的正向预测状态作为目标在时间点t1的正向估计状态。
应当理解,在时间点t1为第一状态数据中按照时间先后顺序的最后一个时间点之后,重复进行S6412至S6414,随着重复次数的增加,所得到的目标的正向估计状态的准确度降低。在时间点t1为第一状态数据中的最后一个时间点之后,可以对S6412至S6414进行预设次数的重复,从而使得所得到的正向估计状态具有较高准确度。
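S6411至S6414的正向状态估计可以用如下Python代码草图示意。其中状态取为一维的位置、速度、加速度,状态转移矩阵F(ΔT)采用匀加速模型,预测状态与第一状态数据按固定权重加权(对应上文所述的加权平均),这些设定以及超出第一时间段的外推方式均为便于说明的假设,并非完整的卡尔曼滤波实现。

```python
import numpy as np

def F(dt):
    # 匀加速模型的状态转移矩阵,状态为 [位置, 速度, 加速度](示例假设)
    return np.array([[1.0, dt, 0.5 * dt * dt],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])

def forward_estimate(times, first_state, start_idx, x0, alpha=0.5,
                     extra_steps=0, dt_extra=0.1):
    """times/first_state: 采集时间点及对应的第一状态数据 (T, 3);start_idx: 初始时间点索引;
    x0: 目标在初始时间点的初始状态;alpha: 预测状态的权重(假设);
    extra_steps/dt_extra: 第一预设时长内的外推步数与步长(假设)。返回 {时间点: 正向估计状态}。"""
    x = np.asarray(x0, dtype=float)
    est = {float(times[start_idx]): x}
    t_prev = float(times[start_idx])
    for k in range(start_idx + 1, len(times)):
        dt = float(times[k]) - t_prev
        x_pred = F(dt) @ x                                   # S6412:正向预测状态
        x = alpha * x_pred + (1 - alpha) * first_state[k]    # S6413:加权得到正向估计状态
        est[float(times[k])] = x
        t_prev = float(times[k])                             # S6414
    for _ in range(extra_steps):                             # 第一时间段之后的第一预设时长
        t_prev += dt_extra
        x = F(dt_extra) @ x                                  # 无采集数据时以预测状态作为估计状态
        est[t_prev] = x
    return est
```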
在S642,以第一状态数据中的最后一个采集时间点为起始时间点,根据目标在各个时间点的第一状态数据,对目标进行反向的状态估计,以确定目标的第二估计状态数据。
对目标进行反向的状态估计,也可以理解为按照与时间先后顺序相反的第二时间顺序对目标的状态进行状态估计。
可以利用卡尔曼滤波算法对目标进行反向的状态估计,如图9所示。
在S6421,将第一状态数据中的倒数第i个采集时间点作为时间点t0,将目标在时间点t0的正向估计状态作为时间点t0的反向估计状态,i为预设值。
例如,可以将第一状态数据中最后一个采集时间点作为时间点t0。将第一状态数据中靠后的某个时间点作为按照与时间先后顺序相反的顺序进行状态估计的起始时间点,可以尽可能增加按照该相反顺序进行状态估计的时间点数量。
经过S6412至S6414的多次重复,在第一状态数据中最后几个时间点,第一估计状态数据中目标的正向估计状态一般是收敛的,较为准确。将目标在第一状态数据中最后几个时间点中任一个时间点的正向估计状态作为进行S642的起始状态,能够使得进行S642确定的第二估计状态数据更为准确。
在S6422,根据目标在时间点t0的反向估计状态X,确定目标在时间点t1的反向预测状态F(-ΔT)×X。
按照时间先后顺序,时间点t1为时间点t0的上一个时间点。F(-ΔT)为对目标进行反向的状态估计过程中使用的状态转移矩阵。
在S6423,根据第一状态数据中目标在时间点t1的状态和目标在时间点t1的反向预测状态,确定目标在时间点t1的反向估计状态。
在S6424,将时间点t1作为时间点t0。
之后,工控机可以重复进行S6422至S6424。
可以在时间点t0为第一状态数据中沿第二时间顺序的最后一个时间点(即沿时间先后顺序的第一个时间点)时,停止重复S6422至S6424。或者,在时间点t0为第一状态数据中沿第二时间顺序的最后一个时间点之后,还可以对S6422和S6424进行预设次数的重复。
在时间点t0为第一状态数据中沿第二时间顺序的最后一个时间点之后,在重复进行S6422至S6424的过程中,在S6423可以将目标在时间点t1的反向预测状态作为目标在时间点t1的反向估计状态。
应当理解,在时间点t0为第一状态数据中沿第二时间顺序的最后一个时间点之后,重复进行S6422至S6424,随着重复次数的增加,所得到的目标的反向估计状态准确度降低。在时间点t0为第一状态数据中的沿时间先后顺序的第一个时间点之后,可以对S6422至S6424进行预设次数的重复,从而使得所得到的反向估计状态具有较高准确度。
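反向的状态估计(S6421至S6424)可以复用同样的状态转移矩阵,并将时间步长取为负值(即使用F(−ΔT)),以下为示意代码;起始偏移、加权系数、外推步数等参数均为便于说明的假设。

```python
import numpy as np

def F(dt):
    # 与正向估计中相同的匀加速模型状态转移矩阵(示例假设)
    return np.array([[1.0, dt, 0.5 * dt * dt],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])

def backward_estimate(times, first_state, forward_est, start_offset=1,
                      alpha=0.5, extra_steps=0, dt_extra=0.1):
    """以第一状态数据中倒数第 start_offset 个采集时间点的正向估计状态作为起始反向估计状态(示意)。
    返回 {时间点: 反向估计状态}。"""
    k0 = len(times) - start_offset
    x = np.array(forward_est[float(times[k0])], dtype=float)    # S6421
    est = {float(times[k0]): x}
    t_prev = float(times[k0])
    for k in range(k0 - 1, -1, -1):
        dt = float(times[k]) - t_prev                           # dt 为负值,相当于使用 F(-ΔT)
        x_pred = F(dt) @ x                                      # S6422:反向预测状态
        x = alpha * x_pred + (1 - alpha) * first_state[k]       # S6423:反向估计状态
        est[float(times[k])] = x
        t_prev = float(times[k])                                # S6424
    for _ in range(extra_steps):                                # 第一时间段之前的第二预设时长
        t_prev -= dt_extra
        x = F(-dt_extra) @ x
        est[t_prev] = x
    return est
```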
图10示出了第一状态数据、第一估计状态数据、第二估计状态数据、第三估计状态数据之间的时间关系。在图10中,沿水平方向向右为时间增加的方向,即正向;沿水平方向向左为与时间先后顺序相反的方向,即反向。如图10所示,第一估计状态数据可以包括第一状态数据对应的时间段中初始时间点之后(即初始时间点右侧)的部分以及第一状态数据对应的时间段之后第一预设时长T1内目标的正向估计状态。在S641中时间点t1为第一状态数据中按照时间先后顺序的最后一个时间点之后对S6412至S6414进行预设次数的重复,可以得到该第一预设时长T1内目标的正向估计状态。
第二估计状态数据可以包括第一状态数据对应的时间段以及该时间段之前第二预设时长T2内目标的反向估计状态。在S642中时间点t0为第一状态数据中沿第二时间顺序的最后一个时间点之后对S6422至S6424进行预设次数的重复,可以得到该第二预设时长T2内目标的反向估计状态。
重复进行S6422至S6424对应的预设次数与重复进行S6412至S6414对应的预设次数可以相同或不同。也就是说,第二预设时长T2与第一预设时长T1可以相同或不同。
在S643,根据第一估计状态数据和第二估计状态数据,确定第三估计状态数据。
对第一状态数据中初始时间点之后的部分,可以将目标的第一估计状态数据和第二估计状态数据中各个时间点的状态进行加权平均计算,得到目标在该部分中各个时间点的估计状态。第三估计状态数据可以包括目标在多个采集时间点的估计状态。
如图10所示,第三估计状态数据还可以包括目标在第一状态数据中初始时间点之前各个时间点的状态以及第一状态数据对应的时间段之前的第二预设时长、第一状态数据对应的时间段之后的第一预设时长中的各个时间点的估计状态。在第一状态数据时间先后顺序最后一个时间点之后第一预设时长T1中各个时间点,目标的估计状态可以是目标的正向估计状态。在第一状态数据中初始时间点之前的各个时间点以及第一状态数据沿时间先后顺序第一个时间点之前第二预设时长的各个时间点,目标的估计状态可以是目标的反向估计状态。
在目标被遮挡的时长较小,小于或等于第一预设时长与第二预设时长之和的情况下,对于目标被遮挡的时间段内的某个时间点,目标的估计状态可以是该时间点的正向估计状态、反向估计状态,或者正向估计状态和反向估计状态的加权平均值。
如图11所示,目标被遮挡的时长与第一预设时长与第二预设时长之和相等的情况下,目标开始被遮挡之后第一预设时长内,目标的估计状态为对目标被遮挡前时间段T3的状态进行正向的状态估计确定的正向估计状态。目标开始被遮挡且经过第一预设时长之后,目标的估计状态为对目标被遮挡结束之后时间段T4的状态进行反向的状态估计确定的反向估计状态。
在目标被遮挡的时长小于第一预设时长与第二预设时长之和的情况下,对目标被遮挡前时间段T3进行正向的状态估计对应的第一预设时长T1与对目标被遮挡结束之后时间段T4进行反向的状态估计对应的第二预设时长T2,存在全部或部分的时间重合。对于时间重合部分的各个时间点,可以将正向估计状态和反向估计状态的加权平均值作为该时间点目标的估计状态。
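正向与反向估计结果的融合(S643)可以用如下Python代码示意:对两者重合的时间点取加权平均,其余时间点取存在估计结果的一方;权重取0.5仅为示例假设。

```python
def fuse_bidirectional(forward_est, backward_est, w_forward=0.5):
    """forward_est / backward_est: {时间点: 估计状态} 字典。返回第三估计状态数据(示意)。"""
    fused = {}
    for t in sorted(set(forward_est) | set(backward_est)):
        if t in forward_est and t in backward_est:
            # 重合部分:正向估计状态与反向估计状态的加权平均
            fused[t] = w_forward * forward_est[t] + (1.0 - w_forward) * backward_est[t]
        elif t in forward_est:
            fused[t] = forward_est[t]     # 仅有正向估计状态的时间点
        else:
            fused[t] = backward_est[t]    # 仅有反向估计状态的时间点
    return fused
```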
在目标被遮挡的时长较大,大于第一预设时长与第二预设时长之和的情况下,也就是说,在第一状态数据中在时间顺序上相邻的两个时间点之间的时间差大于第一预设时长与第二预设时长之和的情况下,可以进行S650。
在S650,利用格栅(lattice)算法,确定目标在第一状态数据中时间差大于预设值的两个相邻时间点之间的状态,以更新第三估计状态数据。
如图12所示,S650可以包括S651至S655。
在S651,确定目标在遮挡时间段开始时的开始状态和目标在遮挡时间段结束时的结束状态,开始状态和结束状态分别为第三估计状态数据中遮挡时间段开始时和结束时的估计状态。
在S652,根据车辆轨迹参考线,建立弗勒内(Frenet)坐标系,并以Frenet坐标系表示道路信息和目标的开始状态、结束状态。
车辆轨迹参考线也可以理解为车道中心线。Frenet坐标系可以理解为以车辆轨迹参考线为横轴、以与车道中心线垂直的方向为纵轴的笛卡尔直角坐标系。
在第三估计状态数据中,目标在遮挡时间段开始时和结束时的估计状态可以通过笛卡尔坐标系表示。目标的状态在笛卡尔坐标系下可以表示为(x,y,θ,v,a)^T,其中,x、y用于表示目标在笛卡尔坐标系中的坐标,θ用于表示目标的航向角(即速度的方向),v用于表示目标的速度大小,a用于表示目标的加速度。
在Frenet坐标系中,目标的状态可以表示为纵向状态$(s,\dot{s},\ddot{s})^T$和侧向状态$(l,l',l'')^T$。其中,$s$表示目标在Frenet坐标系中的纵向坐标,$\dot{s}$表示目标在Frenet坐标系中的纵向坐标对时间的微分(或者可以理解为对时间的导数),即目标的纵向速度,$\ddot{s}$表示目标在Frenet坐标系中的纵向坐标对时间的二次导数,即纵向加速度,$l$表示目标在Frenet坐标系中的横向坐标,$l'$表示目标在Frenet坐标系中的横向坐标对$s$的微分,用于表示横向速度,$l''$表示目标在Frenet坐标系中的横向坐标对$s$的二次导数,用于表示横向加速度。
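将笛卡尔坐标系下的位置投影到以车辆轨迹参考线建立的Frenet坐标系,可以用如下Python代码草图示意;参考线以离散点给出、用最近点近似投影、横向偏移左正右负等设定均为便于说明的假设。

```python
import numpy as np

def cartesian_to_frenet(x, y, ref_xy, ref_s):
    """将笛卡尔坐标 (x, y) 投影到参考线,返回纵向坐标 s 与横向坐标 l(示意)。
    ref_xy: (N, 2) 参考线(车道中心线)离散点;ref_s: (N,) 各离散点沿参考线的弧长。"""
    p = np.array([x, y], dtype=float)
    d = ref_xy - p
    i = int(np.argmin(np.hypot(d[:, 0], d[:, 1])))   # 最近的参考点,作为近似投影点
    s = float(ref_s[i])
    # 用相邻参考点近似参考线切向,横向坐标取带符号的法向偏移
    j = min(i + 1, len(ref_xy) - 1)
    tangent = ref_xy[j] - ref_xy[max(i - 1, 0)]
    tangent = tangent / (np.linalg.norm(tangent) + 1e-9)
    normal = np.array([-tangent[1], tangent[0]])
    l = float(np.dot(p - ref_xy[i], normal))
    return s, l
```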
还可以通过Frenet坐标系表示其他道路信息,例如除目标之外的其他交通参与者所在的区域等。除目标之外的其他交通参与者,可以包括自车、其他车辆、行人、障碍物等。
可以利用空间占据图在Frenet坐标系中表示除目标之外的其他交通参与者所在的区域。不同的时间点可以对应于不同的空间占据图。
空间占据图也可以称为占据栅格图(occupancy grid map,OGM)。空间占据图可以包括多个栅格,每个栅格用于表示目标周围环境中的一个位置,被占据的栅格表示该栅格对应的位置存在其他交通参与者;反之,未被占据的栅格表示该栅格对应的位置不存在其他交通参与者。
在S653,基于lattice采样生成轨迹束。
轨迹束包括多个轨迹,每个轨迹可以表示为目标的位置与时间的对应关系。
每个轨迹的起始状态为Frenet坐标表示的目标在遮挡时间段开始时的开始状态,每个轨迹的终止状态为Frenet坐标表示的目标在遮挡时间段结束时的结束状态。
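轨迹束的生成可以用五次多项式连接起止状态、并对中间时刻的状态进行采样来示意。下面的Python代码草图中,中间时刻取遮挡时长的一半、横向量按时间参数化、横向偏移的采样取值等都是为便于说明做的假设,并非lattice算法的完整实现。

```python
import numpy as np

def quintic_coeff(x0, v0, a0, x1, v1, a1, T):
    """求满足起止位置/速度/加速度约束的五次多项式系数(最高次在前,示意)。"""
    A = np.array([[0, 0, 0, 0, 0, 1],
                  [0, 0, 0, 0, 1, 0],
                  [0, 0, 0, 2, 0, 0],
                  [T**5, T**4, T**3, T**2, T, 1],
                  [5*T**4, 4*T**3, 3*T**2, 2*T, 1, 0],
                  [20*T**3, 12*T**2, 6*T, 2, 0, 0]], dtype=float)
    b = np.array([x0, v0, a0, x1, v1, a1], dtype=float)
    return np.linalg.solve(A, b)

def sample_trajectory_bundle(start, end, T, lat_offsets=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """start/end: 遮挡开始/结束时的Frenet状态 (s, s速度, s加速度, l, l速度, l加速度),均由第二状态数据确定。
    通过对中间时刻 T/2 处的横向偏移进行采样,生成起止状态相同、中间形态不同的轨迹束(采样方式为假设)。"""
    bundle = []
    for off in lat_offsets:
        mid = (0.5 * (start[0] + end[0]), 0.5 * (start[1] + end[1]), 0.0,
               0.5 * (start[3] + end[3]) + off, 0.0, 0.0)
        seg1 = (quintic_coeff(start[0], start[1], start[2], mid[0], mid[1], mid[2], T / 2),
                quintic_coeff(start[3], start[4], start[5], mid[3], mid[4], mid[5], T / 2))
        seg2 = (quintic_coeff(mid[0], mid[1], mid[2], end[0], end[1], end[2], T / 2),
                quintic_coeff(mid[3], mid[4], mid[5], end[3], end[4], end[5], T / 2))
        bundle.append((seg1, seg2))   # 每条轨迹由前后两段纵向/横向多项式组成
    return bundle
```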
在S654,确定轨迹束中使得损失函数(cost)值最小的目标轨迹。
可以计算每个轨迹的cost值。
损失函数cost可以表示为多个子函数之和。该多个子函数包括用于表示纵向损失的子函数cost_lon、用于表示横向损失的子函数cost_lat、用于表示碰撞安全损失的子函数cost_safe、用于表示加加速度损失的子函数cost_jerk、用于表示向心加速度损失的子函数cost_cen、用于表示横向加速度损失的子函数cost_latcom、用于表示舒适性损失的子函数cost_comfort、用于表示驾驶历史状态损失的子函数cost_his等中的一个或多个。
用于表示纵向损失的子函数cost_lon可以表示为:
Figure PCTCN2021143595-appb-000007
其中,a、b为预设系数,cost_speed用于表示轨迹的速度损失,与轨迹的平均速度负相关,cost_dist用于表示轨迹的横向距离损失,与轨迹的横向总距离正相关,符号"+"表示相加,分式表示分子除以分母,分式的计算结果可以表示为整数、分数或小数的形式,例如分式的计算结果可以表示为保留1位或两位小数的形式。
用于表示横向损失的子函数cost_lat可以表示为:
Figure PCTCN2021143595-appb-000008
其中,T为轨迹中时间点的数量,s_latoffset(t)用于表示轨迹在时刻t与道路中心线的偏移量,$\sum_{t=0}^{T}F(t)$表示对t的取值分别为0至T情况下F(t)值的累加,在子函数cost_lat中,
Figure PCTCN2021143595-appb-000010
T为轨迹中时间点的数量。
用于表示碰撞安全损失的子函数cost_safe可以表示为:
Figure PCTCN2021143595-appb-000011
其中,N用于表示对象的数量,cost_c(i,t)用于表示轨迹在时间点t与第i个对象碰撞的损失。
在空间占据图中,连续的被占据的栅格可以理解为一个对象。
损失cost_c(i,t)可以为0或预设值。目标位于轨迹中时间点t对应的位置,如果目标的轮廓与第i个对象所在的区域相交,则目标沿该轨迹行驶在时间点t与第i个对象发生碰撞,cost_c(i,t)为预设值。如果目标的轮廓与第i个对象所在的区域不相交,则目标沿该轨迹行驶在时间点t与第i个对象不会发生碰撞,cost_c(i,t)为0。可以将预设值设置为远远大于其他子函数的值,从而,使目标与任一个对象发生碰撞的轨迹的损失函数cost的值远远大于使目标未与任何对象发生碰撞的轨迹的损失函数cost的值。
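cost_c(i,t)的取值可以用如下Python代码示意:若目标轮廓与某个对象所在区域相交则返回预设的大损失值,否则返回0。这里将轮廓和对象区域简化为轴对齐矩形、预设值取1e6,均为便于说明的假设。

```python
def collision_cost(target_box, object_boxes, penalty=1e6):
    """target_box: 目标在时间点t的轮廓 (x_min, y_min, x_max, y_max)(简化为轴对齐矩形,示意);
    object_boxes: 时间点t对应的空间占据图中各对象所在的矩形区域列表。"""
    tx0, ty0, tx1, ty1 = target_box
    for (ox0, oy0, ox1, oy1) in object_boxes:
        if tx0 <= ox1 and ox0 <= tx1 and ty0 <= oy1 and oy0 <= ty1:  # 矩形相交,即发生碰撞
            return penalty   # 对应 cost_c(i,t) 取预设值
    return 0.0               # 不相交,cost_c(i,t) 为 0
```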
用于表示加加速度损失的子函数cost_jerk可以表示为:
Figure PCTCN2021143595-appb-000012
其中,d_jerk(t)用于表示按照轨迹行驶在时刻t时的加加速度,d_jerk_upper是预设加加速度,可以表示加加速度的最大值。加加速度是指加速度对时间的导数。
用于表示向心加速度损失的子函数cost_cen可以表示为:
Figure PCTCN2021143595-appb-000013
其中,k为预设系数,v用于表示按照轨迹行驶在时刻t时的速度。
用于表示横向加速度损失的子函数cost_latcom可以表示为:
Figure PCTCN2021143595-appb-000014
其中,
Figure PCTCN2021143595-appb-000015
用于表示在时刻0至时刻T之间的最大值。
用于表示舒适性损失的子函数cost_comfort可以表示为:
cost_comfort = cost_jerk + cost_cen + cost_latcom
用于表示驾驶历史状态损失的子函数cost_his可以表示为:
Figure PCTCN2021143595-appb-000016
其中,state(t)用于表示按照轨迹行驶在时刻t时的状态特征,可以是根据按照轨迹行驶的目标在时刻t的速度、加速度、加加速度等中的至少一个确定的;state_0用于表示根据目标的历史状态确定的状态特征。state_0可以是根据目标在各个采集时间点的估计状态确定的。
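各子函数的具体表达式如上文公式所示,这里仅用一段Python代码示意S654中对轨迹束内每条轨迹求各子损失之和并取总损失最小者的选择过程;子损失函数以可调用对象的形式传入,代码本身不限定其表达式,属于示意性实现。

```python
def select_best_trajectory(bundle, cost_terms):
    """bundle: 轨迹束;cost_terms: 子损失函数列表,每个元素接受一条轨迹并返回数值
    (例如 cost_lon、cost_lat、cost_safe、cost_comfort、cost_his 等,示意)。
    返回使总损失 cost 最小的目标轨迹及其损失值。"""
    best_traj, best_cost = None, float('inf')
    for traj in bundle:
        total = sum(term(traj) for term in cost_terms)   # cost 为各子函数之和
        if total < best_cost:
            best_traj, best_cost = traj, total
    return best_traj, best_cost
```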
在S655,根据目标轨迹,更新第三估计状态数据。
可以根据S640确定的第三估计状态数据和目标轨迹,确定更新后的第三估计状态数据。在更新后的第三估计状态数据中,在目标被遮挡的时间段中目标的估计状态为按照目标轨迹行驶情况下的目标的状态。
通过S650,可以在两个采集时间点之间的时间间隔大于预设值的情况下,确定目标在该时间间隔中的目标轨迹。目标轨迹用于指示目标在该时间间隔中各个补充时间点的估计状态。从而,根据目标轨迹,可以对第三估计状态数据进行更新,更新后的第三估计状态数据包括目标轨迹指示的目标在各个补充时间点的估计状态。
在S654利用各个轨迹的cost值确定目标轨迹时,cost函数可以表示为多个子cost之和,其中表示驾驶历史状态损失的子函数cost_his用于表示轨迹的状态特征与目标的历史状态特征之间的差异。通过增加子函数cost_his,使得确定的目标轨迹更加符合目标的行驶习惯,提高目标在各个补充时间点的估计状态的准确性。
上文结合图1至图12的描述了本申请实施例的方法实施例,下面结合图13至图14,描述本申请实施例的装置实施例。应理解,方法实施例的描述与装置实施例的描述相互对应,因此,未详细描述的部分可以参见前面方法实施例。
图13是本申请实施例提供的一种状态估计装置的示意性结构图。
状态估计装置2000包括获取模块2010和处理模块2020。
在一些实施例中,获取模块2010用于,获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联。
处理模块2020用于,根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据。
处理模块2020还用于,根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合。
处理模块2020还用于,根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
可选地,所述第二时间段的第一时间起点为初始时间点。
处理模块2020还用于,根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;
处理模块2020还用于,根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
可选地,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
可选地,所述第三时间段的第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
可选地,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;或者,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
可选地,处理模块2020还用于,根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
可选地,处理模块2020具体用于,确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异。
处理模块2020还用于,根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
在另一些实施例中,获取模块2010用于,获取目标在第一时间段内的第一状态数据,所述第一状态数据与所述第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联。
处理模块2020用于,根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态。
处理模块2020还用于,根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
处理模块2020还用于,根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计。
可选地,处理模块2020具体用于,根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一时间起点为所述第二时间段沿所述第一时间顺序的起点。
处理模块2020还用于,根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合。
处理模块2020还用于,根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
可选地,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
可选地,所述第三时间段的第二时间起点为所述第一时间段沿第一时间顺序的最后一个时间点。
可选地,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;或者,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
可选地,处理模块2020还用于,根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
可选地,处理模块2020具体用于,确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异。
处理模块2020还用于,根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
在又一种实施例中,获取模块2010用于,获取第二状态数据,所述第二状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联,所述第二状态数据对应的第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段。
处理模块2020用于,根据所述第二状态数据,确定第四状态估计数据,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
可选地,处理模块2020具体用于,确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异。
处理模块2020还用于,根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
可选地,处理模块2020具体用于,根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一状态数据与所述第一采集状态数据相关联。
处理模块2020还用于,根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合。
处理模块2020还用于,根据所述第一状态估计数据和所述第二状态估计数据,确定所述第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
可选地,所述第二时间段的第一时间起点为初始时间点。
处理模块2020还用于,根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态。
处理模块2020还用于,根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
可选地,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
可选地,所述第三时间段的第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
可选地,所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;或者,所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
可选地,所述第二状态数据为第三状态估计数据。
处理模块2020还用于,根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态,所述第一状态数据与所述第一采集状态数据相关联。
处理模块2020还用于,根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
处理模块2020还用于,根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计,以确定所述第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
图14是本申请一个实施例提供的状态估计装置3000的示意性结构图。状态估计装置3000可包括:至少一个处理器3010和存储器3020,所述存储器3020可用于存储程序指令,当所述程序指令在所述至少一个处理器3010中执行时,使得所述状态估计装置3000实现前文中的状态估计装置执行的各个步骤或方法或操作或功能。
在本申请实施例中,处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如CPU、微处理器、GPU(可以理解为一种微处理器)、或DSP等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为ASIC或PLD实现的硬件电路,例如FPGA。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如NPU、TPU、DPU等。
可见,以上装置中的各单元可以是被配置成实施以上方法的一个或多个处理器(或处理电路),例如:CPU、GPU、NPU、TPU、DPU、微处理器、DSP、ASIC、FPGA,或这些处理器形式中至少两种的组合。
此外,以上装置中的各单元可以全部或部分可以集成在一起,或者可以独立实现。在一种实现中,这些单元集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。该SOC中可以包括至少一个处理器,用于实现以上任一种方法或实现该装置各单元的功能,该至少一个处理器的种类可以不同,例如包括CPU和FPGA,CPU和人工智能处理器,CPU和GPU等。
本申请实施例还提供一种工控机,其包括前述的状态估计装置。
本申请实施例还提供一种计算机程序存储介质,其特征在于,所述计算机程序存储介质具有程序指令,当所述程序指令被执行时,使得前文中的方法被执行。
本申请实施例还提供一种芯片,其特征在于,所述芯片包括至少一个处理器,当程序指令在所述至少一个处理器中执行时,使得前文中的方法被执行。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
应理解以上装置中各单元的划分仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。此外,装置中的单元可以以处理器调用软件的形式实现;例如装置包括处理器,处理器与存储器连接,存储器中存储有指令,处理 器调用存储器中存储的指令,以实现以上任一种方法或实现该装置各单元的功能,其中处理器例如为通用处理器,例如CPU或微处理器,存储器为装置内的存储器或装置外的存储器。或者,装置中的单元可以以硬件电路的形式实现,可以通过对硬件电路的设计实现部分或全部单元的功能,该硬件电路可以理解为一个或多个处理器;例如,在一种实现中,该硬件电路为ASIC,通过对电路内元件逻辑关系的设计,实现以上部分或全部单元的功能;再如,在另一种实现中,该硬件电路为可以通过可编程逻辑器件PLD实现,以现场可编程门阵列(Field Programmable Gate Array,FPGA)为例,其可以包括大量逻辑门电路,通过配置文件来配置逻辑门电路之间的连接关系,从而实现以上部分或全部单元的功能。以上装置的所有单元可以全部通过处理器调用软件的形式实现,或全部通过硬件电路的形式实现,或部分通过处理器调用软件的形式实现,剩余部分通过硬件电路的形式实现。
本申请实施例中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示单独存在A、同时存在A和B、单独存在B的情况。其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项”及其类似表达,是指的这些项中的任意组合,包括单项或复数项的任意组合。例如,a,b和c中的至少一项可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
本申请实施例中采用诸如“第一”、“第二”的前缀词,仅仅为了区分不同的描述对象,对被描述对象的位置、顺序、优先级、数量或内容等没有限定作用。例如,被描述对象为“状态数据”,则“第一状态数据”和“第二状态数据”中“状态数据”之前的序数词并不限制“状态数据”之间的位置或顺序或优先级;再如,被描述对象为“时间段”,则“第一时间段”和“第二时间段”中“时间段”之前的序数词并不限制“时间段”之间的位置或顺序或优先级。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (30)

  1. 一种状态估计方法,其特征在于,包括:
    获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;
    根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据;
    根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;
    根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
  2. 根据权利要求1所述的方法,其特征在于,所述第二时间段的第一时间起点为初始时间点,所述方法还包括:
    根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;
    根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
  3. 根据权利要求1或2所述的方法,其特征在于,
    所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
  4. 根据权利要求3所述的方法,其特征在于,所述第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,
    所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;
    或者,
    所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述方法还包括:
    根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
  7. 根据权利要求6所述的方法,其特征在于,所述根据对应于第四时间段的第二状态数据,确定所述第四状态估计数据,包括:
    确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;
    根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
  8. 一种状态估计方法,其特征在于,包括:
    获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;
    根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;
    根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值;
    根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计。
  9. 根据权利要求8所述的方法,其特征在于,所述根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计,包括:
    根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一时间起点为所述第二时间段沿所述第一时间顺序的起点;
    根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;
    根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
  10. 根据权利要求9所述的方法,其特征在于,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
  11. 根据权利要求10所述的方法,其特征在于,所述第二时间起点为所述第一时间段沿第一时间顺序的最后一个时间点。
  12. 根据权利要求9-11中任一项所述的方法,其特征在于,
    所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;或者,
    所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
  13. 根据权利要求8-12中任一项所述的方法,其特征在于,所述方法还包括:
    根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
  14. 根据权利要求13所述的方法,其特征在于,所述根据对应于第四时间段的第二状态数据,确定所述第四状态估计数据,包括:
    确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;
    根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
  15. 一种状态估计装置,其特征在于,包括:获取模块和处理模块;
    所述获取模块用于,获取目标在第一时间段内的第一状态数据,所述第一状态数据与第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;
    所述处理模块用于,根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据;
    所述处理模块还用于,根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;
    所述处理模块还用于,根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
  16. 根据权利要求15所述的装置,其特征在于,所述第二时间段的第一时间起点为初始时间点,
    所述处理模块还用于,根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;
    所述处理模块还用于,根据所述第一状态数据和所述初始状态数据,确定所述初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值。
  17. 根据权利要求15或16所述的装置,其特征在于,
    所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
  18. 根据权利要求17所述的装置,其特征在于,所述第三时间段的第二时间起点为所述第一时间段沿所述第一时间顺序的最后一个时间点。
  19. 根据权利要求15-18中任一项所述的装置,其特征在于,
    所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;或者,
    所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
  20. 根据权利要求15-19中任一项所述的装置,其特征在于,
    所述处理模块还用于,根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
  21. 根据权利要求20所述的装置,其特征在于,所述处理模块具体用于:
    确定至少一个补充状态数据集合,所述至少一个补充状态数据集合中每个补充状态数据集合包括的起始数据和终止数据是根据所述第二状态数据确定的,且所述每个补充状态数据集合对应一个差异参数,所述每个补充状态数据集合的差异参数用于表示所述每个补充状态数据集合对应的状态与所述第二状态数据对应的状态之间的差异;
    根据所述至少一个补充状态数据集合中每个所述补充状态数据集合的损失函数,确定所述第四状态估计数据,其中,所述每个补充状态数据集合的损失函数包含所述每个补充状态数据集合的差异参数,且与所述每个补充状态数据集合的差异参数正相关。
  22. 一种状态估计装置,其特征在于,包括:获取模块和处理模块;
    所述获取模块用于,获取目标在第一时间段内的第一状态数据,所述第一状态数据与所述第一采集状态数据相关联,所述第一采集状态数据包括来自第一传感器在所述第一时间段内采集的所述目标的数据,所述目标的数据至少与所述目标在所述第一时间段内的第一状态相关联;
    所述处理模块用于,根据所述第一状态数据,确定初始状态数据,所述初始状态数据用于表示所述目标在所述第一时间段内的初始估计状态;
    所述处理模块还用于,根据所述第一状态数据和所述初始状态数据,确定初始时间点,在所述初始时间点所述第一状态和所述初始估计状态之间的差异小于预设值;
    所述处理模块还用于,根据所述第一状态数据,以所述初始时间点为第一时间起点,进行所述目标的状态估计。
  23. 根据权利要求22所述的装置,其特征在于,所述处理模块具体用于:
    根据所述第一状态数据,按照第一时间顺序进行状态估计,得到对应于第二时间段的第一估计状态数据,所述第一时间起点为所述第二时间段沿所述第一时间顺序的起点;
    根据所述第一状态数据,按照与所述第一时间顺序相反的第二时间顺序进行状态估计,得到对应于第三时间段的第二状态估计数据,所述第二时间段与所述第三时间段存在重合;
    根据所述第一状态估计数据和所述第二状态估计数据,确定第三状态估计数据,所述第三状态估计数据用于估计所述第一状态。
  24. 根据权利要求23所述的装置,其特征在于,所述第一估计状态数据包括第一数据,所述第一数据为按照所述第二时间顺序对所述第三时间段的第二时间起点进行状态估计的输入数据。
  25. 根据权利要求23或24所述的装置,其特征在于,
    所述第一状态估计数据包括第二数据,所述第二数据对应第一预设时长内对所述目标的状态估计结果,所述第一预设时长为沿所述第一时间顺序在所述第一时间段之后的一段时长;或者,
    所述第二状态估计数据包括第三数据,所述第三数据对应第二预设时长内对所述目标的状态估计结果,所述第二预设时长为沿所述第二时间顺序在所述第一时间段之后的一段时长。
  26. 根据权利要求22-25中任一项所述的装置,其特征在于,
    所述处理模块还用于,根据对应于第四时间段的第二状态数据,确定第四状态估计数据,所述第二状态数据为所述第一状态数据或者所述第三状态估计数据,所述第四时间段包括第一子时间段和第二子时间段,所述第一子时间段与所述第二子时间段之间存在时间间隔,所述时间间隔不属于所述第四时间段,所述第四状态估计数据用于估计所述目标在所述时间间隔的第二状态。
  27. 一种状态估计装置,其特征在于,包括存储器和处理器,所述存储器用于存储程序,所述处理器用于执行所述程序,以执行权利要求1-7中任一项所述的方法,或执行权利要求8-14中任一项所述的方法。
  28. 一种计算机程序存储介质,其特征在于,所述计算机程序存储介质具有程序指令,当所述程序指令被执行时,使得如权利要求1-7中任一项所述的方法被执行,或使得如权利要求8-14中任一项所述的方法被执行。
  29. 一种计算机程序产品,其特征在于,包括程序指令,当所述程序指令被执行时,使得如权利要求1-7中任一项所述的方法被执行,或使得权利要求8-14中任一项所述的方法被执行。
  30. 一种芯片,其特征在于,所述芯片包括至少一个处理器,当程序指令在所述至少一个处理器中执行时,使得如权利要求1-7中任一项所述的方法被执行,或使得如权利要求8-14中任一项所述的方法被执行。
PCT/CN2021/143595 2021-12-31 2021-12-31 一种状态估计方法和装置 WO2023123325A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/143595 WO2023123325A1 (zh) 2021-12-31 2021-12-31 一种状态估计方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/143595 WO2023123325A1 (zh) 2021-12-31 2021-12-31 一种状态估计方法和装置

Publications (1)

Publication Number Publication Date
WO2023123325A1 true WO2023123325A1 (zh) 2023-07-06

Family

ID=86997209

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/143595 WO2023123325A1 (zh) 2021-12-31 2021-12-31 一种状态估计方法和装置

Country Status (1)

Country Link
WO (1) WO2023123325A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016074169A1 (zh) * 2014-11-12 2016-05-19 深圳市大疆创新科技有限公司 一种对目标物体的检测方法、检测装置以及机器人
CN110263870A (zh) * 2019-06-26 2019-09-20 深圳市悦动天下科技有限公司 运动状态识别方法、装置、智能终端和存储介质
US20200339146A1 (en) * 2017-12-19 2020-10-29 Veoneer Sweden Ab A state estimator
CN112101304A (zh) * 2020-11-06 2020-12-18 腾讯科技(深圳)有限公司 数据处理方法、装置、存储介质及设备
CN112154455A (zh) * 2019-09-29 2020-12-29 深圳市大疆创新科技有限公司 数据处理方法、设备和可移动平台
US20210001868A1 (en) * 2019-07-02 2021-01-07 Mitsubishi Electric Research Laboratories, Inc. Receding Horizon State Estimator
