WO2022179047A1 - Method and apparatus for estimating state information - Google Patents

Method and apparatus for estimating state information

Info

Publication number
WO2022179047A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
current
previous
image
target object
Prior art date
Application number
PCT/CN2021/109535
Other languages
English (en)
French (fr)
Inventor
蔡之奡
李天威
王笑非
穆北鹏
刘一龙
童哲航
王宇桐
Original Assignee
魔门塔(苏州)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 魔门塔(苏州)科技有限公司
Publication of WO2022179047A1 publication Critical patent/WO2022179047A1/zh

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures

Definitions

  • the present invention relates to the field of automation technology, and in particular, to a state information estimation method and device.
  • a system for estimating the motion state information of a target, i.e. a vehicle or a robot, can be called a state estimator of the robot.
  • the method for estimating the motion state information of the target is generally divided into two schemes: filtering and optimization.
  • the filtering scheme has good real-time performance and high precision, and is easier to deploy in solutions for automatic driving of vehicles and robot motion.
  • at present, autonomous vehicles are usually equipped with multiple cameras with different orientations to capture the surrounding environment of the vehicle and obtain surrounding environment information, which is then combined with the sensor data collected by other sensors for positioning, so as to ensure accurate positioning of the vehicle and thus its safe driving.
  • the present invention provides a state information estimation method and device, so as to realize the estimation of the state information of an object of a multi-sensor system including any number of image acquisition devices.
  • the specific technical solutions are as follows:
  • an embodiment of the present invention provides a method for estimating state information, the method comprising: obtaining current images collected at the current moment by the multiple image acquisition devices disposed on a target object, and current sensor data collected by other sensors, wherein the other sensors include an IMU;
  • determining the initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that moment;
  • determining the matching point pairs between the current images, as the first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;
  • determining the matching point pairs between each current image and its previous image, as the second matching point pairs corresponding to the current images, by using the feature points detected in each current image and the feature points detected in its previous image;
  • determining the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the previous N moments of the current moment;
  • determining the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data.
  • the initial state information includes: initial velocity information and initial pose information, wherein the initial pose information includes: initial attitude information and initial position information;
  • the step of determining the initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that moment includes:
  • determining the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the moment before the current moment;
  • constructing a first state transition equation by using the angular velocity information at the previous moment and the previous attitude information in the previous state information, and determining the initial attitude information of the target object at the current moment by using the first state transition equation;
  • constructing a second state transition equation by using the previous attitude information, the acceleration information at the previous moment and the previous velocity information in the previous state information, and determining the initial velocity information of the target object at the current moment by using the second state transition equation;
  • constructing a third state transition equation by using the initial velocity information, the previous velocity information and the previous position information in the previous state information, and determining the initial position information of the target object at the current moment by using the third state transition equation.
  • the step of determining the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the previous N moments, includes:
  • determining, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the previous N moments, the device pose information of the image acquisition devices corresponding to the current images, the device pose information of the image acquisition devices corresponding to the images at the previous N moments, and the pose information of the target object corresponding to the current images and the images at the previous N moments.
  • the step of determining the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data includes: determining the intermediate state information of the target object at the current moment based on the current sensor data and the initial state information; determining the projection position information of the projected point, in the image where it is located, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device; constructing a reprojection error equation based on the projection position information corresponding to each feature point to be used and the image position information of each feature point to be used in the current image; and determining the current state information of the target object at the current moment based on the reprojection error equation.
  • the step of determining the current state information of the target object at the current moment based on the reprojection error equation includes: constructing a target measurement equation based on the reprojection error equation; and determining the current state information of the target object at the current moment by using the target measurement equation and a filter update equation.
  • an embodiment of the present invention provides an apparatus for estimating state information, the apparatus comprising:
  • a first obtaining module configured to obtain the current images collected at the current moment by the multiple image acquisition devices disposed on the target object and the current sensor data collected by other sensors, wherein the other sensors include an IMU;
  • a first determining module configured to determine the initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that moment;
  • a second determining module configured to determine the matching point pairs between the current images, as the first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;
  • a third determining module configured to determine the matching point pairs between each current image and its previous image, as the second matching point pairs corresponding to the current images, by using the feature points detected in each current image and the feature points detected in its previous image;
  • a fourth determining module configured to determine the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the previous N moments;
  • a fifth determining module configured to determine the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data.
  • the initial state information includes: initial velocity information and initial pose information, wherein the initial pose information includes: initial attitude information and initial position information;
  • the first determining module is specifically configured to determine the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the moment before the current moment;
  • construct a first state transition equation by using the angular velocity information at the previous moment and the previous attitude information in the previous state information, and determine the initial attitude information of the target object at the current moment by using the first state transition equation;
  • construct a second state transition equation by using the previous attitude information, the acceleration information at the previous moment and the previous velocity information in the previous state information, and determine the initial velocity information of the target object at the current moment by using the second state transition equation;
  • construct a third state transition equation by using the initial velocity information, the previous velocity information and the previous position information in the previous state information, and determine the initial position information of the target object at the current moment by using the third state transition equation.
  • the fourth determining module is specifically configured to determine, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the previous N moments of the current moment, the device pose information of the image acquisition devices corresponding to the current images, the device pose information of the image acquisition devices corresponding to the images at the previous N moments, and the pose information of the target object corresponding to the current images and the images at the previous N moments.
  • the fifth determining module includes:
  • a first determining unit configured to determine the intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;
  • a second determining unit configured to determine the projection position information of the projected point, in the image where it is located, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;
  • a construction unit configured to construct a reprojection error equation based on the projection position information corresponding to each feature point to be used and the image position information of each feature point to be used in the current image;
  • a third determining unit configured to determine the current state information of the target object at the current moment based on the reprojection error equation;
  • the third determining unit is specifically configured to construct a target measurement equation based on the reprojection error equation, and determine the current state information of the target object at the current moment by using the target measurement equation and a filter update equation.
  • with the method and apparatus for estimating state information, the current images collected at the current moment by the multiple image acquisition devices disposed on the target object and the current sensor data collected by other sensors are obtained, wherein the other sensors include an IMU; the initial state information of the target object at the current moment is determined by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that moment; the matching point pairs between the current images are determined, as the first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the corresponding image acquisition devices; the matching point pairs between each current image and its previous image are determined, as the second matching point pairs corresponding to the current images, by using the feature points detected in each current image and in its previous image; the three-dimensional position information corresponding to each feature point to be used is determined based on the first and second matching point pairs corresponding to the current images and to the images at the previous N moments; and the current state information of the target object at the current moment is determined based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data.
  • by applying the embodiments of the present invention, the state can be extended: the tracking results of the feature points of each image acquisition device, i.e. the second matching point pairs corresponding to the current images and the images at the previous N moments, and the feature point associations between the image acquisition devices, i.e. the first matching point pairs corresponding to the current images and the images at the previous N moments, are used to construct the three-dimensional position information corresponding to each feature point to be used, so as to build multi-state constraints; the current sensor data of the other sensors is fused to determine the current state information of the target object at the current moment, thereby obtaining a state estimation result with higher precision and higher robustness, and realizing the estimation of the state information of an object of a multi-sensor system containing any number of image acquisition devices.
  • in addition, based on the error-state Kalman filter, the IMU data of the moment before the current moment and the previous state information of the target object at that moment are used to obtain, by constructing the state transition equations, the initial state information of the target object at the current moment, which provides a basis for the subsequent determination of the current state information of the target object with higher precision and higher robustness.
  • FIG. 1 is a schematic flowchart of a state information estimation method provided by an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of an apparatus for estimating state information according to an embodiment of the present invention.
  • the present invention provides a state information estimation method and device, so as to realize the estimation of the state information of an object of a multi-sensor system including any number of image acquisition devices.
  • the embodiments of the present invention will be described in detail below.
  • FIG. 1 is a schematic flowchart of a state information estimation method provided by an embodiment of the present invention. The method may include the following steps:
  • S101 Obtain the current image collected by the multi-image collection device set on the target object at the current moment and the current sensor data collected by other sensors.
  • wherein, the other sensors include the IMU.
  • the state information estimation method provided by the embodiment of the present invention can be applied to any electronic device with computing capability, and the electronic device can be a terminal or a server.
  • in one implementation, the functional software implementing the method may exist in the form of separate client software, or in the form of a plug-in to currently related client software, for example in the form of a functional module of an automatic driving system; all of these are possible.
  • the electronic device may be a device disposed on the target object, or a device not disposed on the target object; both are possible.
  • in one case, the electronic device may be a multi-sensor state estimator. The multi-sensor state estimator takes as input the data of various sensors, for example data collected by the multiple image acquisition devices, the IMU (Inertial Measurement Unit), GNSS (Global Navigation Satellite System), the wheel odometer, i.e. the wheel speed sensor, and so on; under the assumption that the transformations between all sensors are approximately rigid-body transformations, it obtains the various kinds of state information of the entire multi-sensor system, i.e. of the target object on which the multi-sensor system is disposed, mainly including position information, attitude information and velocity information.
  • the target object can be an autonomous vehicle or an intelligent robot.
  • the multiple image acquisition devices disposed on the target object may form a system of any number of image acquisition devices installed on the same rigid body, at any positions and orientations.
  • during the travel of the target object, the multiple image acquisition devices and the other sensors can collect data in real time and send them to the electronic device, so that the electronic device obtains the images collected at the current moment by the multiple image acquisition devices disposed on the target object, as the current images, and obtains the sensor data collected by the other sensors from the previous moment to the current moment, as the current sensor data.
  • the other sensors may include, but are not limited to: an IMU (Inertial Measurement Unit), a wheel speed sensor and GNSS (Global Navigation Satellite System); they may further include: a GPS (Global Positioning System), radar, and so on.
  • the relative positions of the multi-image capturing device and the target object are fixed, and the relative positions of other sensors and the target object are fixed.
  • correspondingly, once the pose information of any one of the image acquisition devices, the target object and the other sensors is determined, the pose information of the others is also determined.
  • S102 Determine the initial state information of the target object at the current moment by using the IMU data corresponding to the previous moment of the current moment and the previous state information of the target object at the previous moment.
  • in this step, the electronic device can obtain the state information of the target object at the previous moment, as the previous state information, and obtain the IMU data corresponding to the moment before the current moment.
  • based on the error-state Kalman filter (ESKF), the initial state information of the target object at the current moment is predicted by using the IMU data corresponding to the previous moment and the previous state information.
  • the state information may include, but is not limited to: pose information and velocity information of the target object, wherein the pose information includes position information and attitude information.
  • the IMU data corresponding to the previous moment is: the IMU data collected by the IMU disposed on the target object from the moment before the previous moment up to the previous moment of the current moment.
  • the prediction of the state information of the target vehicle can be achieved through the following state transition equation. Specifically, it can be expressed by the following formula (1):

    $\hat{x}_k = f(\hat{x}_{k-1}, u_k, t_k - t_{k-1})$    (1)

  • where $t_{k-1}$ represents the moment before the current moment, $t_k$ represents the current moment, $\hat{x}_{k-1}$ represents the previous state information, $u_k$ represents the IMU data corresponding to the moment before the current moment, and $\hat{x}_k$ represents the initial state information at the current moment.
  • in another embodiment of the present invention, the initial state information $\hat{x}_k$ includes: initial velocity information $\hat{v}_k$ and initial pose information, wherein the initial pose information includes: initial attitude information $\hat{q}_k$ and initial position information $\hat{p}_k$.
  • the S102 may include the following steps 011-014:
  • 011 Determine the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the moment before the current moment.
  • 012 Use the angular velocity information at the previous moment and the previous attitude information in the previous state information to construct a first state transition equation; use the first state transition equation to determine the initial posture information of the target object at the current moment.
  • 013 Use the previous attitude information, the acceleration information at the previous moment, and the previous speed information in the previous state information to construct a second state transition equation; use the second state transition equation to determine the initial velocity information of the target object at the current moment.
  • 014 Use the initial speed information, the previous speed information, and the previous position information in the previous state information to construct a third state transition equation; use the third state transition equation to determine the initial position information of the target object at the current moment.
  • in this implementation, the electronic device removes the bias and the gravitational acceleration from the IMU data corresponding to the moment before the current moment, to obtain the angular velocity information and acceleration information of the target object at the previous moment, denoted by $\omega_{k-1}$ and $\alpha_{k-1}$ respectively.
  • further, a first state transition equation is constructed by using the angular velocity information at the previous moment and the previous attitude information in the previous state information; the initial attitude information of the target object at the current moment is determined by using the first state transition equation.
  • specifically, with $\Delta t = t_k - t_{k-1}$, the first state transition equation can be expressed by the following formula (2):

    $\hat{q}_k = \hat{q}_{k-1} \otimes q\{\omega_{k-1}\Delta t\}$    (2)

    where $\hat{q}_{k-1}$ represents the previous attitude information in the previous state information, $\otimes$ represents quaternion multiplication, and $q\{\cdot\}$ converts a rotation vector into a quaternion.
  • further, the electronic device constructs a second state transition equation by using the previous attitude information, the acceleration information at the previous moment and the previous velocity information in the previous state information; the second state transition equation is used to determine the initial velocity information of the target object at the current moment.
  • specifically, the second state transition equation can be expressed by the following formula (3):

    $\hat{v}_k = \hat{v}_{k-1} + R(\hat{q}_{k-1})\,\alpha_{k-1}\,\Delta t$    (3)

    where $\hat{v}_{k-1}$ represents the previous velocity information in the previous state information and $R(\cdot)$ maps a quaternion to its rotation matrix.
  • further, the electronic device constructs a third state transition equation by using the initial velocity information, the previous velocity information and the previous position information in the previous state information; the third state transition equation is used to determine the initial position information of the target object at the current moment.
  • specifically, the third state transition equation can be expressed by the following formula (4):

    $\hat{p}_k = \hat{p}_{k-1} + \tfrac{1}{2}(\hat{v}_{k-1} + \hat{v}_k)\,\Delta t$    (4)

    where $\hat{p}_{k-1}$ represents the previous position information in the previous state information.
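To make steps 011-014 concrete, the following is a minimal numerical sketch of one prediction step under formulas (2)-(4). The quaternion convention ([w, x, y, z], Hamilton product) and all helper names are illustrative assumptions, not something prescribed by the patent:

```python
import numpy as np

def quat_mul(q1, q2):
    # Hamilton quaternion product, q = [w, x, y, z].
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotvec_to_quat(rv):
    # Rotation vector (axis * angle) -> unit quaternion.
    angle = np.linalg.norm(rv)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = rv / angle
    return np.concatenate([[np.cos(angle / 2)], axis * np.sin(angle / 2)])

def quat_to_rotmat(q):
    # Rotation matrix of a unit quaternion.
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def propagate(q_prev, v_prev, p_prev, omega, acc, dt):
    """One prediction step; omega and acc are the bias- and
    gravity-compensated IMU readings of the previous moment."""
    q_init = quat_mul(q_prev, rotvec_to_quat(omega * dt))   # formula (2)
    v_init = v_prev + quat_to_rotmat(q_prev) @ acc * dt     # formula (3)
    p_init = p_prev + 0.5 * (v_prev + v_init) * dt          # formula (4)
    return q_init, v_init, p_init
```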
  • the electronic device may pre-store the relative pose relationship between the image capturing devices.
  • the electronic device may first use a preset feature point detection algorithm to perform feature point detection on each current image to obtain the feature points in each current image.
  • the preset feature point detection algorithm may be a detection algorithm that can detect feature points in an image, such as a FAST feature point detection algorithm.
  • the electronic device determines the region of interest in each current image according to the relative pose relationship between the image acquisition devices corresponding to each current image, where the region of interest in the current image is an area that overlaps with other current images.
  • a preset feature descriptor extraction algorithm is used to extract feature descriptors for the feature points in the region of interest in the current image, and the feature descriptors corresponding to each feature point in the region of interest in the current image are obtained.
  • further, based on the feature descriptors, the feature points in the regions of interest of the current images are matched against each other, and the feature point pairs matched between the current images, i.e. the matching point pairs, are obtained as the first matching point pairs corresponding to the current images.
  • the first matching point pairs corresponding to the current image, i.e. the k-th frame image, can be indexed by the pair of image acquisition devices involved, where $c_i$, $c_j$ represent the i-th image acquisition device and the j-th image acquisition device respectively; the values of $c_i$, $c_j$ are integers in $[1, n]$, where $n$ represents the number of image acquisition devices disposed on the target vehicle.
  • the preset feature descriptor extraction algorithm may be, for example, a BRIEF feature descriptor extraction algorithm.
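As an illustration of this matching step, the sketch below detects FAST corners inside the overlap regions of two current images and matches binary descriptors by Hamming distance. It is one plausible reading of S103 rather than the patent's exact procedure; ORB's descriptor (a rotated BRIEF) is used as a stand-in, since the plain BRIEF extractor lives in OpenCV's contrib package:

```python
import cv2

def first_matching_pairs(img_i, img_j, roi_mask_i, roi_mask_j):
    """Match feature points between two current images with overlapping
    fields of view. The ROI masks mark the overlap regions derived from
    the known relative pose of the two cameras."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kps_i = fast.detect(img_i, roi_mask_i)
    kps_j = fast.detect(img_j, roi_mask_j)

    orb = cv2.ORB_create()                     # BRIEF-like binary descriptors
    kps_i, des_i = orb.compute(img_i, kps_i)
    kps_j, des_j = orb.compute(img_j, kps_j)

    # Hamming distance for binary descriptors; cross-check keeps only
    # mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_i, des_j)

    # Each match is one "first matching point pair" (pixel coordinates).
    return [(kps_i[m.queryIdx].pt, kps_j[m.trainIdx].pt) for m in matches]
```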
  • S104 Using the feature points detected in each current image and the feature points detected in the previous image, determine a matching point pair between each current image and its previous image as a second matching point pair corresponding to the current image.
  • in the embodiment of the present invention, for each current image, the electronic device uses the feature points detected in the current image and the feature points detected in its previous image, and uses the sparse optical flow KLT algorithm to track the feature points between the previous image and the current image, to obtain the matched feature point pairs between the current image and its previous image, i.e. the matching point pairs, as the second matching point pairs corresponding to the current image.
  • the second matching point pairs corresponding to the current image can be indexed by the image acquisition device that collected it, where $c_i$ represents the i-th image acquisition device; the value of $c_i$ is an integer in $[1, n]$, where $n$ represents the number of image acquisition devices disposed on the target vehicle.
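A minimal sketch of this tracking step with OpenCV's pyramidal KLT implementation follows; the function name and parameter values are illustrative assumptions:

```python
import cv2
import numpy as np

def second_matching_pairs(prev_img, curr_img, prev_pts):
    """Track feature points from one camera's previous image into its
    current image with sparse pyramidal Lucas-Kanade optical flow."""
    prev_pts = np.float32(prev_pts).reshape(-1, 1, 2)
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_img, curr_img, prev_pts, None, winSize=(21, 21), maxLevel=3)
    # Keep successfully tracked points; each surviving (previous point,
    # current point) pair is one "second matching point pair".
    ok = status.reshape(-1) == 1
    return list(zip(prev_pts[ok].reshape(-1, 2), curr_pts[ok].reshape(-1, 2)))
```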
  • the embodiment of the present invention does not limit the sequence of execution of S104 and S103.
  • the electronic device can execute S103 first and then execute S104, or execute S104 first and then execute S103, or execute S103 and S104 in parallel.
  • S105 Determine the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the previous N moments of the current moment.
  • the images at the previous N moments at the current moment refer to the images collected by the multi-image acquisition device of the target object at each moment within the N moments before the current moment.
  • after obtaining the first matching point pairs and the second matching point pairs corresponding to the current images, the electronic device refers to the multi-state constraint Kalman filter (MSCKF) and, in order to construct multiple constraint conditions for the target object carrying multiple image acquisition devices, performs the state augmentation of the multi-state constraint Kalman filter. Specifically, the corresponding sliding window is first extended: the length of the sliding window is set to N+1, and the sliding window contains the pose information in the state information of the target object, which is expressed as follows:
  • $x_k = [\pi_k, \pi_{k-1}, \ldots, \pi_e, \ldots, \pi_{k-N}]$, where $e$ is an integer in $[k-N, k]$;
  • $x_k$ represents the state information in the sliding window corresponding to the current image;
  • $\pi_k$ represents the pose information in the state information of the target object corresponding to the current image.
  • the state information of the IMU can be directly used to represent the state information of the target object.
  • the sliding window contains the state information of the target object, which is the pose information of the IMU set by the target object.
  • the state information of the target object can be determined by using the state information of the IMU and the pre-stored pose transformation relationship between the IMU and the target object, which is all possible.
  • $\pi_e$ represents the pose information, in the world coordinate system, of the target object corresponding to the image at the e-th moment among the current image and the images at the previous N moments.
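For illustration only, such a sliding window of poses can be kept in a structure like the following (the class name and representation are assumptions, not the patent's):

```python
from collections import deque

class SlidingWindow:
    """Pose window x_k = [pi_k, ..., pi_{k-N}] for the MSCKF-style
    state augmentation; the window length is N + 1."""
    def __init__(self, N):
        self.poses = deque(maxlen=N + 1)   # oldest pose drops out automatically

    def augment(self, pose_k):
        # On each new set of current images, clone the current pose of the
        # target object into the window.
        self.poses.appendleft(pose_k)
```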
  • specifically, the first and second matching point pairs corresponding to each of the images at the previous N moments are obtained; then, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the previous N moments, the device pose information of the image acquisition devices corresponding to the current images and to the images at the previous N moments, the three-dimensional position information corresponding to each feature point to be used is determined.
  • the feature points to be used are: among the feature points detected in the current images and the images at the previous N moments, those feature points whose three-dimensional position information can be determined.
  • the process of determining the first matching point pairs corresponding to the image at each of the previous N moments is the same as the process of determining the first matching point pairs corresponding to the current image, and the process of determining the second matching point pairs corresponding to those images is the same as the process of determining the second matching point pairs corresponding to the current image; details are not repeated here.
  • N is a positive integer.
  • the device pose information of the image capture device corresponding to the image may refer to the device pose information when the image capture device captures the image.
  • the S105 may include the following steps:
  • according to a triangulation algorithm, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the previous N moments of the current moment, the device pose information of the image acquisition devices corresponding to the current images, the device pose information of the image acquisition devices corresponding to the images at the previous N moments, and the pose information of the target object corresponding to the current images and the images at the previous N moments, the three-dimensional position information corresponding to each feature point to be used is determined.
  • the relative positional relationship between the target object and the set image acquisition devices is determined, and the electronic device may pre-store the relative positional relationship between the target object and each of the set image acquisition devices.
  • the pose information of the target object is determined, the pose information of each image acquisition device is determined.
  • specifically, according to the triangulation algorithm, based on the image position information, in their respective images, of the feature points in the first and second matching point pairs corresponding to the current images, the image position information of the feature points in the first and second matching point pairs corresponding to the image at each of the previous N moments, the device pose information of the image acquisition devices corresponding to the current images and to the images at the previous N moments, and the pose information of the target object corresponding to the current images and each of the images at the previous N moments, i.e. the matching results of all feature points of the current images and their previous N moments' images with respect to each pose in the sliding window corresponding to the current images, triangulation is performed to determine the three-dimensional position information corresponding to each feature point to be used.
  • for the specific calculation process of the triangulation algorithm, reference may be made to the calculation process of the triangulation algorithm in the related art, which is not described in detail here.
  • furthermore, the three-dimensional position information corresponding to each feature point to be used, the image position information of each feature point to be used, and the device pose information and intrinsic parameter matrix of the image acquisition device corresponding to the image where each feature point to be used is located can be used to construct a reprojection error to be minimized, and the Levenberg-Marquardt method is used to iteratively optimize it, so as to obtain more accurate three-dimensional position information corresponding to each feature point to be used.
  • the set of three-dimensional position information corresponding to the feature points to be used can be expressed as $\{p_q^w\}$, where $p_q^w$ represents the three-dimensional position information, in world coordinates, of the spatial point corresponding to the q-th feature point to be used.
  • the image position information of each feature point to be used is the image position information of each feature point to be used in the image where it is located.
  • the images where the feature points to be used are located include the current images and the images at the previous N moments of the current moment.
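For illustration, the sketch below shows the standard linear (DLT) multi-view triangulation that a step like S105 typically relies on; the patent does not prescribe this exact formulation:

```python
import numpy as np

def triangulate_point(proj_mats, pixels):
    """Triangulate one feature point to be used from all views observing it.
    proj_mats: 3x4 projection matrices K [R | t] built from the device pose
    of each observing camera at each moment; pixels: the matching (u, v)
    observations gathered from the first and second matching point pairs."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])   # each view contributes two linear
        rows.append(v * P[2] - P[1])   # constraints on the homogeneous point
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]                # 3D position in world coordinates
```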
  • S106 Determine the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be used in the current image, the initial state information, and the current sensor data.
  • in this step, the electronic device can first refer to the error-state Kalman filter theory and, based on the current sensor data corresponding to the current moment obtained first by the electronic device and the initial state information, update the state information of the target object to obtain the intermediate state information of the target object. Then, referring to the error-state Kalman filter theory, the three-dimensional position information, the image position information of each feature point to be used in the current image and the intermediate state information are used to construct the corresponding measurement equation; the current state information of the target object at the current moment is then determined based on the measurement equation.
  • the current state information may be state information in the world coordinate system.
  • in this way, the state can be extended: the tracking results of the feature points of each image acquisition device, i.e. the second matching point pairs corresponding to the current images and the images at the previous N moments, and the feature point associations between the image acquisition devices, i.e. the first matching point pairs corresponding to the current images and the images at the previous N moments, are used to construct the three-dimensional position information corresponding to each feature point to be used, so as to build multi-state constraints; the current sensor data of the other sensors is fused to determine the current state information of the target object at the current moment, thereby obtaining a state estimation result with higher precision and higher robustness, and realizing the estimation of the state information of an object of a multi-sensor system containing any number of image acquisition devices.
  • the S106 may include the following steps 021-024:
  • the current sensor data may include, but is not limited to: the current IMU data collected by the IMU from the previous moment to the current moment, the current GNSS data collected by the GNSS from the previous moment to the current moment, and the current wheel speed data collected by the wheel speed sensor from the previous moment to the current moment. That is, the electronic device updates and iterates the initial state information based on the obtained current IMU data, current GNSS data and current wheel speed data, to obtain the intermediate state information of the target object.
  • each of these updates can be described by a measurement equation of the following general form (5):

    $z_k = h(x_k) + n_k, \quad n_k \sim \mathcal{N}(0, R_k)$    (5)

  • where $z_k$ represents the measured value, i.e. the current IMU data, the current GNSS data or the current wheel speed data; $R_k$ represents the measurement noise; and $h(\cdot)$ represents the function that maps the system state to the measured value, where the system state refers to the state information of the target object output by the system. Before the state update, the initial state information serves as the initial value of the system state.
  • $z_{GNSS}$ represents a certain frame of GNSS data in the current GNSS data, and $R_{GNSS}$ represents the system measurement noise when this frame of GNSS data is substituted into the state update system that updates the state information of the target object.
  • for the static update, 0 is the pseudo-measurement constructed from a certain frame of IMU data in the current IMU data, and $R_{static}$ represents the system measurement noise when this frame of IMU data is substituted into the state update system that updates the state information of the target object.
  • the state information of the target object may not be updated by using the current IMU data.
  • the filter update amount of the state update system can be calculated according to each of the above measurement equations.
  • the update equations corresponding to the filter update amount can be expressed by the following formula (6):

    $K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}$
    $\hat{x}_k = \hat{x}_{k|k-1} + K_k\,(z_k - h(\hat{x}_{k|k-1}))$
    $P_k = (I - K_k H_k)\,P_{k|k-1}$    (6)

  • where $H_k$ is the Jacobian of $h(\cdot)$ with respect to the system state, and $P_{k|k-1}$ represents the predicted state covariance, whose initial value is the initial state covariance; the predicted state covariance can be expressed by the following formula (7):

    $P_{k|k-1} = F_{k-1} P_{k-1} F_{k-1}^T + Q_k$    (7)

  • where $F_{k-1}$ is the state transition Jacobian, and $Q_k$ is the state transition error, which is generally determined by the noise parameters of the IMU and is a constant value. $K_k$ denotes the Kalman gain, indicating the magnitude of the adjustment needed to the current system state; $\hat{x}_k$ represents the current state information of the target object at the current moment, and $P_k$ represents the updated state covariance.
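A compact numpy sketch of the update in formula (6), under the standard EKF reading reconstructed above (names and shapes are assumptions):

```python
import numpy as np

def filter_update(x, P, z, h, H, R):
    """Fold one measurement z, with model h, Jacobian H and noise R,
    into the state x and covariance P."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain K_k
    x = x + K @ (z - h(x))               # corrected state
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance P_k
    return x, P
```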
  • specifically, the electronic device uses the current sensor data and the initial state information to determine the intermediate state information of the target object at the current moment, and then constructs the reprojection error equation by using the pose information in the intermediate state information of the target object, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, the three-dimensional position information corresponding to each feature point to be used, the image position information of each feature point to be used, and the device pose information and intrinsic parameter matrix of the image acquisition device corresponding to the image where each feature point to be used is located; the current state information of the target object at the current moment is then determined based on the reprojection error equation.
  • the process of constructing the reprojection error equation may be: for each feature point to be used, the spatial point corresponding to the feature point is transformed from the world coordinate system into the coordinate system of the target object, then into the device coordinate system of the image acquisition device corresponding to the image where the feature point is located, and is then projected into the image coordinate system of that image, obtaining the projection position information of the projected point, in that image, of the spatial point corresponding to the feature point to be used.
  • further, the image position information of each feature point to be used and the projection position information of the projected point, in the image where the feature point is located, of its corresponding spatial point are used to construct the reprojection error equation.
  • theoretically, the projection position information of the projected point, in the image where a feature point to be used is located, of the spatial point corresponding to that feature point coincides with the image position of the feature point in that image; accordingly, the reprojection error equation can be expressed by the following formula (8):

    $r_q^{c_i} = z_q^{c_i} - \hat{z}_q^{c_i}$    (8)

  • where $z_q^{c_i}$ represents the image position information of the q-th feature point to be used in the image collected by the $c_i$-th image acquisition device, i.e. the visual measurement, and $\hat{z}_q^{c_i}$ represents the projection position information of the corresponding projected point.
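For illustration, one term of the reprojection error of formula (8) can be computed as below; the frame conventions (4x4 homogeneous world-to-body and body-to-camera transforms) are assumptions made for the sketch:

```python
import numpy as np

def reprojection_residual(p_w, T_w_b, T_b_c, K, z):
    """Residual for one observation: p_w is the 3D point in world
    coordinates, T_w_b the pose of the target object in the world,
    T_b_c the camera extrinsics in the body frame, K the 3x3 intrinsic
    matrix, z the measured pixel position of the feature point."""
    p_h = np.append(p_w, 1.0)
    p_c = np.linalg.inv(T_b_c) @ np.linalg.inv(T_w_b) @ p_h  # world -> body -> camera
    uvw = K @ p_c[:3]
    z_hat = uvw[:2] / uvw[2]             # projected pixel position
    return z - z_hat                     # reprojection error r
```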
  • the 024 may include the following steps 0241-0242:
  • 0242 Determine the current state information of the target object at the current moment by using the target measurement equation and the filter update equation.
  • the target measurement equation is constructed based on the above-mentioned reprojection error equation.
  • after linearization, the visual measurement equation can be expressed as: $r = H_x\,\tilde{x} + H_f\,\tilde{p} + n$, where $\tilde{x}$ is the error of the state quantities in the sliding window and $\tilde{p}$ is the error of the three-dimensional positions of the feature points to be used.
  • since the state quantities of the visual measurement equation would otherwise contain the three-dimensional position information of the feature points to be used, and the feature point positions are not used as state quantities in the target measurement equation, a left null-space elimination is performed to remove the residual effect of $H_f\,\tilde{p}$, and the visual target measurement equation is finally obtained.
  • by solving with the target measurement equation and the filter update equation, the pose information in the sliding window corresponding to the current image and the system state corresponding to the current moment, i.e. the current state information of the target object at the current moment, are obtained.
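A minimal sketch of this left null-space elimination, in the standard MSCKF formulation assumed above:

```python
import numpy as np
from scipy.linalg import null_space

def marginalize_features(r, H_x, H_f):
    """Project the stacked visual residual r = H_x dx + H_f dp + n onto the
    left null space of H_f, so that the feature-position error dp drops out
    and only the sliding-window states remain in the measurement equation."""
    A = null_space(H_f.T)         # columns span the left null space of H_f
    return A.T @ r, A.T @ H_x     # reduced residual and Jacobian
```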
  • an embodiment of the present invention provides an apparatus for estimating state information.
  • the apparatus may include:
  • the first obtaining module 210 is configured to obtain the current image collected by the multi-image collecting device set by the target object at the current moment and the current sensor data collected by other sensors, wherein the other sensors include an IMU;
  • the first determination module 220 is configured to use the IMU data corresponding to the previous moment of the current moment and the previous state information of the target object at the previous moment to determine the initial state of the target object at the current moment. status information;
  • the second determining module 230 is configured to determine the matching point pairs between the current images, as the first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;
  • the third determining module 240 is configured to determine the matching point pairs between each current image and its previous image, as the second matching point pairs corresponding to the current images, by using the feature points detected in each current image and the feature points detected in its previous image;
  • the fourth determination module 250 is configured to determine based on the first matching point pair and the second matching point pair corresponding to the current image and the first matching point pair and the second matching point pair corresponding to the images at the previous N moments of the current moment. three-dimensional position information corresponding to each feature point to be utilized;
  • the fifth determination module 260 is configured to determine the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data .
  • in this way, the state can be extended: the tracking results of the feature points of each image acquisition device, i.e. the second matching point pairs corresponding to the current images and the images at the previous N moments, and the feature point associations between the image acquisition devices, i.e. the first matching point pairs corresponding to the current images and the images at the previous N moments, are used to construct the three-dimensional position information corresponding to each feature point to be used, so as to build multi-state constraints; the current sensor data of the other sensors is fused to determine the current state information of the target object at the current moment, thereby obtaining a state estimation result with higher precision and higher robustness, and realizing the estimation of the state information of an object of a multi-sensor system containing any number of image acquisition devices.
  • the initial state information includes: initial velocity information and initial pose information, wherein the initial pose information includes: initial attitude information and initial position information;
  • the first determining module 220 is specifically configured to determine the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the previous moment of the current moment;
  • construct a first state transition equation by using the angular velocity information at the previous moment and the previous attitude information in the previous state information, and determine the initial attitude information of the target object at the current moment by using the first state transition equation;
  • construct a second state transition equation by using the previous attitude information, the acceleration information at the previous moment and the previous velocity information in the previous state information, and determine the initial velocity information of the target object at the current moment by using the second state transition equation;
  • construct a third state transition equation by using the initial velocity information, the previous velocity information and the previous position information in the previous state information, and determine the initial position information of the target object at the current moment by using the third state transition equation.
  • the fourth determining module 250 is specifically configured to determine, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the previous N moments of the current moment, the device pose information of the image acquisition devices corresponding to the current images, the device pose information of the image acquisition devices corresponding to the images at the previous N moments, and the pose information of the target object corresponding to the current images and the images at the previous N moments.
  • the fifth determining module 260 includes:
  • a first determining unit (not shown in the figure), configured to determine the intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;
  • a second determining unit (not shown in the figure) configured to determine the projection position information of the projected point, in the image where it is located, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;
  • a construction unit (not shown in the figure) is configured to construct a reprojection error equation based on the corresponding projection position information of each feature point to be utilized and the image position information of each feature point to be utilized in the current image;
  • a third determining unit (not shown in the figure) is configured to determine the current state information of the target object at the current moment based on the reprojection error equation.
  • the third determining unit is specifically configured to construct a target measurement equation based on the reprojection error equation, and to determine the current state information of the target object at the current moment by using the target measurement equation and a filter update equation.
  • the modules of the apparatus in this embodiment may be distributed in the apparatus as described in the embodiment, or may be located, with corresponding changes, in one or more apparatuses different from that of this embodiment.
  • the modules in the foregoing embodiments may be combined into one module, or may be further split into multiple sub-modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a method and apparatus for estimating state information. The method includes: obtaining current images of a target object at the current moment and current sensor data (S101); determining initial state information of the target object by using the IMU data corresponding to the previous moment and the previous state information (S102); determining the first matching point pairs of the current images by using the feature points of the current images (S103); determining the second matching point pairs between each current image and its previous image by using the feature points in the current images and in the previous images (S104); determining the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs of the current images and the first and second matching point pairs of the images at the previous N moments (S105); and determining the current state information of the target object based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data (S106), so as to realize the estimation of the state information of an object including any number of image acquisition devices.

Description

Method and apparatus for estimating state information

TECHNICAL FIELD

The present invention relates to the field of automation technology, and in particular to a method and apparatus for estimating state information.

BACKGROUND

In mapping and positioning solutions for vehicle automatic driving and robot motion, how to estimate accurate and reliable motion state information through the various sensors installed on the vehicle or robot is a very important problem. A system that estimates the motion state information of the target, i.e. the vehicle or robot, can be called a state estimator of the robot.

In the related art, methods for estimating the motion state information of a target generally fall into two schemes: filtering and optimization. Among them, the filtering scheme has good real-time performance and high precision, and is more easily deployed in a lightweight manner in solutions for vehicle automatic driving and robot motion.

At present, in order to ensure the safe driving of autonomous vehicles, an autonomous vehicle is usually equipped with multiple cameras with different orientations to capture the surrounding environment of the vehicle and obtain surrounding environment information, which is then combined with the sensor data collected by multiple other sensors for positioning, so as to ensure accurate positioning of the vehicle and thus its safe driving.

Current filtering schemes have no capability of estimating vehicle state information for a multi-sensor system containing an arbitrary number of cameras. Therefore, how to provide a method for estimating the vehicle state information of a multi-sensor system containing any number of cameras has become an urgent problem to be solved.
SUMMARY OF THE INVENTION

The present invention provides a method and apparatus for estimating state information, so as to realize the estimation of the state information of an object of a multi-sensor system containing any number of image acquisition devices. The specific technical solutions are as follows:

In a first aspect, an embodiment of the present invention provides a method for estimating state information, the method comprising:

obtaining current images collected at the current moment by the multiple image acquisition devices disposed on a target object, and current sensor data collected by other sensors, wherein the other sensors include an IMU;

determining the initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that moment;

determining the matching point pairs between the current images, as the first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;

determining the matching point pairs between each current image and its previous image, as the second matching point pairs corresponding to the current images, by using the feature points detected in each current image and the feature points detected in its previous image;

determining the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the previous N moments of the current moment;

determining the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data.
Optionally, the initial state information includes: initial velocity information and initial pose information, wherein the initial pose information includes: initial attitude information and initial position information;

the step of determining the initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that moment includes:

determining the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the moment before the current moment;

constructing a first state transition equation by using the angular velocity information at the previous moment and the previous attitude information in the previous state information, and determining the initial attitude information of the target object at the current moment by using the first state transition equation;

constructing a second state transition equation by using the previous attitude information, the acceleration information at the previous moment and the previous velocity information in the previous state information, and determining the initial velocity information of the target object at the current moment by using the second state transition equation;

constructing a third state transition equation by using the initial velocity information, the previous velocity information and the previous position information in the previous state information, and determining the initial position information of the target object at the current moment by using the third state transition equation.
Optionally, the step of determining the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the previous N moments, includes:

determining, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the previous N moments, the device pose information of the image acquisition devices corresponding to the current images, the device pose information of the image acquisition devices corresponding to the images at the previous N moments, and the pose information of the target object corresponding to the current images and the images at the previous N moments.

Optionally, the step of determining the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data includes:

determining the intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;

determining the projection position information of the projected point, in the image where it is located, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;

constructing a reprojection error equation based on the projection position information corresponding to each feature point to be used and the image position information of each feature point to be used in the current image;

determining the current state information of the target object at the current moment based on the reprojection error equation.

Optionally, the step of determining the current state information of the target object at the current moment based on the reprojection error equation includes:

constructing a target measurement equation based on the reprojection error equation;

determining the current state information of the target object at the current moment by using the target measurement equation and a filter update equation.
In a second aspect, an embodiment of the present invention provides an apparatus for estimating state information, the apparatus comprising:

a first obtaining module configured to obtain the current images collected at the current moment by the multiple image acquisition devices disposed on the target object and the current sensor data collected by other sensors, wherein the other sensors include an IMU;

a first determining module configured to determine the initial state information of the target object at the current moment by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that moment;

a second determining module configured to determine the matching point pairs between the current images, as the first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;

a third determining module configured to determine the matching point pairs between each current image and its previous image, as the second matching point pairs corresponding to the current images, by using the feature points detected in each current image and the feature points detected in its previous image;

a fourth determining module configured to determine the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the previous N moments;

a fifth determining module configured to determine the current state information of the target object at the current moment based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data.
Optionally, the initial state information includes: initial velocity information and initial pose information, wherein the initial pose information includes: initial attitude information and initial position information;

the first determining module is specifically configured to determine the angular velocity information and acceleration information of the target object at the previous moment by using the IMU data corresponding to the moment before the current moment;

construct a first state transition equation by using the angular velocity information at the previous moment and the previous attitude information in the previous state information, and determine the initial attitude information of the target object at the current moment by using the first state transition equation;

construct a second state transition equation by using the previous attitude information, the acceleration information at the previous moment and the previous velocity information in the previous state information, and determine the initial velocity information of the target object at the current moment by using the second state transition equation;

construct a third state transition equation by using the initial velocity information, the previous velocity information and the previous position information in the previous state information, and determine the initial position information of the target object at the current moment by using the third state transition equation.

Optionally, the fourth determining module is specifically configured to determine, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images at the previous N moments, the device pose information of the image acquisition devices corresponding to the current images, the device pose information of the image acquisition devices corresponding to the images at the previous N moments, and the pose information of the target object corresponding to the current images and the images at the previous N moments.

Optionally, the fifth determining module includes:

a first determining unit configured to determine the intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;

a second determining unit configured to determine the projection position information of the projected point, in the image where it is located, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each of the images at the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;

a construction unit configured to construct a reprojection error equation based on the projection position information corresponding to each feature point to be used and the image position information of each feature point to be used in the current image;

a third determining unit configured to determine the current state information of the target object at the current moment based on the reprojection error equation.

Optionally, the third determining unit is specifically configured to construct a target measurement equation based on the reprojection error equation;

and determine the current state information of the target object at the current moment by using the target measurement equation and a filter update equation.
As can be seen from the above, with the method and apparatus for estimating state information provided by the embodiments of the present invention, the current images collected at the current moment by the multiple image acquisition devices disposed on the target object and the current sensor data collected by other sensors are obtained, wherein the other sensors include an IMU; the initial state information of the target object at the current moment is determined by using the IMU data corresponding to the moment before the current moment and the previous state information of the target object at that moment; the matching point pairs between the current images are determined, as the first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the corresponding image acquisition devices; the matching point pairs between each current image and its previous image are determined, as the second matching point pairs corresponding to the current images, by using the feature points detected in each current image and in its previous image; the three-dimensional position information corresponding to each feature point to be used is determined based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images at the previous N moments; and the current state information of the target object at the current moment is determined based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data.

By applying the embodiments of the present invention, the state can be extended: the tracking results of the feature points of each image acquisition device, i.e. the second matching point pairs corresponding to the current images and the images at the previous N moments, and the feature point associations between the image acquisition devices, i.e. the first matching point pairs corresponding to the current images and the images at the previous N moments, are used to construct the three-dimensional position information corresponding to each feature point to be used, so as to build multi-state constraints; the current sensor data of the other sensors is fused to determine the current state information of the target object at the current moment, thereby obtaining a state estimation result with higher precision and higher robustness, and realizing the estimation of the state information of an object of a multi-sensor system containing any number of image acquisition devices. Of course, any product or method implementing the present invention does not necessarily need to achieve all of the advantages described above at the same time.

The innovative points of the embodiments of the present invention include:

1. The state can be extended: the tracking results of the feature points of each image acquisition device, i.e. the second matching point pairs corresponding to the current images and the images at the previous N moments, and the feature point associations between the image acquisition devices, i.e. the first matching point pairs corresponding to the current images and the images at the previous N moments, are used to construct the three-dimensional position information corresponding to each feature point to be used, so as to build multi-state constraints; the current sensor data of the other sensors is fused to determine the current state information of the target object at the current moment, thereby obtaining a state estimation result with higher precision and higher robustness, and realizing the estimation of the state information of an object of a multi-sensor system containing any number of image acquisition devices.

2. Based on the error-state Kalman filter, the IMU data of the moment before the current moment and the previous state information of the target at that moment are used to obtain, by constructing the state transition equations, the initial state information of the target object at the current moment, which provides a basis for the subsequent determination of the current state information of the target object with higher precision and higher robustness.

3. Considering that the other sensor data arrives earlier than the images, the intermediate state information of the target object at the current moment is first determined based on the earlier-arriving current sensor data and the initial state information; then, referring to the error-state Kalman filter theory, a reprojection error equation is constructed from the projection position information of the spatial points corresponding to the feature points to be used in the images and the image position information of those feature points, and a target measurement equation is then constructed so as to optimize the intermediate state information, obtaining current state information with high precision and high robustness.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a state information estimation method provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a state information estimation apparatus provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are merely some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
It should be noted that the terms "comprise" and "have" in the embodiments and drawings of the present invention, and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product or device.
The present invention provides a state information estimation method and apparatus to estimate the state information of an object carrying a multi-sensor system with an arbitrary number of image acquisition devices. The embodiments of the present invention are described in detail below.
Fig. 1 is a schematic flowchart of a state information estimation method provided by an embodiment of the present invention. The method may include the following steps:
S101: obtaining current images collected at the current moment by multiple image acquisition devices disposed on a target object and current sensor data collected by other sensors.
The other sensors include an IMU.
The state information estimation method provided by the embodiments of the present invention may be applied to any electronic device with computing capability, and the electronic device may be a terminal or a server. In one implementation, the functional software implementing the method may exist in the form of stand-alone client software, or as a plug-in of existing client software, for example as a functional module of an automated driving system; both are possible. The electronic device may be a device disposed on the target object, or a device not disposed on the target object; both are possible.
In one case, the electronic device may be a multi-sensor state estimator, which takes as input the data collected by multiple kinds of sensors, such as multiple image acquisition devices, an IMU (Inertial Measurement Unit), a GNSS (Global Navigation Satellite System) receiver, and a wheel speed sensor (wheel odometer). Under the assumption that all sensors are approximately related to one another by rigid-body transformations, it obtains the various kinds of state information of the whole multi-sensor system, i.e., of the target object on which the multi-sensor system is disposed, mainly including position information, attitude information and velocity information.
The target object may be an autonomous vehicle or an intelligent robot. The multiple image acquisition devices disposed on the target object may form a multi-camera system of an arbitrary number of devices mounted on the same rigid body at arbitrary positions and orientations.
While the target object is moving, the multiple image acquisition devices and the other sensors can collect data in real time and send them to the electronic device, so that the electronic device obtains the images collected at the current moment by the multiple image acquisition devices disposed on the target object as the current images, and obtains the sensor data collected by the other sensors from the moment preceding the current moment up to the current moment as the current sensor data.
The other sensors may include, but are not limited to, an IMU, a wheel speed sensor and a GNSS receiver, and may further include a GPS (Global Positioning System) receiver, radar, and the like.
In one case, it can be assumed that the relative positions between the multiple image acquisition devices and the target object are fixed, and that the relative positions between the other sensors and the target object are fixed. Accordingly, once the pose information of any one of the image acquisition devices, the target object and the other sensors is determined, the pose information of the others is determined as well.
S102: determining initial state information of the target object at the current moment by using the IMU data corresponding to the moment preceding the current moment and the previous state information of the target object at the preceding moment.
In this step, the electronic device may obtain the state information of the target object at the preceding moment as the previous state information, and obtain the IMU data corresponding to the moment preceding the current moment. Based on an error-state Kalman filter (ESKF), the IMU data corresponding to the preceding moment and the previous state information are used to predict the initial state information of the target object at the current moment. The state information may include, but is not limited to, the pose information and velocity information of the target object, where the pose information includes position information and attitude information. The IMU data corresponding to the preceding moment are the IMU data collected by the IMU disposed on the target object between the moment preceding the preceding moment and the preceding moment.
The prediction of the state information of the target object can be realized through a state transition equation, which can be expressed by the following formula (1):

$$\hat{x}_k = f(x_{k-1}, u_k) \tag{1}$$

where $t_{k-1}$ denotes the moment preceding the current moment, $t_k$ denotes the current moment, $x_{k-1}$ denotes the previous state information, $u_k$ denotes the IMU data corresponding to the moment preceding the current moment, and $\hat{x}_k$ denotes the initial state information at the current moment.
In another embodiment of the present invention, the initial state information $\hat{x}_k$ includes initial velocity information $\hat{v}_k$ and initial pose information, where the initial pose information includes initial attitude information $\hat{q}_k$ and initial position information $\hat{p}_k$.
S102 may include the following steps 011-014:
011: determining the angular velocity information and acceleration information of the target object at the preceding moment by using the IMU data of the moment preceding the current moment.
012: constructing a first state transition equation by using the angular velocity information of the preceding moment and the previous attitude information in the previous state information, and determining the initial attitude information of the target object at the current moment by using the first state transition equation.
013: constructing a second state transition equation by using the previous attitude information, the acceleration information of the preceding moment and the previous velocity information in the previous state information, and determining the initial velocity information of the target object at the current moment by using the second state transition equation.
014: constructing a third state transition equation by using the initial velocity information, the previous velocity information and the previous position information in the previous state information, and determining the initial position information of the target object at the current moment by using the third state transition equation.
In this implementation, the electronic device removes the biases and the gravitational acceleration from the IMU data corresponding to the moment preceding the current moment to obtain the angular velocity information and acceleration information of the target object at the preceding moment, denoted $\omega_{k-1}$ and $a_{k-1}$ respectively.
Then, the angular velocity information of the preceding moment and the previous attitude information in the previous state information are used to construct the first state transition equation, with which the initial attitude information of the target object at the current moment is determined. Specifically, the first state transition equation can be expressed by the following formula (2):

$$\hat{q}_k = q_{k-1} \otimes q\{\omega_{k-1}\,\Delta t\} \tag{2}$$

where $q_{k-1}$ denotes the previous attitude information in the previous state information, $\Delta t = t_k - t_{k-1}$, $q\{\cdot\}$ denotes the quaternion corresponding to a rotation vector, and $\otimes$ denotes quaternion multiplication.
Then, the electronic device uses the previous attitude information, the acceleration information of the preceding moment and the previous velocity information in the previous state information to construct the second state transition equation, with which the initial velocity information of the target object at the current moment is determined. Specifically, the second state transition equation can be expressed by the following formula (3):

$$\hat{v}_k = v_{k-1} + R(q_{k-1})\,a_{k-1}\,\Delta t \tag{3}$$

where $v_{k-1}$ denotes the previous velocity information in the previous state information and $R(q_{k-1})$ denotes the rotation matrix corresponding to $q_{k-1}$.
Then, the electronic device uses the initial velocity information, the previous velocity information and the previous position information in the previous state information to construct the third state transition equation, with which the initial position information of the target object at the current moment is determined. Specifically, the third state transition equation can be expressed by the following formula (4):

$$\hat{p}_k = p_{k-1} + \tfrac{1}{2}\,(\hat{v}_k + v_{k-1})\,\Delta t \tag{4}$$

where $p_{k-1}$ denotes the previous position information in the previous state information.
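For illustration, a minimal numpy sketch of the propagation of formulas (2)-(4) is given below. It assumes Hamilton-convention quaternions stored as [w, x, y, z] and angular velocity and acceleration already compensated for bias and gravity as described above; it is an illustrative sketch under these assumptions, not the patented implementation itself.

import numpy as np

def quat_mul(a, b):
    # Hamilton product a (x) b, quaternions stored as [w, x, y, z]
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_from_rotvec(phi):
    # quaternion q{phi} of a rotation vector phi = omega * dt
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = phi / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_to_rot(q):
    # rotation matrix R(q) of a unit quaternion [w, x, y, z]
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def propagate(q_prev, v_prev, p_prev, omega, acc, dt):
    # steps 011-014: omega/acc are the preceding moment's angular velocity
    # and (bias- and gravity-compensated) acceleration from the IMU
    q_init = quat_mul(q_prev, quat_from_rotvec(omega * dt))  # formula (2)
    v_init = v_prev + quat_to_rot(q_prev) @ acc * dt         # formula (3)
    p_init = p_prev + 0.5 * (v_init + v_prev) * dt           # formula (4)
    return q_init, v_init, p_init

The midpoint rule in formula (4) averages the new and old velocities, which is slightly more accurate over one IMU interval than a simple forward Euler step.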
S103: determining the matching point pairs between the current images as the first matching point pairs corresponding to the current images, by using the feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images.
The relative positional relationships between the image acquisition devices disposed on the target object are fixed; accordingly, the electronic device may pre-store the relative pose relationships between the image acquisition devices.
After obtaining the current images, the electronic device may first perform feature point detection on each current image by using a preset feature point detection algorithm, obtaining the feature points in each current image. In one case, the preset feature point detection algorithm may be the FAST feature point detection algorithm or any other detection algorithm capable of detecting feature points in an image.
Then, according to the relative pose relationships between the image acquisition devices corresponding to the current images, the electronic device determines a region of interest in each current image, where the region of interest in a current image is the region that overlaps with other current images. For each current image, a preset feature descriptor extraction algorithm is used to extract feature descriptors for the feature points in the region of interest of that current image, obtaining the feature descriptor corresponding to each feature point in the region of interest. Based on the fast approximate nearest neighbor library (FLANN, Fast Library for Approximate Nearest Neighbors) and the feature descriptors corresponding to the feature points in the regions of interest of the current images, the feature points in the regions of interest of the current images are matched, obtaining the matched feature point pairs between the current images, i.e., the matching point pairs, as the first matching point pairs corresponding to the current images. The first matching point pairs corresponding to the current images, i.e., the k-th frame images, can be denoted here as:

$$\mathcal{M}^{k}_{c_i, c_j}$$

where $c_i$ and $c_j$ denote the $i$-th and $j$-th image acquisition devices respectively, the values of $c_i$ and $c_j$ are integers in $[1, n]$, and $n$ denotes the number of image acquisition devices disposed on the target object.
The preset feature descriptor extraction algorithm may be the BRIEF feature descriptor extraction algorithm.
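As an illustration of this inter-camera matching step, the following sketch uses OpenCV (with the contrib modules, which provide the BRIEF extractor). The ROI masks derived from the stored relative poses are assumed to be supplied by the caller, and the 0.7 ratio-test threshold is an illustrative assumption rather than a value from the embodiment.

import cv2

def match_between_cameras(img_i, img_j, roi_mask_i=None, roi_mask_j=None):
    # first matching point pairs between two current images:
    # FAST keypoints, BRIEF descriptors, FLANN (LSH) matching
    fast = cv2.FastFeatureDetector_create()
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()

    kp_i = fast.detect(img_i, roi_mask_i)   # restrict to the overlapping ROI
    kp_j = fast.detect(img_j, roi_mask_j)
    kp_i, des_i = brief.compute(img_i, kp_i)
    kp_j, des_j = brief.compute(img_j, kp_j)

    # FLANN with an LSH index, suited to binary descriptors such as BRIEF
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))
    pairs = []
    for m in flann.knnMatch(des_i, des_j, k=2):
        if len(m) == 2 and m[0].distance < 0.7 * m[1].distance:  # ratio test
            pairs.append((kp_i[m[0].queryIdx].pt, kp_j[m[0].trainIdx].pt))
    return pairs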
S104: determining the matching point pairs between each current image and its previous image as the second matching point pairs corresponding to the current image, by using the feature points detected in each current image and the feature points detected in its previous image.
In this step, for each current image, the electronic device uses the feature points detected in that current image and the feature points detected in its previous image, and tracks the feature points on the current image and its previous image with the sparse optical flow KLT algorithm, obtaining the matched feature point pairs between the current image and its previous image, i.e., the matching point pairs, as the second matching point pairs corresponding to that current image. The second matching point pairs corresponding to a current image can be denoted here as:

$$\mathcal{M}^{k}_{c_i}$$

where $c_i$ denotes the $i$-th image acquisition device, the value of $c_i$ is an integer in $[1, n]$, and $n$ denotes the number of image acquisition devices disposed on the target object.
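A minimal sketch of this temporal tracking with OpenCV's pyramidal Lucas-Kanade (KLT) tracker follows; the window size and pyramid depth are illustrative defaults, not values from the embodiment.

import numpy as np
import cv2

def track_klt(prev_img, curr_img, prev_pts):
    # second matching point pairs: track the previous image's feature
    # points into the current image with pyramidal Lucas-Kanade
    prev_pts = np.float32(prev_pts).reshape(-1, 1, 2)
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_img, curr_img, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1) == 1          # keep successfully tracked points
    return [(tuple(p0), tuple(p1))
            for p0, p1 in zip(prev_pts[ok].reshape(-1, 2),
                              curr_pts[ok].reshape(-1, 2))]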
It should be noted that the embodiments of the present invention do not limit the execution order of S103 and S104: the electronic device may execute S103 before S104, execute S104 before S103, or execute S103 and S104 in parallel; all are possible.
S105: determining the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images of the previous N moments before the current moment.
The images of the previous N moments before the current moment refer to the images collected by the multiple image acquisition devices of the target object at each of the N moments preceding the current moment.
After the electronic device obtains the first and second matching point pairs corresponding to the current images, it refers to the multi-state constraint Kalman filter (MSCKF). In order to construct multiple constraints, the state of the multi-state constraint Kalman filter is extended in the process of estimating the state information of a target object carrying multiple image acquisition devices. Specifically, the corresponding sliding window is first extended: the length of the sliding window is set to N+1, and the sliding window contains the pose information in the state information of the target object, expressed as follows:

$$x_k = [\pi_k, \pi_{k-1}, \ldots, \pi_e, \ldots, \pi_{k-N}]$$

where $e$ is an integer in $[k-N, k]$, $x_k$ denotes the state information in the sliding window corresponding to the current images, and $\pi_k$ denotes the pose information in the state information of the target object corresponding to the current images. In one case, the state information of the IMU can be used directly to represent the state information of the target object, and accordingly the state information of the target object contained in the sliding window is the pose information of the IMU disposed on the target object; in another case, the state information of the target object can be determined by using the state information of the IMU and a pre-stored pose transformation between the IMU and the target object; both are possible. $\pi_e$ denotes the pose information, in the world coordinate system, of the target object corresponding to the image of the $e$-th moment among the current images and the images of the previous N moments.
Accordingly, the first and second matching point pairs corresponding to each of the images of the previous N moments are obtained; the three-dimensional position information corresponding to each feature point to be used is determined based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images of the previous N moments, the device pose information of the image acquisition devices corresponding to the current images, and the device pose information of the image acquisition devices corresponding to each of the images of the previous N moments. The feature points to be used are those feature points, among the feature points detected in the current images and the images of the previous N moments, whose three-dimensional position information can be determined.
The determination procedure of the first matching point pairs corresponding to each of the images of the previous N moments is the same as that of the first matching point pairs corresponding to the current images, and the determination procedure of the second matching point pairs corresponding to each of the images of the previous N moments is the same as that of the second matching point pairs corresponding to the current images; they are not repeated here. N is a positive integer.
The device pose information of the image acquisition device corresponding to an image may refer to the device pose information of that image acquisition device at the time it collected that image.
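As a small illustration of the window extension described above, the following sketch maintains an (N+1)-length pose window; the class and field names are illustrative assumptions, not structures from the embodiment.

from collections import deque

class PoseWindow:
    # sliding window x_k = [pi_k, pi_{k-1}, ..., pi_{k-N}] of object poses,
    # one pose per image moment, length N + 1
    def __init__(self, n):
        self.window = deque(maxlen=n + 1)  # the oldest pose drops out

    def push(self, q_world, p_world):
        self.window.appendleft((q_world, p_world))  # pi_k goes to the front

    def poses(self):
        return list(self.window)  # [pi_k, ..., pi_{k-N}]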
In one implementation of the present invention, S105 may include the following step:
determining, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images of the previous N moments, the device pose information of the image acquisition device corresponding to each current image, the device pose information of the image acquisition device corresponding to each image of the previous N moments, and the pose information of the target object corresponding to the current images and each image of the previous N moments.
In one implementation, the relative positional relationships between the target object and the image acquisition devices disposed on it are fixed, and the electronic device may pre-store these relative positional relationships. Once the pose information of the target object is determined, the pose information of each image acquisition device is determined.
According to the triangulation algorithm, the electronic device triangulates, based on the image position information, in their images, of the feature points in the first and second matching point pairs corresponding to the current images, the image position information of the feature points in the first and second matching point pairs corresponding to each of the images of the previous N moments, the device pose information of the image acquisition devices corresponding to the current images and to each of the images of the previous N moments, and the pose information of the target object corresponding to the current images and each of the images of the previous N moments, that is, the matching results $\{\mathcal{M}^{e}_{c_i, c_j}, \mathcal{M}^{e}_{c_i}\}_{e=k-N}^{k}$ of all feature points of the current images and the images of the previous N moments corresponding to the poses in the sliding window, and determines the three-dimensional position information corresponding to each feature point to be used. For the specific calculation process of the triangulation algorithm, reference may be made to triangulation in the related art, which is not repeated here.
In one case, the three-dimensional position information corresponding to each feature point to be used, the image position information of each feature point to be used, and the device pose information and intrinsic parameter matrix of the image acquisition device corresponding to the image where each feature point to be used is located can be used to construct a minimal reprojection error, which is iteratively optimized with the Levenberg-Marquardt method to obtain more accurate three-dimensional position information for each feature point to be used. The set of three-dimensional position information corresponding to the feature points to be used can be denoted as $\{P^{W}_{q}\}_{q=1}^{S}$, where $P^{W}_{q}$ denotes the three-dimensional position information, in world coordinates, of the spatial point corresponding to the $q$-th feature point to be used.
The image position information of each feature point to be used is its image position information in the image where it is located. The images where the feature points to be used are located include the current images and the images of the previous N moments.
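For illustration, a linear (DLT) triangulation sketch of one feature from all of its observations across the window is given below, a common way to implement the triangulation step; it assumes the caller assembles, for each observation, the 3x4 projection matrix from the object pose, camera extrinsics and intrinsics described above. The result could then be refined iteratively, e.g. with scipy.optimize.least_squares, along the lines of the Levenberg-Marquardt refinement mentioned above.

import numpy as np

def triangulate_dlt(observations):
    # observations: list of (u, v, P) with P the 3x4 projection matrix
    # K [R | t] mapping world coordinates into that image
    A = []
    for u, v, P in observations:
        A.append(u * P[2] - P[0])   # each observation contributes
        A.append(v * P[2] - P[1])   # two linear constraints on X
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]                      # null vector = homogeneous solution
    return X[:3] / X[3]             # world-frame 3D position P_q^W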
S106: determining the current state information of the target object at the current moment, based on the three-dimensional position information, the image position information of each feature point to be used in the current images, the initial state information and the current sensor data.
In this step, considering that the frequency at which the image acquisition devices collect images is lower than the frequency at which the other sensors collect the other sensor data, the electronic device may first, with reference to error-state Kalman filter theory, update the state information of the target object based on the current sensor data corresponding to the current moment, which the electronic device obtains first, and the initial state information, obtaining the intermediate state information of the target object. Then, with reference to error-state Kalman filter theory, the three-dimensional position information, the image position information of each feature point to be used in the current images and the intermediate state information are used to construct a corresponding measurement equation, based on which the current state information of the target object at the current moment is determined. The current state information may be state information in the world coordinate system.
By applying the embodiments of the present invention, the state can be extended: the tracking results of the feature points of each of the multiple image acquisition devices, i.e., the second matching point pairs corresponding to the current images and the images of the previous N moments, and the feature point associations between the multiple image acquisition devices, i.e., the first matching point pairs corresponding to the current images and the images of the previous N moments, are used to construct the three-dimensional position information corresponding to each feature point to be used, so as to build multi-state constraints; the current sensor data of the other sensors are then fused to determine the current state information of the target object at the current moment, yielding a state estimation result of higher accuracy and robustness, and enabling state information estimation for an object carrying a multi-sensor system with an arbitrary number of image acquisition devices.
In another embodiment of the present invention, S106 may include the following steps 021-024:
021: determining the intermediate state information of the target object at the current moment based on the current sensor data and the initial state information.
022: determining the projection position information of the projection point, in its image, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, the intermediate pose information in the intermediate state information, the object pose information in the state information of the target object corresponding to each image of the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device.
023: constructing a reprojection error equation based on the projection position information corresponding to each feature point to be used and the image position information of each feature point to be used in the current image.
024: determining the current state information of the target object at the current moment based on the reprojection error equation.
In one case, the current sensor data may include, but are not limited to: the current IMU data collected by the IMU from the moment preceding the current moment up to the current moment, the current GNSS data collected by the GNSS receiver over the same interval, and the current wheel speed data collected by the wheel speed sensor over the same interval. That is, the electronic device iteratively updates the initial state information based on the obtained current IMU data, current GNSS data and current wheel speed data, obtaining the intermediate state information of the target object.
With reference to error-state Kalman filter theory, the measurement equation of the state update process that updates the target object's state with the above current sensor data is constructed; this measurement equation can be uniformly expressed by the following formula (5):

$$z_k = h(x_k) + R_k \tag{5}$$

where $z_k$ denotes the measurement, i.e., the current IMU data, the current GNSS data or the current wheel speed data, and $R_k$ denotes the measurement noise. $h(\cdot)$ denotes the function mapping the system state to the measurement, where the system state refers to the state information of the target object output by the system; when the current sensor data are used to update the state of the target object, the initial state information is the initial value of the system state for that state update.
Specifically, for the GNSS measurement, $z_k$ denotes the current GNSS data, and accordingly the above formula (5) can be written as the following formula (5.1):

$$z^{\mathrm{GNSS}}_k = h(x_k) + R_{\mathrm{GNSS}} \tag{5.1}$$

where $z^{\mathrm{GNSS}}_k$ denotes one frame of GNSS data in the current GNSS data, $x_k$ denotes the system state, i.e., the state information of the target object, before this frame of GNSS data is substituted into the state update system that updates the state information of the target object, and $R_{\mathrm{GNSS}}$ denotes the measurement noise of the system when this frame of GNSS data is substituted into the state update system.
For the wheel speed measurement, $z_k$ denotes the current wheel speed data, and accordingly the above formula (5) can be written as the following formula (5.2):

$$z^{\mathrm{odo}}_k = h(x_k) + R_{\mathrm{odo}} \tag{5.2}$$

where $z^{\mathrm{odo}}_k$ denotes one frame of wheel speed data in the current wheel speed data, $x_k$ denotes the system state, i.e., the state information of the target object, before this frame of wheel speed data is substituted into the state update system, and $R_{\mathrm{odo}}$ denotes the measurement noise of the system when this frame of wheel speed data is substituted into the state update system.
For the IMU measurement, if the target object is stationary, i.e., $v_k \approx 0$, and the IMU data measured by the IMU over a past preset duration, e.g., 1 second, indicate that the variations of the angular velocity and acceleration of the target object are smaller than preset thresholds, then the above formula (5) can be written as the following formula (5.3):

$$0 = h(x_k) + R_{\mathrm{static}} \tag{5.3}$$

where 0 denotes one frame of IMU data in the current IMU data, treated as a zero pseudo-measurement, $x_k$ denotes the system state, i.e., the state information of the target object, before this frame of IMU data is substituted into the state update system, and $R_{\mathrm{static}}$ denotes the measurement noise of the system when this frame of IMU data is substituted into the state update system.
In one case, if the target object is not stationary, the current IMU data may not be used to update the state information of the target object.
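A small sketch of the stationarity test that gates the zero-velocity update of formula (5.3) follows: the estimated velocity is close to zero and the IMU readings over the past window barely vary. The buffer length and all threshold values are illustrative assumptions.

import numpy as np

def is_static(gyro_buf, accel_buf, v_est, thr_w=0.02, thr_a=0.1, thr_v=0.05):
    # gyro_buf/accel_buf: IMU samples over the past preset duration (e.g. 1 s)
    w_var = np.var(np.asarray(gyro_buf), axis=0).max()
    a_var = np.var(np.asarray(accel_buf), axis=0).max()
    return (np.linalg.norm(v_est) < thr_v   # velocity approximately zero
            and w_var < thr_w               # angular rate barely varies
            and a_var < thr_a)              # acceleration barely varies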
This implementation does not limit the update order of the different kinds of current sensor data when they are used to update the state information of the target object: whichever kind of current sensor data the state update system obtains, it uses that kind of current sensor data to update the state information of the target object.
Subsequently, the filter update quantities of the state update system can be computed from each of the above measurement equations. The update equations corresponding to the filter update can be expressed by the following formula (6):

$$\begin{aligned} K_k &= P_{k|k-1}\, H_k^{\mathsf T} \left( H_k\, P_{k|k-1}\, H_k^{\mathsf T} + R_k \right)^{-1} \\ \hat{x}_{k|k} &= \hat{x}_{k|k-1} \boxplus K_k \left( z_k - h(\hat{x}_{k|k-1}) \right) \\ P_{k|k} &= \left( I - K_k H_k \right) P_{k|k-1} \end{aligned} \tag{6}$$

where $P_{k|k-1}$ denotes the predicted state covariance, whose initial value is the initial state covariance; the predicted state covariance can be expressed by the following formula (7):

$$P_{k|k-1} = F_k\, P_{k-1|k-1}\, F_k^{\mathsf T} + Q_k \tag{7}$$

where $F_k = \partial f / \partial x$, i.e., the derivative of the state transition equation $f(\cdot)$ with respect to the state quantity $x$. $Q_k$ is the state transition error, generally the noise parameter of the IMU, which is a constant.
$K_k$ denotes the Kalman gain, representing the magnitude by which the current system state needs to be adjusted; $\hat{x}_{k|k}$ denotes the current state information of the target object at the current moment, and $P_{k|k}$ denotes the current state covariance at the current moment; $H_k = \partial h / \partial x$, i.e., the derivative of $h(\cdot)$ with respect to the state quantity $x$.
$\boxplus$ denotes generalized addition, covering both vector addition and the addition of rotation vectors.
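A numpy sketch of the update of formula (6) and the covariance prediction of formula (7) is given below. The generalized addition is passed in by the caller as `boxplus`, since its exact form depends on how the attitude is parameterized; everything else follows the standard Kalman update, and the sketch is illustrative rather than the patented implementation.

import numpy as np

def predict_covariance(P, F, Q):
    # formula (7): P_{k|k-1} = F P_{k-1|k-1} F^T + Q_k
    return F @ P @ F.T + Q

def kalman_update(x, P, z, h, H, R, boxplus):
    # formula (6); h evaluates the measurement model, H is its Jacobian at x
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain K_k
    x_new = boxplus(x, K @ (z - h(x)))    # generalized addition on the state
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new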
After the electronic device determines the intermediate state information of the target object at the current moment by using the current sensor data and the initial state information, it constructs the reprojection error equation by using the pose information in the intermediate state information of the target object, the object pose information in the state information of the target object corresponding to each image of the previous N moments, the three-dimensional position information corresponding to each feature point to be used, the image position information of each feature point to be used, and the device pose information and intrinsic parameter matrix of the image acquisition device corresponding to the image where each feature point to be used is located; then, based on the reprojection error equation, the current state information of the target object at the current moment is determined.
In one case, the process of constructing the reprojection error equation may be as follows:
For the spatial point corresponding to each feature point to be used, the three-dimensional position information of that spatial point, the object pose information in the state information of the target object corresponding to the image where the feature point is located, and the device pose information of the image acquisition device corresponding to that image are used to transform the spatial point from the world coordinate system to the coordinate system of the target object, and then to the device coordinate system of the image acquisition device corresponding to the image where the feature point is located; then, combined with the intrinsic parameter matrix of that image acquisition device, the spatial point is projected from the device coordinate system of that image acquisition device to the image coordinate system of the image where the feature point is located, obtaining the projection position information of the projection point of the spatial point in that image. Then, for each feature point to be used, the image position information of the feature point and the projection position information of the projection point of its corresponding spatial point in its image are used to construct the reprojection error equation.
Specifically, in theory, the projection position information of the projection point, in the image where a feature point to be used is located, of the spatial point corresponding to that feature point coincides with the image position of the feature point in that image. Accordingly, the reprojection error equation can be expressed by the following formula (8):

$$z^{c_i, e}_{q} = \frac{1}{Z^{c_i, e}_{q}}\; K_{c_i} \left( R^{c_i} \left( R_e \left( P^{W}_{q} - p_e \right) \right) + t^{c_i} \right) \tag{8}$$

where $z^{c_i, e}_{q}$ denotes the image position information of the $q$-th feature point to be used on the image where it is located, i.e., the image collected by the $c_i$-th image acquisition device at the $e$-th moment among the current images and the images of the previous N moments, i.e., the visual measurement; $P^{W}_{q}$ denotes the three-dimensional position information corresponding to the $q$-th feature point to be used, where $q$ takes integer values from 1 to $S$ and $S$ denotes the total number of feature points to be used; $(R^{c_i}, t^{c_i})$ denotes the device pose information, i.e., the extrinsic parameters, of the $c_i$-th image acquisition device for the image of the $e$-th moment among the current images and the images of the previous N moments; $(R_e, p_e)$ denotes the pose information $\pi_e$ of the target object corresponding to the image of the $e$-th moment among the current images and the images of the previous N moments; $K_{c_i}$ denotes the intrinsic parameter matrix of the $c_i$-th image acquisition device for the image of the $e$-th moment; and $Z^{c_i, e}_{q}$ denotes the vertical-axis coordinate value in the three-dimensional position information corresponding to the $q$-th feature point to be used, i.e., the depth by which the projection is normalized.
Then, based on the reprojection error equation, the values of the state quantities therein, i.e., the pose information in the sliding window, are adjusted so that the above reprojection error equation holds, thereby determining the current state information of the target object at the current moment.
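For illustration, a sketch of the projection chain of formula (8), from the world frame to the target object (body) frame, then to the camera frame, then to the pixel plane. The transform conventions are assumptions: (R_wb, p_wb) maps body coordinates to world coordinates and (R_bc, t_bc) maps camera coordinates to body coordinates; other conventions only change where the inversions sit.

import numpy as np

def project(P_w, R_wb, p_wb, R_bc, t_bc, K):
    # world -> object (body) frame -> camera frame -> pixel, per formula (8)
    P_b = R_wb.T @ (P_w - p_wb)   # into the target object's frame
    P_c = R_bc.T @ (P_b - t_bc)   # into the camera's frame via the extrinsics
    uvw = K @ P_c                 # apply the intrinsic matrix
    return uvw[:2] / uvw[2]       # normalize by the depth coordinate

def reprojection_residual(z_obs, P_w, R_wb, p_wb, R_bc, t_bc, K):
    # difference between the measured image position and the projection;
    # stacking these residuals over all features gives the visual measurement
    return z_obs - project(P_w, R_wb, p_wb, R_bc, t_bc, K)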
In another embodiment of the present invention, step 024 may include the following steps 0241-0242:
0241: constructing a target measurement equation based on the reprojection error equation.
0242: determining the current state information of the target object at the current moment by using the target measurement equation and the filter update equation.
With reference to error-state Kalman filter theory, in order to update and obtain the current state information of the target object at the current moment, a measurement equation $z_k = h(x_k) + R_k$ of the same form as that used to process the sensor data collected by the other sensors needs to be constructed. In this implementation, the target measurement equation is constructed based on the above reprojection error equation.
Here, $z^{c_i, e}_{q}$ on the left side of the equals sign in the above formula (8), i.e., the visual measurement, corresponds to $z_k$, and the right side of formula (8) corresponds to $h(x_k)$. Accordingly, the visual measurement equation can be expressed as:

$$z^{c_i, e}_{q} = h\big(x_k, P^{W}_{q}\big) + R_k$$

From the above formula it can be observed that the quantities in the visual measurement equation include the feature positions $P^{W}_{q}$, while the feature point positions are not taken as state quantities in the target measurement equation. According to the theory of the multi-state constraint Kalman filter (MSCKF), through a first-order approximation and by left-multiplying both sides of the equation by the left null space of the Jacobian $H_f$ with respect to the feature positions, the residual influence of the feature position errors $\tilde{P}^{W}_{q}$ is eliminated, finally yielding the visual target measurement equation

$$r_o = H_o\,\tilde{x}_k + n_o$$

Through this target measurement equation and the filter update equation, i.e., the update equations of formula (6) corresponding to the filter update quantities, the pose information in the sliding window corresponding to the current images and the system state $\hat{x}_{k|k}$ corresponding to the current moment, i.e., the current state information of the target object at the current moment, are solved.
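A small sketch of the null-space step described above: the linearized visual measurement is left-multiplied by the left null space of the feature Jacobian, which removes the feature-position error from the target measurement equation. The notation follows standard MSCKF treatments rather than the patent's own image-embedded formulas.

import numpy as np
from scipy.linalg import null_space

def marginalize_feature(r, H_x, H_f):
    # r ~= H_x dx + H_f dP + n; the columns of A span the left null space
    # of H_f, so A^T H_f = 0 and the feature error drops out
    A = null_space(H_f.T)
    return A.T @ r, A.T @ H_x   # r_o = H_o dx + n_o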
Corresponding to the above method embodiments, an embodiment of the present invention provides a state information estimation apparatus. As shown in Fig. 2, the apparatus may include:
a first obtaining module 210, configured to obtain current images collected at the current moment by multiple image acquisition devices disposed on a target object and current sensor data collected by other sensors, where the other sensors include an IMU;
a first determination module 220, configured to determine initial state information of the target object at the current moment by using IMU data corresponding to the moment preceding the current moment and previous state information of the target object at the preceding moment;
a second determination module 230, configured to determine matching point pairs between the current images as first matching point pairs corresponding to the current images, by using feature points detected in each current image and the relative pose relationships between the image acquisition devices corresponding to the current images;
a third determination module 240, configured to determine matching point pairs between each current image and its previous image as second matching point pairs corresponding to the current image, by using the feature points detected in each current image and the feature points detected in its previous image;
a fourth determination module 250, configured to determine three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images of the previous N moments before the current moment;
a fifth determination module 260, configured to determine current state information of the target object at the current moment, based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data.
By applying the embodiments of the present invention, the state can be extended: the tracking results of the feature points of each of the multiple image acquisition devices, i.e., the second matching point pairs corresponding to the current images and the images of the previous N moments, and the feature point associations between the multiple image acquisition devices, i.e., the first matching point pairs corresponding to the current images and the images of the previous N moments, are used to construct the three-dimensional position information corresponding to each feature point to be used, so as to build multi-state constraints; the current sensor data of the other sensors are then fused to determine the current state information of the target object at the current moment, yielding a state estimation result of higher accuracy and robustness, and enabling state information estimation for an object carrying a multi-sensor system with an arbitrary number of image acquisition devices.
In another embodiment of the present invention, the initial state information includes: initial velocity information and initial pose information, where the initial pose information includes: initial attitude information and initial position information;
the first determination module 220 is specifically configured to determine angular velocity information and acceleration information of the target object at the preceding moment by using the IMU data corresponding to the moment preceding the current moment;
construct a first state transition equation by using the angular velocity information of the preceding moment and previous attitude information in the previous state information, and determine initial attitude information of the target object at the current moment by using the first state transition equation;
construct a second state transition equation by using the previous attitude information, the acceleration information of the preceding moment and previous velocity information in the previous state information, and determine initial velocity information of the target object at the current moment by using the second state transition equation;
construct a third state transition equation by using the initial velocity information, the previous velocity information and previous position information in the previous state information, and determine initial position information of the target object at the current moment by using the third state transition equation.
In another embodiment of the present invention, the fourth determination module 250 is specifically configured to determine, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images of the previous N moments, the device pose information of the image acquisition device corresponding to each current image, the device pose information of the image acquisition device corresponding to each image of the previous N moments, and the pose information of the target object corresponding to the current images and each image of the previous N moments.
In another embodiment of the present invention, the fifth determination module 260 includes:
a first determination unit (not shown in the figure), configured to determine intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;
a second determination unit (not shown in the figure), configured to determine projection position information of the projection point, in its image, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, intermediate pose information in the intermediate state information, object pose information in the state information of the target object corresponding to each image of the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;
a construction unit (not shown in the figure), configured to construct a reprojection error equation based on the projection position information corresponding to each feature point to be used and the image position information of each feature point to be used in the current image;
a third determination unit (not shown in the figure), configured to determine the current state information of the target object at the current moment based on the reprojection error equation.
In another embodiment of the present invention, the third determination unit is specifically configured to construct a target measurement equation based on the reprojection error equation;
and determine the current state information of the target object at the current moment by using the target measurement equation and the filter update equation.
The above apparatus embodiment corresponds to the method embodiment and has the same technical effects as the method embodiment; for specific descriptions, refer to the method embodiment. The apparatus embodiment is obtained on the basis of the method embodiment, and specific descriptions can be found in the method embodiment part, which are not repeated here. A person of ordinary skill in the art can understand that the drawings are only schematic diagrams of one embodiment, and that the modules or processes in the drawings are not necessarily required for implementing the present invention.
A person of ordinary skill in the art can understand that the modules in the apparatus of an embodiment may be distributed in the apparatus of the embodiment as described, or may be correspondingly changed to be located in one or more apparatuses different from this embodiment. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications can still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements can be made to some of the technical features therein; these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A state information estimation method, characterized in that the method comprises:
    obtaining current images collected at the current moment by multiple image acquisition devices disposed on a target object and current sensor data collected by other sensors, wherein the other sensors comprise an IMU;
    determining initial state information of the target object at the current moment by using IMU data corresponding to the moment preceding the current moment and previous state information of the target object at the preceding moment;
    determining matching point pairs between the current images as first matching point pairs corresponding to the current images, by using feature points detected in each current image and relative pose relationships between the image acquisition devices corresponding to the current images;
    determining matching point pairs between each current image and its previous image as second matching point pairs corresponding to the current image, by using the feature points detected in each current image and feature points detected in its previous image;
    determining three-dimensional position information corresponding to each feature point to be used, based on the first matching point pairs and second matching point pairs corresponding to the current images and the first matching point pairs and second matching point pairs corresponding to images of the previous N moments before the current moment;
    determining current state information of the target object at the current moment based on the three-dimensional position information, image position information of each feature point to be used, the initial state information and the current sensor data.
  2. The method according to claim 1, characterized in that the initial state information comprises: initial velocity information and initial pose information, wherein the initial pose information comprises: initial attitude information and initial position information;
    the step of determining the initial state information of the target object at the current moment by using the IMU data corresponding to the moment preceding the current moment and the previous state information of the target object at the preceding moment comprises:
    determining angular velocity information and acceleration information of the target object at the preceding moment by using the IMU data corresponding to the moment preceding the current moment;
    constructing a first state transition equation by using the angular velocity information of the preceding moment and previous attitude information in the previous state information, and determining the initial attitude information of the target object at the current moment by using the first state transition equation;
    constructing a second state transition equation by using the previous attitude information, the acceleration information of the preceding moment and previous velocity information in the previous state information, and determining the initial velocity information of the target object at the current moment by using the second state transition equation;
    constructing a third state transition equation by using the initial velocity information, the previous velocity information and previous position information in the previous state information, and determining the initial position information of the target object at the current moment by using the third state transition equation.
  3. The method according to claim 1, characterized in that the step of determining the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to the images of the previous N moments before the current moment, comprises:
    determining, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images of the previous N moments, device pose information of the image acquisition device corresponding to each current image, device pose information of the image acquisition device corresponding to each image of the previous N moments, and pose information of the target object corresponding to the current images and each image of the previous N moments.
  4. The method according to any one of claims 1-3, characterized in that the step of determining the current state information of the target object at the current moment, based on the three-dimensional position information, the image position information of each feature point to be used, the initial state information and the current sensor data, comprises:
    determining intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;
    determining projection position information of the projection point, in its image, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, intermediate pose information in the intermediate state information, object pose information in the state information of the target object corresponding to each image of the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;
    constructing a reprojection error equation based on the projection position information corresponding to each feature point to be used and the image position information of each feature point to be used in the current image;
    determining the current state information of the target object at the current moment based on the reprojection error equation.
  5. The method according to claim 4, characterized in that the step of determining the current state information of the target object at the current moment based on the reprojection error equation comprises:
    constructing a target measurement equation based on the reprojection error equation;
    determining the current state information of the target object at the current moment by using the target measurement equation and a filter update equation.
  6. A state information estimation apparatus, characterized in that the apparatus comprises:
    a first obtaining module, configured to obtain current images collected at the current moment by multiple image acquisition devices disposed on a target object and current sensor data collected by other sensors, wherein the other sensors comprise an IMU;
    a first determination module, configured to determine initial state information of the target object at the current moment by using IMU data corresponding to the moment preceding the current moment and previous state information of the target object at the preceding moment;
    a second determination module, configured to determine matching point pairs between the current images as first matching point pairs corresponding to the current images, by using feature points detected in each current image and relative pose relationships between the image acquisition devices corresponding to the current images;
    a third determination module, configured to determine matching point pairs between each current image and its previous image as second matching point pairs corresponding to the current image, by using the feature points detected in each current image and feature points detected in its previous image;
    a fourth determination module, configured to determine three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images and the first and second matching point pairs corresponding to images of the previous N moments before the current moment;
    a fifth determination module, configured to determine current state information of the target object at the current moment, based on the three-dimensional position information, image position information of each feature point to be used, the initial state information and the current sensor data.
  7. The apparatus according to claim 6, characterized in that the initial state information comprises: initial velocity information and initial pose information, wherein the initial pose information comprises: initial attitude information and initial position information;
    the first determination module is specifically configured to determine angular velocity information and acceleration information of the target object at the preceding moment by using the IMU data corresponding to the moment preceding the current moment;
    construct a first state transition equation by using the angular velocity information of the preceding moment and previous attitude information in the previous state information, and determine the initial attitude information of the target object at the current moment by using the first state transition equation;
    construct a second state transition equation by using the previous attitude information, the acceleration information of the preceding moment and previous velocity information in the previous state information, and determine the initial velocity information of the target object at the current moment by using the second state transition equation;
    construct a third state transition equation by using the initial velocity information, the previous velocity information and previous position information in the previous state information, and determine the initial position information of the target object at the current moment by using the third state transition equation.
  8. The apparatus according to claim 6, characterized in that the fourth determination module is specifically configured to determine, according to a triangulation algorithm, the three-dimensional position information corresponding to each feature point to be used, based on the first and second matching point pairs corresponding to the current images, the first and second matching point pairs corresponding to the images of the previous N moments, the device pose information of the image acquisition device corresponding to each current image, the device pose information of the image acquisition device corresponding to each image of the previous N moments, and the pose information of the target object corresponding to the current images and each image of the previous N moments.
  9. The apparatus according to any one of claims 6-8, characterized in that the fifth determination module comprises:
    a first determination unit, configured to determine intermediate state information of the target object at the current moment based on the current sensor data and the initial state information;
    a second determination unit, configured to determine projection position information of the projection point, in its image, of the spatial point corresponding to each feature point to be used, by using the three-dimensional position information, intermediate pose information in the intermediate state information, object pose information in the state information of the target object corresponding to each image of the previous N moments, and the device pose information and intrinsic parameter matrix of each image acquisition device;
    a construction unit, configured to construct a reprojection error equation based on the projection position information corresponding to each feature point to be used and the image position information of each feature point to be used in the current image;
    a third determination unit, configured to determine the current state information of the target object at the current moment based on the reprojection error equation.
  10. The apparatus according to claim 9, characterized in that the third determination unit is specifically configured to construct a target measurement equation based on the reprojection error equation;
    and determine the current state information of the target object at the current moment by using the target measurement equation and the filter update equation.
PCT/CN2021/109535 2021-02-26 2021-07-30 A state information estimation method and apparatus WO2022179047A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110217283.5 2021-02-26
CN202110217283.5A CN114964217A (zh) 2021-02-26 2021-02-26 A state information estimation method and apparatus

Publications (1)

Publication Number Publication Date
WO2022179047A1 true WO2022179047A1 (zh) 2022-09-01

Family

ID=82973589

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109535 WO2022179047A1 (zh) 2021-07-30 A state information estimation method and apparatus

Country Status (2)

Country Link
CN (1) CN114964217A (zh)
WO (1) WO2022179047A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080167814A1 (en) * 2006-12-01 2008-07-10 Supun Samarasekera Unified framework for precise vision-aided navigation
CN109506642A (zh) 2018-10-09 2019-03-22 浙江大学 Robot multi-camera visual-inertial real-time positioning method and device
CN112050806A (zh) 2019-06-06 2020-12-08 北京初速度科技有限公司 Positioning method and device for a moving vehicle
CN112115980A (zh) 2020-08-25 2020-12-22 西北工业大学 Binocular visual odometry design method based on optical flow tracking and point-line feature matching
CN112270710A (zh) 2020-11-16 2021-01-26 Oppo广东移动通信有限公司 Pose determination method and apparatus, storage medium, and electronic device
CN112269851A (zh) 2020-11-16 2021-01-26 Oppo广东移动通信有限公司 Map data update method and apparatus, storage medium, and electronic device
CN112304307A (zh) 2020-09-15 2021-02-02 浙江大华技术股份有限公司 Positioning method, apparatus and storage medium based on multi-sensor fusion


Also Published As

Publication number Publication date
CN114964217A (zh) 2022-08-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927476

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21927476

Country of ref document: EP

Kind code of ref document: A1