WO2022033139A1 - A self-motion estimation method and related apparatus - Google Patents
A self-motion estimation method and related apparatus
- Publication number
- WO2022033139A1 (PCT/CN2021/098209)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vector
- velocity vector
- sensor
- carrier
- coordinate system
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
- G01S13/931—Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/18—Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
Definitions
- the present application relates to the field of sensor technology, and in particular, to a self-motion estimation method and related devices.
- ADAS Advanced driver-assistance systems
- ADS autonomous driving systems
- moving objects include vehicles, pedestrians, etc.
- stationary objects include obstacles, guardrails, light poles, and buildings.
- different methods are usually used for analysis and processing, such as classifying, identifying and tracking moving targets, and classifying and identifying stationary targets to provide additional information for unmanned vehicles, such as avoiding obstacles and providing a drivable area.
- Sensors can usually be installed on different carrier platforms, such as vehicles, ships, satellites, drones, and robots, and the sensors will follow the movement of the carrier platform where the sensors are located.
- on the one hand, the motion of the sensor makes it impossible to analyze moving targets and stationary targets independently, so the motion of the sensor needs to be estimated in order to separate moving targets from stationary targets; on the other hand, the tracking of a moving target is usually based on a model, and the model usually assumes motion relative to the ground or a geodetic coordinate system, but the motion of the sensor will cause the model to fail or the tracking performance to degrade, so the motion of the sensor or of the sensor carrier needs to be compensated. In addition, for different practical scenarios, it is also crucial to realize the positioning and tracking of the sensor motion platform through sensor motion estimation.
- the embodiments of the present application disclose a self-motion estimation method and a related device, which can determine the speed of a sensor and a carrier where the sensor is located, and improve the accuracy of the self-motion estimation of the sensor or the carrier where the sensor is located.
- a first aspect of the embodiments of the present application discloses a self-motion estimation method, including:
- acquiring a first rotational velocity vector ω_s and a normalized or scaled first translational velocity vector t′ of the self-motion of a sensor;
- determining a second rotational velocity vector ω_e and a second translational velocity vector t_e of a carrier according to the first rotational velocity vector ω_s, the first translational velocity vector t′ and the external parameters of the sensor;
- the carrier is the carrier on which the sensor is located;
- the external parameters of the sensor include the transformation relationship between the sensor coordinate system and the carrier coordinate system.
- usually, only the normalized or scaled first translational velocity vector t′ of the self-motion of the sensor can be obtained, so the information of one degree of freedom is lacking.
- the embodiment of the present application can accurately obtain the second rotational velocity vector ω_e and the complete second translational velocity vector t_e of the carrier, that is, recover the missing information through the external parameters, so as to estimate the motion speed of the sensor and of the carrier where the sensor is located more accurately. This improves the accuracy of the self-motion estimation of the sensor or of the carrier where the sensor is located, thereby improving the safety of assisted driving, automatic driving or unmanned driving.
- the transformation relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
- determining the second rotational velocity vector ω_e and the second translational velocity vector t_e of the carrier according to the first rotational velocity vector ω_s, the first translational velocity vector t′ and the external parameters of the sensor includes: determining the second rotational velocity vector ω_e of the carrier according to the rotation matrix R and the first rotational velocity vector ω_s.
- for example, the second rotational velocity vector is ω_e = R·ω_s.
- the second rotational velocity vector ω_e of the carrier is determined according to the external parameters of the sensor coordinate system relative to the carrier coordinate system, that is, the rotation matrix R (or the quaternion or the Euler angles) and the first rotational velocity vector ω_s.
- the external parameters of the sensor relative to the carrier can be effectively used to obtain the rotational angular velocity of the carrier where the sensor is located, thereby effectively improving the accuracy of estimating the rotational speed of the carrier where the sensor is located.
- determining the second rotational velocity vector ω_e and the second translational velocity vector t_e of the carrier according to the first rotational velocity vector ω_s, the first translational velocity vector t′ and the external parameters of the sensor includes: determining, according to the second rotational velocity vector ω_e and the translation vector r, the instantaneous velocity component caused by the rotation of the carrier; determining, according to the rotation matrix R and the first translational velocity vector t′, a normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system; and determining the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
- determining the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector includes: determining the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector; and determining the second translational velocity vector t_e of the carrier from the magnitude of the translational velocity vector of the sensor coordinate system, the normalized instantaneous velocity vector and the instantaneous velocity component.
- the second translational velocity vector of the carrier is t_e = s·v + t_R, where s is the magnitude of the translational velocity vector of the sensor coordinate system, v is the normalized instantaneous velocity vector, and t_R is the instantaneous velocity component.
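The relations in the bullets above can be sketched numerically. The concrete formulas used here (ω_e = R·ω_s, t_R = ω_e × r, and a planar-motion constraint to fix the magnitude s) are one plausible reading of the claims, not the patent's literal equations, and all numeric values are illustrative.

```python
import numpy as np

# External parameters of the sensor: rotation matrix R and translation
# vector r of the sensor coordinate system relative to the carrier frame.
theta = 0.05                                  # assumed small mounting pitch (rad)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
r = np.array([1.5, 0.3, 1.2])                 # assumed lever arm (m)

omega_s = np.array([0.0, 0.0, 0.1])           # first rotational velocity vector
t_prime = np.array([0.995, 0.1, 0.0])         # normalized first translational velocity

# Second rotational velocity vector of the carrier.
omega_e = R @ omega_s

# Instantaneous velocity component caused by the rotation of the carrier.
t_R = np.cross(omega_e, r)

# Normalized instantaneous velocity vector of the sensor in the carrier frame.
v_hat = R @ t_prime

# Planar motion: the vertical component of the carrier velocity vanishes,
# t_e[2] = s * v_hat[2] + t_R[2] = 0, which fixes the unknown magnitude s.
s = -t_R[2] / v_hat[2]

# Complete second translational velocity vector of the carrier.
t_e = s * v_hat + t_R
```

With the scale s recovered in this way, a complete translational velocity of the sensor itself follows as s·t′ in the sensor frame, under the same assumptions.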
- the method further includes: determining a third translational velocity vector t_s of the sensor self-motion according to the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector t′.
- the third translational velocity vector t_s of the sensor self-motion is determined from the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector t′; the first translational velocity vector t′ is normalized or scaled, whereas the third translational velocity vector t_s is the complete translational velocity vector.
- the embodiments of the present application provide a method for determining the complete third translational velocity vector t_s of the self-motion of the sensor in the absence of scale information.
- the sensor is a visual sensor.
- the method further includes: acquiring a flow vector of a stationary obstacle, where the flow vector includes the motion vector of the stationary obstacle on the image plane of the visual sensor; and determining the depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s and the first rotational velocity vector ω_s.
- the depth z of the stationary obstacle is determined from the flow vector of the stationary obstacle, the third translational velocity vector t_s and the first rotational velocity vector ω_s; in the prior art, by contrast, there is a scale problem in which the depth information is coupled with the components of the translational velocity, so an accurate depth estimate usually cannot be obtained.
- the embodiment of the present application can obtain a more accurate depth z by using the flow vector.
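As a sketch of the depth recovery: for a calibrated camera, the flow u of a stationary point at normalized image coordinates (x, y) and depth z follows the standard instantaneous-motion model u = (1/z)·A(x, y)·t_s + B(x, y)·ω_s. The code below is a generic illustration of that model rather than the patent's formulation; it fits z by least squares once t_s and ω_s are known.

```python
import numpy as np

def flow_matrices(x, y):
    # Standard instantaneous optical-flow model in normalized coordinates:
    #   u = (1/z) * A @ t + B @ omega
    A = np.array([[-1.0, 0.0, x],
                  [0.0, -1.0, y]])
    B = np.array([[x * y, -(1.0 + x * x), y],
                  [1.0 + y * y, -x * y, -x]])
    return A, B

def depth_from_flow(u, x, y, t_s, omega_s):
    """Least-squares depth of a stationary point from one flow vector."""
    A, B = flow_matrices(x, y)
    a = A @ t_s                # translational flow, scaled by the unknown 1/z
    b = u - B @ omega_s        # flow with the rotational part removed
    inv_z = float(a @ b) / float(a @ a)
    return 1.0 / inv_z

# Synthetic check: generate a flow vector from a known depth, then recover it.
t_s = np.array([0.2, 0.0, 1.0])        # complete translational velocity (assumed)
omega_s = np.array([0.0, 0.02, 0.0])   # first rotational velocity vector
x, y, z_true = 0.1, -0.05, 12.0
A, B = flow_matrices(x, y)
u = A @ t_s / z_true + B @ omega_s
z_est = depth_from_flow(u, x, y, t_s, omega_s)
```

Because t_s here is the complete (metric) translational velocity, the recovered z is metric as well; with only a normalized t′, the depth would be known only up to the same unknown scale.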
- a second aspect of the embodiments of the present application discloses a self-motion estimation device, including:
- a first acquisition unit, configured to acquire the first rotational velocity vector ω_s and the normalized or scaled first translational velocity vector t′ of the sensor self-motion;
- a first determination unit, configured to determine the second rotational velocity vector ω_e and the second translational velocity vector t_e of the carrier according to the first rotational velocity vector ω_s, the first translational velocity vector t′ and the external parameters of the sensor, where the carrier is the carrier on which the sensor is located, and the external parameters of the sensor include the transformation relationship between the sensor coordinate system and the carrier coordinate system.
- usually, only the normalized or scaled first translational velocity vector t′ of the self-motion of the sensor can be obtained, so the information of one degree of freedom is usually lacking.
- the embodiment of the present application can accurately obtain the second rotational velocity vector ω_e and the complete second translational velocity vector t_e of the carrier, that is, recover the missing information through the external parameters of the sensor. As a result, the motion speed of the sensor and of the carrier where the sensor is located is estimated more accurately, the accuracy of the self-motion estimation of the sensor or of the carrier where the sensor is located is improved, and the safety of assisted driving, automatic driving or unmanned driving is improved.
- the transformation relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
- the first determination unit is configured to determine the second rotational velocity vector ω_e of the carrier according to the rotation matrix R and the first rotational velocity vector ω_s.
- the second rotational velocity vector ω_e of the carrier is determined according to the external parameters of the sensor coordinate system relative to the carrier coordinate system, that is, the rotation matrix R (or the quaternion or the Euler angles) and the first rotational velocity vector ω_s.
- the external parameters of the sensor relative to the carrier can be effectively used to obtain the rotational angular velocity of the carrier where the sensor is located, thereby effectively improving the accuracy of estimating the rotational speed of the carrier where the sensor is located.
- the first determination unit is configured to determine, according to the second rotational velocity vector ω_e and the translation vector r, the instantaneous velocity component caused by the rotation of the carrier; determine, according to the rotation matrix R and the first translational velocity vector t′, a normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system; and determine the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
- the first determination unit is configured to determine the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector;
- a second translational velocity vector t_e of the carrier is determined from the magnitude of the translational velocity vector of the sensor coordinate system, the normalized instantaneous velocity vector, and the instantaneous velocity component.
- the second translational velocity vector of the carrier is t_e = s·v + t_R, where s is the magnitude of the translational velocity vector of the sensor coordinate system, v is the normalized instantaneous velocity vector, and t_R is the instantaneous velocity component.
- the apparatus further includes: a second determination unit, configured to determine a third translational velocity vector t_s of the sensor self-motion according to the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector t′.
- the third translational velocity vector t_s of the sensor self-motion is determined from the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector t′; the first translational velocity vector t′ is normalized or scaled, whereas the third translational velocity vector t_s is the complete translational velocity vector.
- the embodiments of the present application provide a method for determining the complete third translational velocity vector t_s of the self-motion of the sensor in the absence of scale information.
- the sensor is a visual sensor.
- the device further includes: a second acquisition unit, configured to acquire a flow vector of a stationary obstacle, where the flow vector includes the motion vector of the stationary obstacle on the image plane of the vision sensor; and a third determination unit, configured to determine the depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s and the first rotational velocity vector ω_s.
- the depth z of the stationary obstacle is determined from the flow vector of the stationary obstacle, the third translational velocity vector t_s and the first rotational velocity vector ω_s; in the prior art, by contrast, there is a scale problem in which the depth information is coupled with the components of the translational velocity, so an accurate depth estimate usually cannot be obtained.
- the embodiment of the present application can obtain a more accurate depth z by using the optical flow vector.
- a third aspect of the embodiments of the present application discloses an apparatus for self-motion estimation, where the apparatus includes at least one processor and a communication interface, and optionally, further includes a memory.
- the memory is used to store a computer program, and the at least one processor invokes the computer program to perform the following operations:
- the carrier is the carrier on which the sensor is located, and the external parameters of the sensor include the transformation relationship between the sensor coordinate system and the carrier coordinate system.
- usually, only the normalized or scaled first translational velocity vector t′ of the self-motion of the sensor can be obtained.
- the embodiment of the present application can accurately obtain the second rotational velocity vector ω_e and the complete second translational velocity vector t_e of the carrier, that is, recover the missing information through the external parameters of the sensor, thereby estimating the motion speed of the sensor and of the carrier where the sensor is located more accurately, improving the accuracy of the self-motion estimation of the sensor or of the carrier where the sensor is located, and thereby improving the safety of autonomous driving.
- the transformation relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
- the at least one processor is configured to determine the second rotational velocity vector ω_e of the carrier according to the rotation matrix R and the first rotational velocity vector ω_s.
- the second rotational velocity vector ω_e of the carrier is determined according to the external parameters of the sensor coordinate system relative to the carrier coordinate system, that is, the rotation matrix R (or the quaternion or the Euler angles) and the first rotational velocity vector ω_s.
- the external parameters of the sensor relative to the carrier can be effectively used to obtain the rotational angular velocity of the carrier where the sensor is located, thereby effectively improving the accuracy of estimating the rotational speed of the carrier where the sensor is located.
- the at least one processor is configured to determine, according to the second rotational velocity vector ω_e and the translation vector r, the instantaneous velocity component caused by the rotation of the carrier; determine, according to the rotation matrix R and the first translational velocity vector t′, a normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system; and determine the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
- the at least one processor is configured to determine the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector;
- a second translational velocity vector t_e of the carrier is determined from the magnitude of the translational velocity vector of the sensor coordinate system, the normalized instantaneous velocity vector, and the instantaneous velocity component.
- the second translational velocity vector of the carrier is t_e = s·v + t_R, where s is the magnitude of the translational velocity vector of the sensor coordinate system, v is the normalized instantaneous velocity vector, and t_R is the instantaneous velocity component.
- the at least one processor is further configured to determine a third translational velocity vector t_s of the sensor self-motion according to the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector t′.
- the third translational velocity vector t_s of the sensor self-motion is determined from the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector t′; the first translational velocity vector t′ is normalized or scaled, whereas the third translational velocity vector t_s is the complete translational velocity vector.
- the embodiments of the present application provide a method for determining the complete third translational velocity vector t_s of the self-motion of the sensor in the absence of scale information.
- the sensor is a visual sensor.
- the at least one processor is further configured to acquire a flow vector of a stationary obstacle, where the flow vector includes the motion vector of the stationary obstacle on the image plane of the vision sensor; the depth z of the stationary obstacle is determined from the flow vector of the stationary obstacle, the third translational velocity vector t_s and the first rotational velocity vector ω_s.
- the depth z of the stationary obstacle is determined from the flow vector of the stationary obstacle, the third translational velocity vector t_s and the first rotational velocity vector ω_s; in the prior art, by contrast, there is a scale problem in which the depth information is coupled with the components of the translational velocity, so an accurate depth estimate usually cannot be obtained.
- the embodiment of the present application can obtain a more accurate depth z by using the optical flow vector.
- a fourth aspect of the embodiments of the present application discloses a computer program product that, when run on a processor, implements the method described in any one aspect or an optional solution of any one aspect.
- a fifth aspect of the embodiments of the present application discloses a chip system, where the chip system includes at least one processor and a communication interface.
- the chip system further includes a memory, and the at least one processor is configured to call the computer program stored in the at least one memory, so that the device where the chip system is located implements the method described in any aspect or an optional solution of any aspect.
- a sixth aspect of the embodiments of the present application discloses a computer-readable storage medium, where the computer storage medium stores a computer program, and when the computer program is executed by a processor, the method described in any aspect or an optional solution of any aspect is implemented.
- a seventh aspect of the embodiments of the present application discloses a terminal, where the terminal includes at least one processor and a communication interface and, optionally, further includes a memory; the at least one processor is configured to call a computer program stored in the at least one memory, so that the terminal implements the method described in any aspect or an optional solution of any aspect.
- An eighth aspect of the embodiments of the present application discloses a sensor, where the sensor includes at least one processor and a communication interface.
- the sensor further includes a memory, and the at least one processor is configured to call a computer program stored in the at least one memory, so that the device where the sensor is located implements the method described in any aspect or an optional solution of any aspect.
- FIG. 1 is a schematic structural diagram of a self-motion estimation system provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of an inertial measurement unit provided by an embodiment of the present application.
- FIG. 3 is a flowchart of a self-motion estimation method provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of a representation form of a sensor coordinate system provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of a representation form of a carrier coordinate system provided by an embodiment of the present application.
- FIG. 6 is a schematic structural diagram of a self-motion estimation apparatus provided by an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of another self-motion estimation apparatus provided by an embodiment of the present application.
- FIG. 1 is a schematic structural diagram of a self-motion estimation system provided by an embodiment of the present application.
- the system includes a sensor 1001, a motion measurement module 1002, and a data processing module 1003, where the sensor 1001 may be a visual sensor, such as an infrared thermal imaging sensor, a camera, or a video camera.
- the sensor 1001 is used to provide visual measurement data, such as images or videos;
- the motion measurement module 1002 is used to obtain motion measurement data from the sensor data, such as the first rotational velocity vector ω_s of the self-motion of the sensor and the normalized or scaled first translational velocity vector t′;
- the data processing module 1003 is used to process the measurement data provided by the motion measurement module 1002.
- the motion measurement module 1002 and the data processing module 1003 may be in the same processor.
- the sensor 1001, the motion measurement module 1002 and the data processing module 1003 can be integrated together, fully or partially, in a wired or wireless manner; for example, the sensor 1001, the motion measurement module 1002 and the data processing module 1003 are deployed on one processing system, in which case the self-motion estimation system can be on the same vehicle, the same airborne or spaceborne platform, or the same intelligent body. In yet another example, the sensor 1001 is on the vehicle, the airborne or spaceborne platform, or the intelligent body, while the motion measurement module 1002 and/or the data processing module 1003 are in the cloud.
- in that case, the sensor 1001 sends the video data or parameterized data, or the data provided by the motion measurement module 1002, to the data processing module 1003 in the cloud, and the data processing module 1003 sends the processed result to the vehicle, the airborne or spaceborne platform, or the intelligent body.
- the vehicle may be a car, motorcycle or bicycle, etc.
- the airborne platform may be a drone, a helicopter or a jet aircraft, etc.
- the spaceborne platform may be a satellite, etc., and the intelligent body may be, for example, a robot.
- an inertial measurement unit (IMU) is usually installed in the vehicle; as shown in Figure 2, the IMU is a device that measures the three-axis attitude angle (or angular velocity) and acceleration of an object.
- an IMU is typically equipped with three single-axis gyroscopes and three single-axis accelerometers. The gyroscopes detect the angular velocity signal of the carrier relative to the navigation coordinate system, and the accelerometers detect the acceleration signal of the object along each axis of the carrier coordinate system.
- by measuring the angular velocity and acceleration of the object in three-dimensional space, the movement speed and attitude of the object can be calculated.
- the moving speed of the object is usually obtained by accumulating the acceleration, but the measurement error of the accelerometer accumulates over time, so there is an error-accumulation problem and additional calibration with other sensors is required.
- the accuracy of the IMU generally used in the vehicle is too low, and the cost of selecting a high-precision IMU is high.
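A toy numeric illustration of the accumulation problem mentioned above: a constant accelerometer bias, integrated once to obtain velocity, produces a velocity error that grows linearly with time (the bias and rate values are illustrative only).

```python
# Velocity error from dead-reckoning a biased accelerometer.
bias = 0.05        # assumed constant accelerometer bias, m/s^2
dt = 0.01          # 100 Hz sampling interval
v_err = 0.0
minute_marks = []
for step in range(1, 60_001):          # 10 minutes of integration
    v_err += bias * dt                 # the bias integrates step by step
    if step % 6_000 == 0:              # record the error once per minute
        minute_marks.append(v_err)
# The error reaches bias * t: about 3 m/s after 1 minute, 30 m/s after 10.
```

This is why an uncorrected low-cost IMU cannot provide velocity on its own for long, and why calibration against other sensors is needed.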
- a millimeter-wave radar sensor is typically installed in the vehicle, and the radar sensor typically provides measurements such as distance, azimuth, and radial velocity.
- the instantaneous velocity of the sensor relative to the ground can be obtained according to the least squares method or other methods.
- usually only two velocity components relative to the radar sensor coordinate system can be obtained, namely the lateral velocity and the radial velocity, but the third velocity component cannot be obtained.
- the radial velocity error is relatively low, while the lateral velocity error is relatively high, and it is difficult to meet the performance requirements of the system.
- in addition, this method can only obtain one rotational angular velocity component, namely the yaw rate, but cannot obtain the pitch rate or the roll rate.
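The least-squares ego-velocity estimate mentioned above can be sketched generically: each stationary detection at unit direction d_i contributes a radial-velocity measurement v_r,i = -d_i·v, and stacking the detections gives an overdetermined linear system for the in-plane sensor velocity v. This is a textbook formulation, not the patent's method, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
v_true = np.array([15.0, 0.5])        # sensor velocity in the radar plane (m/s)

# Unit direction vectors to stationary detections (azimuths in radians).
az = rng.uniform(-1.0, 1.0, size=20)
D = np.stack([np.cos(az), np.sin(az)], axis=1)

# A stationary target appears to recede at minus the projected ego-velocity.
v_r = -D @ v_true + rng.normal(0.0, 0.01, size=20)   # small measurement noise

# Least-squares estimate of the ego-velocity from the radial velocities.
v_est, *_ = np.linalg.lstsq(-D, v_r, rcond=None)
```

The geometry also shows why the lateral component is the weak one: when detections cluster near boresight, the sin(az) entries of D are close to zero, so the lateral component is poorly conditioned, matching the error behavior described above.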
- a camera or video camera is installed in the vehicle to provide continuous images.
- the optical flow method, the feature point method, or direct optimization of a brightness objective function can be used to obtain the (normalized or scaled) translational velocity vector and the rotational velocity vector of the sensor relative to the ground.
- however, this method has a scale problem: the depth and the components of the translational velocity are coupled together, so a sufficiently accurate depth usually cannot be obtained, and as a result an accurate translational velocity cannot be obtained; only a normalized or scaled translational velocity is available.
- FIG. 3 is a self-motion estimation method provided by an embodiment of the present application.
- the execution subject of the method may be a sensor system, a fusion perception system, or a planning/control system integrating the above systems, such as an assisted driving or automatic driving system, etc.
- the execution body of the method may also be software or hardware (e.g., a data processing device connected or integrated with the corresponding sensor wirelessly or by wire).
- the following different execution steps may be implemented in a centralized manner, or, the following different execution steps may also be implemented in a distributed manner.
- the method includes but is not limited to the following steps:
- Step S301: Obtain from the sensor the first rotational velocity vector ω_s of the sensor's self-motion and the normalized or scaled first translational velocity vector t′.
- the first translational velocity vector t′ is also known as the first translational motion velocity vector.
- the sensor may be a visual sensor, such as a camera, a camera, an infrared sensor, or other imaging sensors, etc., which is not limited in this embodiment of the present application.
- the first rotational velocity vector ω_s and the first translational velocity vector t′ of the self-motion of the sensor can be obtained in a wired or wireless manner: for example, read directly within the same processor module; within the same hardware system, obtained through a bus such as a peripheral component interconnect (PCI) bus or through the various types of controller area network (CAN) buses in the vehicle; or obtained wirelessly, for example through cloud communication. Of course, they can also be obtained in other forms, which is not limited in this embodiment of the present application.
- the first rotational velocity vector ω s and the first translational velocity vector of the sensor's self-motion can be determined based on the optical or geometric characteristics of feature points, lines, planes, or regions in the data obtained by the sensor, for example, based on the 8-point method, the 5-point method, the homography method, or the optical flow method, which is not limited in the embodiments of the present application.
- t′ is the first translational velocity vector of the sensor's self-motion; the normalized or scaled first translational velocity vector can be expressed as t′/s′, where s′ is the normalization or scaling value.
- the normalized value s′ or the scaling value s′ can also be a certain component of the vector t′, for example, the third component t′ z of the vector t′, that is, s′ = t′ z .
- the normalization value s′ or the scaling value s′ may also be another unknown scalar, such as depth, which is not limited here.
- the sensor or the carrier on which the sensor is located moves in a plane, such as the ground or a plane track.
- Step S302: Determine the second rotational velocity vector ω e and/or the second translational velocity vector t e of the carrier according to the first rotational velocity vector ω s , the normalized or scaled first translational velocity vector, and the external parameters of the sensor.
- the second translational velocity vector te of the carrier is a complete translational velocity vector
- the external parameters of the sensor include the conversion relationship between the sensor coordinate system and the carrier coordinate system
- the external parameters may include translation parameters of the sensor coordinate system relative to the carrier coordinate system and/or rotation parameters of the sensor coordinate system relative to the carrier coordinate system; or, translation parameters of the carrier coordinate system relative to the sensor coordinate system and/or rotation parameters of the carrier coordinate system relative to the sensor coordinate system.
- the translation of the sensor coordinate system relative to the carrier coordinate system may be a translation vector of the origin of the sensor coordinate system relative to the origin of the carrier coordinate system; or, the position vector of the origin of the sensor coordinate system in the carrier coordinate system, which is not limited here.
- the rotation of the sensor coordinate system relative to the carrier coordinate system may be represented by a rotation matrix, a quaternion, or Euler angles of the rotation of the sensor coordinate system relative to the carrier coordinate system, which is not limited here.
- the translation of the carrier coordinate system relative to the sensor coordinate system may be a translation vector of the origin of the carrier coordinate system relative to the origin of the sensor coordinate system; or, the position vector of the origin of the carrier coordinate system in the sensor coordinate system, which is not limited here.
- the rotation of the carrier coordinate system relative to the sensor coordinate system may be represented by a rotation matrix, a quaternion, or Euler angles of the rotation of the carrier coordinate system relative to the sensor coordinate system, which is not limited here.
- the external parameters of the sensor coordinate system relative to the carrier coordinate system are related to the installation position of the sensor on the carrier.
- the sensor coordinate system is shown in Figure 4.
- the X-axis direction is forward, and a right-handed coordinate system is used.
- FIG. 4 is a representation form of a sensor coordinate system, for example, the sensor coordinate system is represented by O 1 XYZ.
- the carrier coordinate system is fixed to the carrier, and the origin of the carrier coordinate system is the center of the carrier.
- the origin of the vehicle coordinate system is located at the center of the rear axle, as shown in Figure 5.
- the X-axis direction is forward, and a right-handed coordinate system is used.
- Figure 5 is a representation of a carrier coordinate system, for example, the carrier coordinate system is represented by O 2 X b Y b Z b , wherein O 2 X b is to the right along the horizontal axis of the carrier, and O 2 Y b is forward along the longitudinal axis of the carrier, O 2 Z b is upward along the vertical axis of the carrier.
- the conversion relationship may include a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
- the target position vector measured by the sensor is p b , and the corresponding position vector in the carrier coordinate system is p e , which can be expressed as: p e = R·p b + r.
- when the sensor or the carrier on which the sensor is located moves in a plane, the above coordinate rotation may be represented by a rotation around the z-axis direction.
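As an illustrative sketch (not part of the patent text), the extrinsic transform p e = R·p b + r above can be written as follows, under the assumption of planar motion so that R is a rotation about the z-axis; the mounting values used are hypothetical:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for a rotation of theta radians about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def sensor_to_carrier(p_b, R, r):
    """Convert a position vector measured in the sensor frame (p_b)
    into the carrier frame: p_e = R @ p_b + r."""
    return R @ p_b + r

# Hypothetical extrinsics: sensor mounted 1.5 m ahead of the carrier
# origin and rotated 90 degrees about the z-axis.
R = rot_z(np.pi / 2)
r = np.array([0.0, 1.5, 0.0])
p_b = np.array([1.0, 0.0, 0.0])      # target 1 m along the sensor x-axis
p_e = sensor_to_carrier(p_b, R, r)   # -> approximately (0.0, 2.5, 0.0)
```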
- the second rotational velocity vector ω e of the carrier is determined according to the rotation matrix R and the first rotational velocity vector ω s .
- the second rotational velocity vector ω e of the carrier is the product of the rotation matrix R and the first rotational velocity vector ω s .
- given the rotation matrix R and the first rotational velocity vector ω s , the second rotational velocity vector of the carrier is determined as: ω e = R·ω s .
- the rotation of the sensor coordinate system relative to the carrier coordinate system can also be represented by a quaternion or Euler angles; in that case, the second rotational velocity vector of the carrier can likewise be determined according to the quaternion or the Euler angles, which is not described further here.
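A minimal sketch of the relation ω e = R·ω s , using a hypothetical extrinsic rotation matrix (here a 180-degree rotation about z):

```python
import numpy as np

def carrier_rotation_rate(R, omega_s):
    """Second rotational velocity vector of the carrier as the product of
    the extrinsic rotation matrix R and the sensor's first rotational
    velocity vector omega_s: omega_e = R @ omega_s."""
    return np.asarray(R) @ np.asarray(omega_s)

# Hypothetical extrinsics: sensor frame rotated 180 degrees about the
# z-axis relative to the carrier frame.
R = np.diag([-1.0, -1.0, 1.0])
omega_s = np.array([0.01, 0.0, 0.1])   # rad/s, in the sensor frame
omega_e = carrier_rotation_rate(R, omega_s)
```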
- the sensor coordinate system and the carrier coordinate system have the same axis directions corresponding to each other.
- the second translational velocity vector te of the carrier is determined.
- the external parameters of the sensor may include a translation vector r of the sensor coordinate system relative to the carrier coordinate system and/or a rotation parameter, for example, the rotation parameter may be a rotation matrix, a quaternion, or an Euler angle or the like.
- the external parameters of the sensor include a translation vector r.
- the instantaneous velocity component caused by the rotation of the carrier is determined according to the second rotational velocity vector ω e and the translation vector r; the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system is determined according to the rotation parameter and the normalized or scaled first translational velocity vector; and the second translational velocity vector t e of the carrier is determined according to the instantaneous velocity component and the normalized instantaneous velocity vector.
- the instantaneous velocity component t R caused by the rotation of the carrier is the negative vector of the cross product of the translation vector r and the second rotational velocity vector ω e , that is, t R = -(r × ω e ) = ω e × r, where × represents the cross product of vectors.
- when the sensor or the carrier on which the sensor is located moves in a plane, such as the ground or a planar track, the first rotational velocity vector ω s can be simplified to be represented by its z-component ω s,z , and the instantaneous velocity component t R can be simplified to a two-dimensional vector with two components t R,x and t R,y , namely t R,x = -ω e,z ·r y and t R,y = ω e,z ·r x .
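The rotation-induced component t R = -(r × ω e ) = ω e × r, specialized to the planar case, can be sketched as follows (the offset and rotation-rate values are hypothetical):

```python
import numpy as np

def rotation_induced_velocity(r, omega_e):
    """Instantaneous velocity component caused by the carrier's rotation:
    t_R = -(r x omega_e) = omega_e x r."""
    return np.cross(omega_e, r)

# Planar case: rotation only about the z-axis.
omega_e = np.array([0.0, 0.0, 0.2])    # rad/s
r = np.array([0.5, 1.0, 0.0])          # sensor offset in the carrier frame (m)
t_R = rotation_induced_velocity(r, omega_e)
# t_R = (-omega_z * r_y, omega_z * r_x, 0) = (-0.2, 0.1, 0.0)
```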
- the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system can be determined according to a rotation parameter and the normalized or scaled first translational velocity vector.
- the rotation parameter of the sensor relative to the carrier may be, for example, a rotation matrix, a quaternion, or Euler angles.
- determining the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system may mean that the normalized instantaneous velocity vector is the product of the rotation matrix R and the normalized or scaled first translational velocity vector.
- when R is an identity matrix, the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system is equal to the normalized or scaled first translational velocity vector.
- determining the second translational velocity vector t e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector may mean that the second translational velocity vector t e of the carrier is the difference between the instantaneous velocity vector and the instantaneous velocity component, where the instantaneous velocity vector is determined from the normalized instantaneous velocity vector and the instantaneous velocity component.
- t R,y is the second component of t R , that is, the component along the y-axis direction; similarly, the second component of the normalized instantaneous velocity vector is its component along the y-axis.
- the instantaneous velocity vector is determined according to the normalized instantaneous velocity vector and the instantaneous velocity component; specifically, the instantaneous velocity vector is the product of the normalized instantaneous velocity vector and the magnitude of the translational velocity vector of the sensor coordinate system, where the magnitude is determined based on the second component of the instantaneous velocity component t R and the second component of the normalized instantaneous velocity vector.
- determining the second translational velocity vector t e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector includes: determining the magnitude s of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector, for example, as the ratio of the second component of t R to the second component of the normalized instantaneous velocity vector; and determining the second translational velocity vector t e of the carrier according to the magnitude s, the normalized instantaneous velocity vector, and the instantaneous velocity component, that is, as the product of s and the normalized instantaneous velocity vector minus the instantaneous velocity component t R .
- t e,y is 0 because of the vehicle design: when the vehicle rotates, the velocity of the center of the rear axle of the vehicle along the y-axis direction of the carrier coordinate system must be 0; if t e,y were not 0, the vehicle would skid; therefore, t e,y is 0.
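Under the stated planar-motion and no-slip assumptions (t e,y = 0), the scale s and the complete second translational velocity vector t e can be recovered as sketched below; the symbol t_bar stands for the normalized instantaneous velocity vector and is introduced here purely for illustration, as are the numeric values:

```python
import numpy as np

def recover_scale_and_velocity(t_bar, t_R):
    """Recover the velocity scale s from the constraint t_e,y = 0
    (no lateral slip at the carrier origin), then form the complete
    second translational velocity vector t_e = s * t_bar - t_R."""
    s = t_R[1] / t_bar[1]          # ratio of the second (y) components
    t_e = s * t_bar - t_R
    return s, t_e

# Hypothetical values in the carrier frame.
t_bar = np.array([0.6, 0.8, 0.0])   # normalized instantaneous velocity
t_R = np.array([-0.2, 0.1, 0.0])    # rotation-induced component
s, t_e = recover_scale_and_velocity(t_bar, t_R)
# s = 0.1 / 0.8 = 0.125; t_e = (0.275, 0.0, 0.0), so t_e,y = 0 as required
```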
- this embodiment of the present application may further include step S303:
- Step S303: Determine the third translational velocity vector t s of the sensor's self-motion according to the magnitude s of the translational velocity vector of the sensor coordinate system and the normalized or scaled first translational velocity vector.
- the third translational velocity vector ts of the sensor self-motion is the complete translational velocity vector.
- determining the third translational velocity vector t s of the sensor's self-motion according to the magnitude s of the translational velocity vector of the sensor coordinate system and the normalized or scaled first translational velocity vector may include: the third translational velocity vector of the sensor's self-motion is the instantaneous velocity vector of the sensor; or, the third translational velocity vector of the sensor's self-motion is the instantaneous velocity vector of the origin of the sensor coordinate system.
- the third translational velocity vector of the sensor's self-motion can be transformed into the sensor coordinate system and expressed as t′ s , for example, as the product of the magnitude s and the normalized or scaled first translational velocity vector.
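Step S303 can be sketched in one line: once the magnitude s is known, rescaling the normalized first translational velocity vector yields the complete third translational velocity vector (the unit vector used here is hypothetical):

```python
import numpy as np

# Step S303 sketch: rescale the normalized first translational velocity
# vector (t_bar_prime, a hypothetical unit vector in the sensor frame)
# by the magnitude s recovered in step S302.
s = 0.125                                  # magnitude from step S302
t_bar_prime = np.array([0.0, 1.0, 0.0])    # normalized first translational velocity
t_s = s * t_bar_prime                      # complete translational velocity (m/s)
```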
- this embodiment of the present application may further include step S304:
- Step S304 Determine the depth z of the stationary obstacle.
- the sensor is a visual sensor.
- step S304 includes step S3041 and step S3042, the details of which are as follows:
- Step S3041 Obtain the flow vector of the stationary obstacle.
- the flow vector includes the motion vector of the projection of the stationary obstacle on the image plane of the vision sensor.
- the flow vector can be obtained by optical flow; for example, it can be determined by algorithms such as the Lucas-Kanade optical flow algorithm or the Horn-Schunck optical flow algorithm.
- the embodiments of the present application are not limited.
- the embodiment of the present application provides a method for obtaining the flow vector of a stationary obstacle using the Lucas-Kanade optical flow method, as follows:
- the flow vector corresponding to the position (p x , p y ) of the stationary obstacle on the image can be calculated from the optical flow (u, v) corresponding to the position (p x , p y ); specifically, it can be obtained from the following relation:
- G is the gradient matrix and b is the mismatch vector.
- I x (x, y) and I y (x, y) are the gradients of the image brightness in the x and y directions at the point (x, y), and I t is the derivative of the image brightness with respect to time.
- variants of the Lucas-Kanade optical flow algorithm, such as hierarchical estimation based on a pyramid structure, may also be used, which will not be described here.
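A minimal single-point Lucas-Kanade sketch, solving G·[u, v]^T = b over a small window around the point of interest. This is a simplified illustration (no pyramid hierarchy, no border handling); the synthetic images and window size are assumptions:

```python
import numpy as np

def lucas_kanade_point(I0, I1, px, py, win=7):
    """Single-point Lucas-Kanade flow: build the gradient matrix G and
    mismatch vector b from spatial and temporal brightness derivatives
    over a (win x win) window around (px, py), then solve for (u, v)."""
    Iy, Ix = np.gradient(I0.astype(float))      # brightness gradients
    It = I1.astype(float) - I0.astype(float)    # temporal derivative
    h = win // 2
    sl = (slice(py - h, py + h + 1), slice(px - h, px + h + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    G = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])          # gradient matrix
    b = -np.array([ix @ it, iy @ it])           # mismatch vector
    return np.linalg.solve(G, b)                # flow (u, v)

# Synthetic check: a smooth pattern shifted by one pixel in x.
xx, yy = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
I0 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
I1 = np.sin(0.3 * (xx - 1.0)) + np.cos(0.2 * yy)  # content moved +1 px in x
u, v = lucas_kanade_point(I0, I1, 16, 16)         # u close to 1, v close to 0
```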
- Step S3042: Determine the depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t s , and the first rotational velocity vector ω s .
- the third translational velocity vector t s of the sensor's self-motion can be converted into the sensor coordinate system and expressed as t′ s , and ω s is the first rotational velocity vector of the sensor's self-motion.
- the flow vector of a stationary obstacle can be expressed as (u, v).
- the depth z of the stationary obstacle is determined according to the flow vector of the stationary obstacle, the third translational velocity vector t′ s , and the first rotational velocity vector ω s , which may be based on the following relations:
- s 1 = [-f 0 x] T
- s 2 = [0 -f y] T
- where × represents the cross product of vectors and f is the focal length of the camera.
- the depth z of the stationary obstacle can be obtained according to the following relation:
- w 1 (x, y) and w 2 (x, y) are weighting coefficients, which can be determined according to the estimation accuracy or errors of the above parameters u, v, t′ s , and ω s .
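The depth recovery can be sketched as follows. The exact relations in the patent are not reproduced (the formula images are missing); the rotational flow terms below are one common form of the continuous-motion optical-flow model, used here as an assumption, with each flow component modeled as s i ^T t′/z plus a depth-independent rotational term:

```python
import numpy as np

def depth_from_flow(u, v, x, y, f, t_s, omega_s, w1=0.5, w2=0.5):
    """Estimate the depth z of a stationary point from its flow (u, v):
    subtract the depth-independent rotational flow, divide s_i^T t by the
    remainder, and combine the two per-component estimates with weights."""
    wx, wy, wz = omega_s
    t = np.asarray(t_s, dtype=float)
    s1 = np.array([-f, 0.0, x])
    s2 = np.array([0.0, -f, y])
    # depth-independent flow components induced by rotation
    u_rot = (x * y / f) * wx - (f + x * x / f) * wy + y * wz
    v_rot = (f + y * y / f) * wx - (x * y / f) * wy - x * wz
    z_u = (s1 @ t) / (u - u_rot)
    z_v = (s2 @ t) / (v - v_rot)
    return w1 * z_u + w2 * z_v

# Consistency check: synthesize the flow for a known depth, then recover it.
f, x, y, z_true = 800.0, 50.0, -20.0, 25.0
t_s = np.array([0.1, 0.0, 2.0])         # hypothetical t'_s (m/s)
omega_s = np.array([0.0, 0.01, 0.002])  # hypothetical omega_s (rad/s)
u = (-f*t_s[0] + x*t_s[2]) / z_true + (x*y/f)*omega_s[0] - (f + x*x/f)*omega_s[1] + y*omega_s[2]
v = (-f*t_s[1] + y*t_s[2]) / z_true + (f + y*y/f)*omega_s[0] - (x*y/f)*omega_s[1] - x*omega_s[2]
z_est = depth_from_flow(u, v, x, y, f, t_s, omega_s)   # recovers 25.0
```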
- this embodiment of the present application may further include step S305.
- Step S305: Determine the rotational velocity vector and the translational velocity vector of the carrier according to the second rotational velocity vector ω e and the second translational velocity vector t e of the carrier, together with the rotational velocity vector ω e2 and the translational velocity vector t e2 of the carrier obtained from other sensors.
- the second rotational velocity vector ω e and the second translational velocity vector t e of the carrier may be obtained from data provided by the vision sensor, and the rotational velocity vector ω e2 and the translational velocity vector t e2 of the carrier may be obtained from data provided by other sensors, such as a radar sensor; one or more groups of rotational velocity vectors and translational velocity vectors of the carrier can be obtained through other sensors.
- for example, the rotational velocity vector ω e2 and the translational velocity vector t e2 of the carrier are obtained through the radar sensor.
- the weighted rotational velocity vector of the carrier can be obtained according to the following formula: the weighted rotational velocity vector is w 1 ω e + w 2 ω e2 , where the weighting coefficients w 1 , w 2 can be fixed values, or can be determined according to the error covariance or standard deviation of ω e and ω e2 , for example, proportional to the inverse matrix or reciprocal of the covariance or standard deviation.
- the weighted translational velocity vector can be obtained according to the following formula: the weighted translational velocity vector is w 1 t e + w 2 t e2 , where the weighting coefficients w 1 , w 2 can likewise be fixed values, or can be determined according to the error covariance or standard deviation of t e and t e2 .
- other weighting forms are also possible, which are not limited in the embodiments of the present application.
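A covariance-weighted fusion in the spirit of the formulas above can be sketched as follows; inverse-covariance weighting is one of the options mentioned, and all covariance values here are hypothetical:

```python
import numpy as np

def fuse_estimates(v1, P1, v2, P2):
    """Minimum-variance fusion of two estimates with error covariances
    P1, P2: the weights are proportional to the inverse covariances, and
    the fused estimate is S (P1^-1 v1 + P2^-1 v2), S = (P1^-1 + P2^-1)^-1."""
    iP1, iP2 = np.linalg.inv(P1), np.linalg.inv(P2)
    S = np.linalg.inv(iP1 + iP2)
    return S @ (iP1 @ v1 + iP2 @ v2)

# Hypothetical example: radar rotation-rate estimate twice as confident
# as the vision-based one.
omega_e  = np.array([0.0, 0.0, 0.10])   # vision-based estimate
omega_e2 = np.array([0.0, 0.0, 0.13])   # radar-based estimate
P1 = np.eye(3) * 2.0                    # vision error covariance
P2 = np.eye(3) * 1.0                    # radar error covariance
fused = fuse_estimates(omega_e, P1, omega_e2, P2)
# effective weights: w1 = 1/3, w2 = 2/3 -> fused z-component = 0.12
```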
- in the prior art, usually only the normalized or scaled first translational velocity vector of the sensor's self-motion is obtained.
- the embodiment of the present application can accurately obtain the second rotational velocity vector ω e and the complete second translational velocity vector t e of the carrier, that is, recover the missing information through the external parameters of the sensor.
- thereby, the motion velocities of the carrier on which the sensor is located and of the sensor are estimated more accurately, which improves the accuracy of self-motion estimation of the sensor or of the carrier on which the sensor is located, and thus improves the safety of assisted driving, automatic driving, or unmanned driving.
- FIG. 6 is a schematic structural diagram of an apparatus for self-motion estimation provided by an embodiment of the present application.
- the apparatus for self-motion estimation may include a first obtaining unit 601 and a first determining unit 602, and the detailed description of each unit is as follows.
- the first obtaining unit 601 is configured to obtain the first rotational velocity vector ω s of the sensor's self-motion and the normalized or scaled first translational velocity vector.
- the first determining unit 602 is configured to determine the second rotational velocity vector ω e and the second translational velocity vector t e of the carrier according to the first rotational velocity vector ω s , the first translational velocity vector, and the external parameters of the sensor, where the carrier is the carrier on which the sensor is located, and the external parameters of the sensor include the conversion relationship between the sensor coordinate system and the carrier coordinate system.
- the conversion relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
- the first determining unit 602 is configured to determine the second rotational velocity vector ω e of the carrier according to the rotation matrix R and the first rotational velocity vector ω s .
- the first determining unit 602 is configured to determine the instantaneous velocity component caused by the rotation of the carrier according to the second rotational velocity vector ω e and the translation vector r; determine the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system according to the rotation matrix R and the first translational velocity vector; and determine the second translational velocity vector t e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
- the first determining unit 602 is configured to determine the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector, and determine the second translational velocity vector t e of the carrier according to the magnitude of the translational velocity vector of the sensor coordinate system, the normalized instantaneous velocity vector, and the instantaneous velocity component.
- the second translational velocity vector t e of the carrier is the product of the magnitude s and the normalized instantaneous velocity vector minus the instantaneous velocity component t R , where s is the magnitude of the translational velocity vector of the sensor coordinate system and t R is the instantaneous velocity component.
- the apparatus further includes: a second determining unit, configured to determine the third translational velocity vector t s of the sensor's self-motion according to the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector.
- when the sensor is a visual sensor, the apparatus further includes: a second obtaining unit, configured to obtain a flow vector of a stationary obstacle, where the flow vector includes a motion vector of the stationary obstacle on the image plane of the vision sensor; and a third determining unit, configured to determine the depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t s , and the first rotational velocity vector ω s .
- for details, each unit may also refer to the corresponding description of the method embodiment shown in FIG. 3 .
- FIG. 7 is an apparatus 700 for self-motion estimation provided by an embodiment of the present application.
- the apparatus 700 includes at least one processor 701 and a communication interface 702 , and optionally, further includes a memory 703 .
- the memory 703 is used to store computer programs, and the at least one processor 701 , the memory 703 and the communication interface 702 are connected to each other through a bus 704 .
- the memory 703 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (compact disc read-only memory, CD-ROM); the memory 703 is used for storing related computer programs and data.
- Communication interface 702 is used to receive and transmit data.
- the processor 701 may be one or more central processing units (central processing units, CPUs). In the case where the processor 701 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
- the processor 701 in the device 700 reads the computer program stored in the memory 703 to perform the following operations:
- obtain the first rotational velocity vector ω s of the sensor's self-motion and the normalized or scaled first translational velocity vector; and determine the second rotational velocity vector ω e and the second translational velocity vector t e of the carrier according to the first rotational velocity vector ω s , the first translational velocity vector, and the external parameters of the sensor, where the carrier is the carrier on which the sensor is located, and the external parameters of the sensor include the conversion relationship between the sensor coordinate system and the carrier coordinate system.
- the conversion relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
- the processor is configured to determine the second rotational velocity vector ω e of the carrier according to the rotation matrix R and the first rotational velocity vector ω s .
- the processor is configured to determine the instantaneous velocity component caused by the rotation of the carrier according to the second rotational velocity vector ω e and the translation vector r; determine the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system according to the rotation matrix R and the first translational velocity vector; and determine the second translational velocity vector t e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
- the processor is configured to determine the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector, and determine the second translational velocity vector t e of the carrier according to the magnitude of the translational velocity vector of the sensor coordinate system, the normalized instantaneous velocity vector, and the instantaneous velocity component.
- the second translational velocity vector t e of the carrier is the product of the magnitude s and the normalized instantaneous velocity vector minus the instantaneous velocity component t R , where s is the magnitude of the translational velocity vector of the sensor coordinate system and t R is the instantaneous velocity component.
- the processor is further configured to determine the third translational velocity vector t s of the sensor's self-motion according to the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector.
- when the sensor is a vision sensor, the processor is further configured to acquire a flow vector of a stationary obstacle, where the flow vector includes the motion vector of the stationary obstacle on the image plane of the vision sensor, and determine the depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t s , and the first rotational velocity vector ω s .
- An embodiment of the present application further provides a chip system, where the chip system includes at least one processor and a communication interface; optionally, the chip system further includes a memory, and the at least one processor is configured to call a computer program stored in the memory.
- Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed, the method flow shown in FIG. 3 is implemented.
- the embodiment of the present application further provides a computer program product, when the computer program product is executed, the method flow shown in FIG. 3 is realized.
- An embodiment of the present application further provides a terminal, where the terminal includes at least one processor and a communication interface, and optionally a memory, where the at least one processor is configured to call a computer program stored in the at least one memory, so that the method flow shown in FIG. 3 is implemented.
- the terminal may be a vehicle, a ship, a satellite, an unmanned aerial vehicle, a robot, a smart home device, or a smart manufacturing device.
- An embodiment of the present application further provides a sensor, the sensor includes at least one processor and a communication interface, optionally, the sensor further includes a memory, and the at least one processor is configured to call a computer program stored in at least one memory , so that the method flow shown in FIG. 3 can be realized.
- the aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store computer program code.
Abstract
A self-motion estimation method and a related apparatus. The method includes: obtaining a first rotational velocity vector ω s of a sensor's self-motion and a normalized or scaled first translational velocity vector (I); and determining a second rotational velocity vector ω e and a second translational velocity vector t e of a carrier according to the first rotational velocity vector ω s , the first translational velocity vector (I), and external parameters of the sensor, where the carrier is the carrier on which the sensor is located, and the external parameters of the sensor include a conversion relationship between a sensor coordinate system and a carrier coordinate system. With this method, the motion velocity of the sensor or of the carrier on which the sensor is located can be determined, improving the accuracy of self-motion estimation of the sensor or the carrier on which the sensor is located, and thereby improving the safety of assisted driving, automatic driving, or unmanned driving.
Description
This application claims priority to Chinese Patent Application No. 202010822885.9, filed with the Chinese Patent Office on August 13, 2020 and entitled "Self-Motion Estimation Method and Related Apparatus", which is incorporated herein by reference in its entirety.
This application relates to the field of sensor technologies, and in particular, to a self-motion estimation method and a related apparatus.
An advanced driver-assistance system (ADAS) or an autonomous driving (AD) system is usually configured with multiple sensors, such as cameras, for perceiving surrounding environment information, including moving targets and stationary targets. For example, moving targets include vehicles and pedestrians; stationary targets include obstacles, guardrails, lamp posts, buildings, and the like. Moving targets and stationary targets are usually analyzed and processed with different methods: for example, moving targets are classified, recognized, and tracked, while stationary targets are classified and recognized, so as to provide additional information for unmanned driving, such as obstacle avoidance and drivable areas. A sensor can usually be installed on different carrier platforms, such as vehicles, ships, satellites, unmanned aerial vehicles, and robots, and the sensor moves together with the carrier platform on which it is located. On one hand, the motion of the sensor prevents moving targets and stationary targets from being analyzed independently; therefore, the motion of the sensor needs to be estimated to separate moving targets from stationary targets. On the other hand, the tracking of a moving target is usually model-based, and the model is usually assumed relative to the ground or a geodetic coordinate system, but the motion of the sensor will invalidate the model or degrade the tracking performance; in this case, the motion of the sensor or of the sensor carrier needs to be compensated. In addition, for different practical scenarios, positioning and tracking of the sensor's motion platform through sensor motion estimation is also crucial.
Therefore, how to determine the motion of a sensor and of the carrier on which the sensor is located is a technical problem being solved by those skilled in the art.
Summary of the Invention
The embodiments of the present application disclose a self-motion estimation method and a related apparatus, which can determine the velocities of a sensor and of the carrier on which the sensor is located, and improve the accuracy of self-motion estimation of the sensor or the carrier on which the sensor is located.
A first aspect of the embodiments of the present application discloses a self-motion estimation method, including: determining a second rotational velocity vector ω e and a second translational velocity vector t e of a carrier according to the first rotational velocity vector ω s , the first translational velocity vector, and external parameters of the sensor, where the carrier is the carrier on which the sensor is located, and the external parameters of the sensor include a conversion relationship between a sensor coordinate system and a carrier coordinate system.
In the above method, the second rotational velocity vector ω e and the second translational velocity vector t e of the carrier are determined according to the first rotational velocity vector ω s of the sensor's self-motion, the normalized or scaled first translational velocity vector, and the conversion relationship between the sensor coordinate system and the carrier coordinate system, where the second translational velocity vector t e is a complete translational velocity vector. In the prior art, usually only the normalized or scaled first translational velocity vector of the sensor's self-motion is obtained, which usually lacks the information of one degree of freedom, whereas the embodiment of the present application can accurately obtain the second rotational velocity vector ω e and the complete second translational velocity vector t e of the carrier, that is, recover the missing information through the external parameters, thereby estimating the motion velocities of the carrier on which the sensor is located and of the sensor more accurately, improving the accuracy of self-motion estimation of the sensor or of the carrier on which the sensor is located, and thus improving the safety of assisted driving, automatic driving, or unmanned driving.
Optionally, in an implementation, the conversion relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
Optionally, in another implementation, determining the second rotational velocity vector ω e and the second translational velocity vector t e of the carrier according to the first rotational velocity vector ω s , the first translational velocity vector, and the external parameters of the sensor includes: determining the second rotational velocity vector ω e of the carrier according to the rotation matrix R and the first rotational velocity vector ω s .
In the above method, the second rotational velocity vector ω e of the carrier is determined according to the external parameters of the sensor coordinate system relative to the carrier coordinate system, that is, the rotation matrix R (or a quaternion or Euler angles), and the first rotational velocity vector ω s . The external parameters of the sensor relative to the carrier can be effectively utilized to obtain the rotational angular velocity of the carrier on which the sensor is located, effectively improving the accuracy of estimating the rotational velocity of the carrier on which the sensor is located.
Optionally, in another implementation, determining the second rotational velocity vector ω e and the second translational velocity vector t e of the carrier according to the first rotational velocity vector ω s , the first translational velocity vector, and the external parameters of the sensor includes: determining the instantaneous velocity component caused by the rotation of the carrier according to the second rotational velocity vector ω e and the translation vector r; determining the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system according to the rotation matrix R and the first translational velocity vector; and determining the second translational velocity vector t e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
Optionally, in another implementation, determining the second translational velocity vector t e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector includes: determining the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector; and determining the second translational velocity vector t e of the carrier according to the magnitude of the translational velocity vector of the sensor coordinate system, the normalized instantaneous velocity vector, and the instantaneous velocity component.
In the above method, the third translational velocity vector t s of the sensor's self-motion is determined from the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector, where the first translational velocity vector is normalized or scaled, and the third translational velocity vector t s is a complete translational velocity vector. In the prior art, only the normalized or scaled first translational velocity vector can be obtained. Compared with the prior art, the embodiment of the present application provides a method for determining the complete third translational velocity vector t s of the sensor's self-motion in the absence of scale information.
Optionally, in another implementation, the sensor is a visual sensor, and the method further includes: obtaining a flow vector of a stationary obstacle, where the flow vector includes a motion vector of the stationary obstacle on the image plane of the visual sensor; and determining the depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t s , and the first rotational velocity vector ω s .
In the above method, the depth z of the stationary obstacle is determined through the flow vector of the stationary obstacle, the third translational velocity vector t s , and the first rotational velocity vector ω s . In the prior art, there is a scale problem: the depth information is coupled with the components of the translational velocity, so an accurate depth estimate usually cannot be obtained. Compared with the prior art, the embodiment of the present application can obtain a more accurate depth z by using the flow vector.
A second aspect of the embodiments of the present application discloses a self-motion estimation apparatus, including: a first determining unit, configured to determine a second rotational velocity vector ω e and a second translational velocity vector t e of a carrier according to the first rotational velocity vector ω s , the first translational velocity vector, and external parameters of the sensor, where the carrier is the carrier on which the sensor is located, and the external parameters of the sensor include a conversion relationship between a sensor coordinate system and a carrier coordinate system.
In the above apparatus, the second rotational velocity vector ω e and the second translational velocity vector t e of the carrier are determined according to the first rotational velocity vector ω s of the sensor's self-motion, the normalized or scaled first translational velocity vector, and the conversion relationship between the sensor coordinate system and the carrier coordinate system, where the second translational velocity vector t e is a complete translational velocity vector. In the prior art, usually only the normalized or scaled first translational velocity vector of the sensor's self-motion is obtained, which usually lacks the information of one degree of freedom. Therefore, the embodiment of the present application can accurately obtain the second rotational velocity vector ω e and the complete second translational velocity vector t e of the carrier, that is, recover the missing information through the external parameters of the sensor, thereby estimating the motion velocities of the carrier on which the sensor is located and of the sensor more accurately, improving the accuracy of self-motion estimation of the sensor or of the carrier on which the sensor is located, and thus improving the safety of assisted driving, automatic driving, or unmanned driving.
Optionally, in an implementation, the conversion relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
Optionally, in another implementation, the first determining unit is configured to determine the second rotational velocity vector ω e of the carrier according to the rotation matrix R and the first rotational velocity vector ω s .
In the above apparatus, the second rotational velocity vector ω e of the carrier is determined according to the external parameters of the sensor coordinate system relative to the carrier coordinate system, that is, the rotation matrix R (or a quaternion or Euler angles), and the first rotational velocity vector ω s . The external parameters of the sensor relative to the carrier can be effectively utilized to obtain the rotational angular velocity of the carrier on which the sensor is located, effectively improving the accuracy of estimating the rotational velocity of the carrier on which the sensor is located.
可选的,在又一种实现方式中,所述第一确定单元,用于根据所述第二转动速度矢量ω
e和所述平移矢量r确定所述载体转动引起的瞬时速度成分;根据所述旋转矩阵R和所述第一平动速度矢量
确定所述传感器相对于所述载体坐标系的归一化瞬时速度矢量;根据所述瞬时速度成分和所述归一化瞬时速度矢量确定所述载体的第二平动速度矢量t
e。
可选的,在又一种实现方式中,所述第一确定单元,用于根据所述瞬间速度成分和所述归一化瞬时速度矢量确定所述传感器坐标系的平动速度矢量的幅度;根据所述传感器坐标系的平动速度矢量的幅度、所述归一化瞬时速度矢量以及所述瞬时速度成分确定所述载体的第二平动速度矢量t
e。
In the above apparatus, the third translational velocity vector t_s of the sensor's ego-motion is determined from the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector t̂′, where t̂′ is normalized or scaled and t_s is a complete translational velocity vector. In the prior art, only the normalized or scaled first translational velocity vector t̂′ can be obtained. Compared with the prior art, the embodiments of the present application provide a method for determining the complete third translational velocity vector t_s of the sensor's ego-motion in the absence of scale information.
Optionally, in yet another implementation, the sensor is a visual sensor, and the apparatus further includes: a second obtaining unit, configured to obtain a flow vector of a stationary obstacle, the flow vector including a motion vector of the stationary obstacle on the image plane of the visual sensor; and a third determining unit, configured to determine a depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s.
In the above apparatus, the depth z of the stationary obstacle is determined from the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s. In the prior art there is a scale problem: the depth information is coupled with the components of the translational velocity, so an accurate depth estimate usually cannot be obtained. Compared with the prior art, the embodiments of the present application can obtain a more accurate depth z by using the optical flow vector.
A third aspect of the embodiments of the present application discloses an ego-motion estimation apparatus, including at least one processor and a communication interface, and optionally a memory. The memory is configured to store a computer program, and the at least one processor invokes the computer program to perform the following operations:
obtaining a first rotational velocity vector ω_s and a normalized or scaled first translational velocity vector t̂′ of the sensor's ego-motion; and
determining a second rotational velocity vector ω_e and a second translational velocity vector t_e of the carrier according to the first rotational velocity vector ω_s, the first translational velocity vector t̂′, and extrinsic parameters of the sensor, where the carrier is the carrier on which the sensor is mounted, and the extrinsic parameters of the sensor include a conversion relationship between the sensor coordinate system and the carrier coordinate system.
In the above apparatus, the second rotational velocity vector ω_e and the second translational velocity vector t_e of the carrier are determined according to the first rotational velocity vector ω_s of the sensor's ego-motion, the normalized or scaled first translational velocity vector t̂′, and the conversion relationship between the sensor coordinate system and the carrier coordinate system, where t_e is a complete translational velocity vector. The prior art typically obtains only the normalized or scaled first translational velocity vector t̂′ of the sensor's ego-motion, which lacks one degree of freedom of information, whereas the embodiments of the present application can accurately obtain the carrier's second rotational velocity vector ω_e and the complete second translational velocity vector t_e, i.e., the missing information is recovered through the sensor's extrinsic parameters, so that the motion velocities of the sensor and its carrier are estimated more accurately, the accuracy of ego-motion estimation is improved, and the safety of automated driving is thereby improved.
Optionally, in one implementation, the conversion relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
Optionally, in yet another implementation, the at least one processor is configured to determine the second rotational velocity vector ω_e of the carrier according to the rotation matrix R and the first rotational velocity vector ω_s.
In the above apparatus, the second rotational velocity vector ω_e of the carrier is determined according to the extrinsic parameters of the sensor coordinate system relative to the carrier coordinate system (i.e., the rotation matrix R, or a quaternion, or Euler angles) and the first rotational velocity vector ω_s, which effectively uses the sensor's extrinsic parameters relative to the carrier to obtain the rotational angular velocity of the carrier and improves the accuracy of estimating the carrier's rotational velocity.
Optionally, in yet another implementation, the at least one processor is configured to determine the instantaneous velocity component caused by the rotation of the carrier according to the second rotational velocity vector ω_e and the translation vector r; determine the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system according to the rotation matrix R and the first translational velocity vector t̂′; and determine the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
Optionally, in yet another implementation, the at least one processor is configured to determine the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector, and determine the second translational velocity vector t_e of the carrier according to that magnitude, the normalized instantaneous velocity vector, and the instantaneous velocity component.
In the above apparatus, the third translational velocity vector t_s of the sensor's ego-motion is determined from the magnitude s of the translational velocity vector of the sensor coordinate system and the first translational velocity vector t̂′, where t̂′ is normalized or scaled and t_s is a complete translational velocity vector. In the prior art, only the normalized or scaled first translational velocity vector t̂′ can be obtained. Compared with the prior art, the embodiments of the present application provide a method for determining the complete third translational velocity vector t_s of the sensor's ego-motion in the absence of scale information.
Optionally, in yet another implementation, the sensor is a visual sensor, and the at least one processor is further configured to obtain a flow vector of a stationary obstacle, the flow vector including a motion vector of the stationary obstacle on the image plane of the visual sensor, and determine a depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s.
In the above apparatus, the depth z of the stationary obstacle is determined from the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s. In the prior art there is a scale problem: the depth information is coupled with the components of the translational velocity, so an accurate depth estimate usually cannot be obtained. Compared with the prior art, the embodiments of the present application can obtain a more accurate depth z by using the optical flow vector.
A fourth aspect of the embodiments of the present application discloses a computer program product; when the computer program product runs on a processor, the method described in any one of the above aspects or any optional solution thereof is implemented.
A fifth aspect of the embodiments of the present application discloses a chip system including at least one processor and a communication interface; optionally, the chip system further includes a memory, and the at least one processor is configured to invoke a computer program stored in at least one memory, so that the apparatus in which the chip system is located implements the method described in any one of the above aspects or any optional solution thereof.
A sixth aspect of the embodiments of the present application discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in any one of the above aspects or any optional solution thereof.
A seventh aspect of the embodiments of the present application discloses a terminal including at least one processor and a communication interface, and optionally a memory; the at least one processor is configured to invoke a computer program stored in at least one memory, so that the terminal implements the method described in any one of the above aspects or any optional solution thereof.
An eighth aspect of the embodiments of the present application discloses a sensor including at least one processor and a communication interface; optionally, the sensor further includes a memory, and the at least one processor is configured to invoke a computer program stored in at least one memory, so that the apparatus in which the sensor is located implements the method described in any one of the above aspects or any optional solution thereof.
The drawings used in the embodiments of the present application are described below.
FIG. 1 is a schematic structural diagram of an ego-motion estimation system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an inertial measurement unit according to an embodiment of the present application;
FIG. 3 is a flowchart of an ego-motion estimation method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a representation of a sensor coordinate system according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a representation of a carrier coordinate system according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an ego-motion estimation apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of another ego-motion estimation apparatus according to an embodiment of the present application.
The embodiments of the present application are described below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of an ego-motion estimation system according to an embodiment of the present application. The system includes a sensor 1001, a motion measurement module 1002, and a data processing module 1003. The sensor 1001 may be a visual sensor, for example an infrared thermal imaging sensor or a camera. The sensor 1001 is configured to provide visual measurement data such as images or video; the motion measurement module 1002 is configured to obtain motion measurement data from the sensor data, such as the first rotational velocity vector ω_s and the normalized or scaled first translational velocity vector t̂′ of the sensor's ego-motion; and the data processing module 1003 is configured to process the measurement data provided by the motion measurement module 1002. In general, the motion measurement module 1002 and the data processing module 1003 may reside in the same processor.
In one example, the sensor 1001, the motion measurement module 1002, and the data processing module 1003 may be fully or partially integrated, connected by wire or wirelessly; for example, all three are deployed on one processor system, in which case the ego-motion estimation system may be on the same vehicle, aircraft, satellite, or agent. In another example, the sensor 1001 is on a vehicle, aircraft, satellite, or agent, while the motion measurement module 1002 and/or the data processing module 1003 are in the cloud; correspondingly, the sensor 1001 sends its video or parameterized data, or the data provided by the motion measurement module 1002, to the cloud-based data processing module 1003, and the data processing module 1003 sends the processed result back to the vehicle, aircraft, satellite, or agent. Here, a vehicle may be, for example, a car, motorcycle, or bicycle; an aircraft may be a drone, helicopter, or jet; a satellite-borne platform may be a satellite; and an agent may be, for example, a robot.
At present, there are several ways to obtain motion velocity. In one approach, a vehicle is typically equipped with an inertial measurement unit (IMU), as shown in FIG. 2, which illustrates an inertial measurement unit. An IMU is a device that measures an object's three-axis attitude angles (or angular rates) and accelerations. Typically, an IMU contains three single-axis gyroscopes and three single-axis accelerometers: the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system, while the accelerometers detect the object's acceleration along the three independent axes of the carrier coordinate system; from the measured angular velocity and acceleration in three-dimensional space, the object's motion velocity and attitude can be solved. However, in this approach the object's velocity is obtained by integrating acceleration, and the accelerometer's measurement error accumulates over time, so there is an error accumulation problem and additional calibration with other sensors is required. Moreover, the IMUs typically used in vehicles have low accuracy, and high-accuracy IMUs are costly.
In another approach, a vehicle is typically equipped with a millimeter-wave radar sensor, which usually provides measurements such as range, azimuth, and radial velocity. Based on the azimuths and radial velocity components of stationary targets, the instantaneous velocity of the sensor relative to the ground can be obtained by least squares or other methods. However, this approach usually yields only two velocity components in the radar sensor coordinate system, namely the lateral and radial velocities, and cannot obtain the third velocity component; furthermore, although the radial velocity error is low, the lateral velocity error is high and can fail to meet system performance requirements. In addition, this method yields only one rotational rate component, the yaw rate, and cannot obtain the pitch rate or roll rate.
In yet another approach, a camera is mounted on the vehicle to provide consecutive images. From two or more frames, the (normalized or scaled) translational velocity vector and the rotational velocity vector of the sensor relative to the ground can be obtained using optical flow methods, feature point methods, or methods that directly optimize a photometric objective function. However, this approach has a scale problem: the depth and the components of the translational velocity are coupled, so a more accurate depth usually cannot be obtained, and consequently an accurate translational velocity cannot be obtained, or only a normalized or scaled translational velocity can be obtained.
Referring to FIG. 3, FIG. 3 shows an ego-motion estimation method according to an embodiment of the present application. The method may be executed by a sensor system, a fusion perception system, or a planning/control system integrating such systems, such as an assisted or automated driving system; alternatively, it may be executed by software or hardware (for example, a data processing apparatus connected to, or integrated with, the corresponding sensor by wire or wirelessly). The different steps below may be implemented in a centralized or in a distributed manner. The method includes but is not limited to the following steps.
Specifically, the first rotational velocity vector ω_s and the first translational velocity vector t̂′ of the sensor's ego-motion may be obtained by wire or wirelessly: for example, read directly within the same processor module; obtained over a bus, such as a peripheral component interconnect (PCI) bus, within the same hardware system; obtained over a network, such as the various types of in-vehicle controller area network (CAN); or obtained wirelessly, for example via cloud communication. Other forms of acquisition are also possible, and the embodiments of the present application are not limited in this respect.
Specifically, the first rotational velocity vector ω_s and the first translational velocity vector t̂′ of the sensor's ego-motion may be determined from feature points, lines, planes, or regions in the data acquired by the sensor, based on the optical or geometric characteristics of the data, for example using the eight-point method, the five-point method, homography, or optical flow; the embodiments of the present application are not limited in this respect.
For example, the first rotational velocity vector may be expressed as ω_s = [ω_s,x ω_s,y ω_s,z]^T, where ω_s,x, ω_s,y, and ω_s,z are the components of the first rotational velocity of the sensor's ego-motion along the three coordinate axes. The normalized or scaled first translational velocity vector may be expressed as t̂′. Specifically, t̂′ may be the normalized or scale-adjusted vector value of the first translational velocity vector t′ of the sensor's ego-motion; for example, the normalized or scaled first translational velocity vector is
t̂′ = t′/s′,
where t′ is the first translational velocity vector of the sensor's ego-motion, which may be expressed as
t′ = [t′_x t′_y t′_z]^T,
and the normalization value s′ or scaling value s′ may be the magnitude or norm of the vector t′, i.e. s′ = ‖t′‖.
Alternatively, the normalization value s′ or scaling value s′ may be a component of the vector t′, for example its third component t′_z, i.e.:
s′ = t′_z.
The normalization value s′ or scaling value s′ may also be another unknown scalar, such as a depth, which is not limited here.
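For illustration only, the two normalization choices described above can be sketched in a few lines of Python; the numeric vectors below are hypothetical and not part of the application:

```python
import numpy as np

t_prime = np.array([0.6, 0.8, 0.0])     # hypothetical full translational velocity t'

# Normalization by the norm: s' = ||t'||
s_norm = np.linalg.norm(t_prime)
t_hat_by_norm = t_prime / s_norm        # unit-length direction vector

# Scaling by one component, e.g. s' = t'_z (only meaningful when t'_z != 0)
t_prime2 = np.array([0.5, 0.2, 2.0])
t_hat_by_z = t_prime2 / t_prime2[2]     # third component becomes 1

print(t_hat_by_norm)  # [0.6 0.8 0. ]
print(t_hat_by_z)     # [0.25 0.1  1.  ]
```

Either choice discards one degree of freedom (the overall scale), which is exactly the information the extrinsic parameters recover later in the method.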
Optionally, as one implementation, the sensor or the carrier on which it is mounted moves in a plane, such as the ground or a planar track. The first rotational velocity vector may then be expressed as ω_s = [0 0 ω_s,z]^T, where the component of the first rotational velocity along the X axis is ω_s,x = 0 and the component along the Y axis is ω_s,y = 0; the normalized or scaled first translational velocity vector may be expressed as t̂′ = [t̂′_x t̂′_y 0]^T. In this case, the first rotational velocity vector ω_s can be reduced to the scalar ω_s,z, and t̂′ can be reduced to its two in-plane components.
Specifically, the second translational velocity vector t_e of the carrier is a complete translational velocity vector. The extrinsic parameters of the sensor include the conversion relationship between the sensor coordinate system and the carrier coordinate system, and may include the translation of the sensor coordinate system relative to the carrier coordinate system and/or the rotation parameters of the sensor coordinate system relative to the carrier coordinate system; or the translation of the carrier coordinate system relative to the sensor coordinate system and/or the rotation parameters of the carrier coordinate system relative to the sensor coordinate system.
Specifically, the translation of the sensor coordinate system relative to the carrier coordinate system may be the translation vector of the origin of the sensor coordinate system relative to the origin of the carrier coordinate system, or the position vector of the origin of the sensor coordinate system in the carrier coordinate system, which is not limited here. The rotation of the sensor coordinate system relative to the carrier coordinate system may be represented by a rotation matrix, a quaternion, or Euler angles, which is not limited here.
Specifically, the translation of the carrier coordinate system relative to the sensor coordinate system may be the translation vector of the origin of the carrier coordinate system relative to the origin of the sensor coordinate system, or the position vector of the origin of the carrier coordinate system in the sensor coordinate system, which is not limited here. The rotation of the carrier coordinate system relative to the sensor coordinate system may likewise be represented by a rotation matrix, a quaternion, or Euler angles, which is not limited here.
Specifically, the extrinsic parameters of the sensor coordinate system relative to the carrier coordinate system are related to the mounting position of the sensor on the carrier. For example, when the sensor is mounted at the center of the bumper of the vehicle body (the carrier), the sensor coordinate system is as shown in FIG. 4, typically with the X axis pointing forward in a right-handed coordinate system; FIG. 4 shows one representation of a sensor coordinate system, denoted O_1XYZ. The carrier coordinate system is fixed to the carrier, with its origin at the carrier center; for example, the origin of a vehicle coordinate system is at the center of the rear axle, as shown in FIG. 5, typically with the X axis pointing forward in a right-handed coordinate system. FIG. 5 shows one representation of a carrier coordinate system, denoted O_2X_bY_bZ_b, where O_2X_b points right along the carrier's lateral axis, O_2Y_b points forward along the carrier's longitudinal axis, and O_2Z_b points up along the carrier's vertical axis.
Optionally, as one implementation, the conversion relationship may include the rotation matrix R and the translation vector r of the sensor coordinate system relative to the carrier coordinate system. If the target position vector measured by the sensor is p_b, then its position vector p_e in the carrier coordinate system may be expressed as:
p_e = R·p_b + r.
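For illustration only, the coordinate transform above can be checked numerically; the rotation, offset, and target position below are hypothetical values, not parameters from the application:

```python
import numpy as np

# Rotation of the sensor frame relative to the carrier frame:
# a 90-degree yaw about the Z axis, plus a mounting offset r.
yaw = np.pi / 2
R = np.array([
    [np.cos(yaw), -np.sin(yaw), 0.0],
    [np.sin(yaw),  np.cos(yaw), 0.0],
    [0.0,          0.0,         1.0],
])
r = np.array([2.0, 0.0, 0.5])    # sensor origin expressed in the carrier frame

p_b = np.array([1.0, 0.0, 0.0])  # target as measured in the sensor frame
p_e = R @ p_b + r                # the same target in the carrier frame

print(p_e)  # [2.  1.  0.5]
```

The same R and r are the extrinsic parameters reused below to map the ego-motion quantities between the two frames.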
Optionally, as one implementation, when the sensor or its carrier moves in a plane, the coordinate rotation above can be represented as a rotation about the z axis.
In one implementation, the second rotational velocity vector ω_e of the carrier is determined according to the rotation matrix R and the first rotational velocity vector ω_s.
Specifically, the second rotational velocity vector ω_e of the carrier is the product of the rotation matrix R and the first rotational velocity vector ω_s. For example, given the rotation matrix R and the first rotational velocity vector ω_s, the second rotational velocity vector of the carrier is determined as:
ω_e = R·ω_s.
Since the rotation of the sensor coordinate system relative to the carrier coordinate system may be represented by a rotation matrix, a quaternion, or Euler angles, the second rotational velocity vector of the carrier may likewise be determined from a quaternion or Euler angles, which is not elaborated further here.
In one example, each axis of the sensor coordinate system has the same direction as the corresponding axis of the carrier coordinate system; in this case the second rotational velocity vector equals the first rotational velocity vector, ω_e = ω_s, and R is the identity matrix I.
Specifically, the extrinsic parameters of the sensor may include the translation vector r of the sensor coordinate system relative to the carrier coordinate system and/or rotation parameters; the rotation parameters may be, for example, a rotation matrix, a quaternion, or Euler angles.
Where the extrinsic parameters of the sensor include the translation vector r, in one implementation, the instantaneous velocity component caused by the rotation of the carrier is determined according to the second rotational velocity vector ω_e and the translation vector r; the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system is determined according to the rotation matrix R and the normalized or scaled first translational velocity vector t̂′; and the second translational velocity vector t_e of the carrier is determined according to the instantaneous velocity component and the normalized instantaneous velocity vector.
Specifically, determining the instantaneous velocity component t_R caused by the rotation of the carrier according to the second rotational velocity vector ω_e and the translation vector r may mean that t_R is the cross product of ω_e and r:
t_R = ω_e × r,
where × denotes the vector cross product. Specifically, this may be written component-wise as:
t_R = [ω_e,y·r_z − ω_e,z·r_y, ω_e,z·r_x − ω_e,x·r_z, ω_e,x·r_y − ω_e,y·r_x]^T,
where ω_e = [ω_e,x ω_e,y ω_e,z]^T. Alternatively, the instantaneous velocity component t_R caused by the rotation of the carrier is the negative of the cross product of the translation vector r and the second rotational velocity vector ω_e:
t_R = −r × ω_e,
where r = [r_x r_y r_z]^T. Other forms are of course also possible, and the embodiments of the present application are not limited in this respect.
The instantaneous velocity component t_R caused by the rotation of the carrier may be a three-dimensional vector, t_R = [t_R,x t_R,y t_R,z]^T, or t_R may be a two-dimensional vector.
Optionally, as one implementation, the sensor or its carrier moves in a plane, such as the ground or a planar track. In this case, the first rotational velocity vector ω_s can be reduced to ω_s,z, the first translational velocity vector t̂′ can be reduced to its in-plane components, and the translation vector r = [r_x r_y r_z]^T can be reduced to r_x and r_y, with r_z = 0. The instantaneous velocity component t_R then simplifies to:
t_R,x = −ω_e,z·r_y,
t_R,y = ω_e,z·r_x,
t_R,z = 0.
t_R can thus be reduced to a two-dimensional vector with the two components t_R,x and t_R,y.
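For illustration only, the rotation-induced component and its planar simplification above can be checked numerically; the yaw rate and lever arm below are hypothetical values:

```python
import numpy as np

omega_e = np.array([0.0, 0.0, 0.2])  # planar case: only the yaw component omega_e,z (rad/s)
r = np.array([1.5, 0.8, 0.0])        # sensor position in the carrier frame, with r_z = 0

# Instantaneous velocity component caused by carrier rotation:
# t_R = omega_e x r  (equivalently  t_R = -(r x omega_e))
t_R = np.cross(omega_e, r)
print(t_R)  # [-0.16  0.3   0.  ]

# Planar simplification: t_R,x = -omega_e,z * r_y,  t_R,y = omega_e,z * r_x,  t_R,z = 0
assert np.allclose(t_R, [-omega_e[2] * r[1], omega_e[2] * r[0], 0.0])
assert np.allclose(t_R, -np.cross(r, omega_e))
```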
Specifically, the rotation parameters of the sensor relative to the carrier may be, for example, a rotation matrix, a quaternion, or Euler angles. Taking the rotation matrix R as an example, determining the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system according to R and the normalized or scaled first translational velocity vector t̂′ may mean that the normalized instantaneous velocity vector t̂ of the sensor relative to the carrier coordinate system is the product of the rotation matrix R and t̂′, i.e.:
t̂ = R·t̂′.
In one example, determining the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector may mean that t_e is the difference between the instantaneous velocity vector and the instantaneous velocity component, where the instantaneous velocity vector is determined from the normalized instantaneous velocity vector and the instantaneous velocity component.
Specifically, the second translational velocity vector of the carrier is:
t_e = s·t̂ − t_R.
Specifically, determining the instantaneous velocity vector from the normalized instantaneous velocity vector and the instantaneous velocity component includes: the instantaneous velocity vector is determined from the normalized instantaneous velocity vector and the magnitude of the translational velocity vector of the sensor coordinate system, where the magnitude of the translational velocity vector of the sensor coordinate system is determined from the second component of the instantaneous velocity component t_R and the second component of the normalized instantaneous velocity vector; the magnitude of the translational velocity vector of the sensor coordinate system may be:
s = t_R,y / t̂_y.
In another example, determining the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector includes: determining the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector; and determining the second translational velocity vector t_e of the carrier according to that magnitude, the normalized instantaneous velocity vector, and the instantaneous velocity component.
Specifically, the magnitude of the translational velocity vector of the sensor coordinate system is determined according to the instantaneous velocity component and the normalized instantaneous velocity vector as:
s = t_R,y / t̂_y,
where s denotes the magnitude of the translational velocity vector of the sensor coordinate system.
Specifically, the second translational velocity vector t_e of the carrier is determined according to the magnitude, the normalized instantaneous velocity vector, and the instantaneous velocity component as:
t_e = s·t̂ − t_R,
where s is the magnitude of the translational velocity vector of the sensor coordinate system, t̂ is the normalized instantaneous velocity vector, and t_R is the instantaneous velocity component. Here t_e,y is 0: by design, when the vehicle turns, the velocity at the center of the rear axle along the y axis of the carrier coordinate system must be 0; if t_e,y were not 0, the vehicle would skid. Therefore t_e,y = 0.
t_e may be a three-dimensional vector, t_e = [t_e,x t_e,y t_e,z]^T, or t_e may be a two-dimensional vector with the two velocity components t_e,x and t_e,y.
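For illustration only, the scale-recovery step can be sketched numerically under the relation reconstructed above (t_e = s·t̂ − t_R together with the non-holonomic constraint t_e,y = 0); the vectors below are hypothetical values:

```python
import numpy as np

t_hat = np.array([0.6, 0.8, 0.0])    # normalized instantaneous velocity in the carrier frame
t_R   = np.array([-0.16, 0.3, 0.0])  # instantaneous velocity component caused by rotation

# The constraint t_e,y = 0 fixes the missing scale:
# t_e = s * t_hat - t_R  with  t_e[1] = 0  =>  s = t_R[1] / t_hat[1]
s = t_R[1] / t_hat[1]
t_e = s * t_hat - t_R

print(s)    # 0.375
print(t_e)  # [0.385 0.    0.   ]
assert abs(t_e[1]) < 1e-12  # the y component of the carrier velocity vanishes
```

This is how one scalar constraint from the carrier geometry restores the degree of freedom that normalization removed.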
Optionally, this embodiment of the present application may further include step S303:
Specifically, the magnitude s of the translational velocity vector of the sensor coordinate system is computed as described in step S302 and is not repeated here. The third translational velocity vector t_s of the sensor's ego-motion is a complete translational velocity vector. Determining t_s of the sensor's ego-motion according to the magnitude s of the translational velocity vector of the sensor coordinate system and the normalized or scaled first translational velocity vector t̂′ includes:
t_s = s·R·t̂′.
It should be noted that the third translational velocity vector of the sensor's ego-motion is the instantaneous velocity vector of the sensor; or, the third translational velocity vector of the sensor's ego-motion is the instantaneous velocity vector of the origin of the sensor coordinate system.
Further, the third translational velocity vector of the sensor's ego-motion may be converted into the sensor coordinate system and expressed as t′_s, e.g.:
t′_s = R^(−1)·t_s.
Optionally, this embodiment of the present application may further include step S304:
Step S304: determine the depth z of a stationary obstacle.
Specifically, in step S304, the sensor is a visual sensor.
Step S304 includes step S3041 and step S3042, as follows:
Step S3041: obtain the flow vector of the stationary obstacle.
Specifically, the flow vector includes the motion vector of the projection of the stationary obstacle on the image plane of the visual sensor. There are several ways to obtain the flow vector of a stationary obstacle; it may be obtained by optical flow, for example using the Lucas-Kanade optical flow algorithm or the Horn-Schunck optical flow algorithm, and the embodiments of the present application are not limited in this respect.
For ease of understanding, an embodiment of the present application provides a way of obtaining the flow vector of a stationary obstacle using the Lucas-Kanade optical flow method, as follows:
The flow vector corresponding to the position (p_x, p_y) of the stationary obstacle in the image can be computed as the optical flow (u, v) at (p_x, p_y); specifically, it may be obtained from the relation
[u v]^T = G^(−1)·b,
where G is the gradient matrix and b is the mismatch vector; specifically,
G = Σ_x Σ_y [I_x², I_x·I_y; I_x·I_y, I_y²], b = −Σ_x Σ_y I_t·[I_x, I_y]^T,
where I_x(x, y) and I_y(x, y) are the gradients of the image brightness at point (x, y) in the x and y directions, and I_t is the change of the image brightness over time; the sums run over the window [p_x − w_x, p_x + w_x] × [p_y − w_y, p_y + w_y] corresponding to the point (p_x, p_y), whose size is (2w_x + 1)(2w_y + 1), and w_x and w_y are preset parameters, for example w_x = w_y = 1, 2, 3, ….
It should be noted that even the Lucas-Kanade optical flow algorithm has multiple implementation forms, for example hierarchical estimation based on a pyramid structure, which is not elaborated here.
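For illustration only, a minimal single-window Lucas-Kanade step (no pyramid, synthetic images; the window helper and the test images are hypothetical, not from the application) can be sketched as follows:

```python
import numpy as np

def lk_flow(I0, I1, px, py, w):
    """Estimate the flow (u, v) at (px, py) between frames I0 -> I1
    over a (2w+1) x (2w+1) window, solving G [u v]^T = b."""
    Iy, Ix = np.gradient(I0)              # spatial brightness gradients (rows = y, cols = x)
    It = I1 - I0                          # temporal brightness change
    ys, xs = np.mgrid[py - w:py + w + 1, px - w:px + w + 1]
    ix, iy, it = Ix[ys, xs], Iy[ys, xs], It[ys, xs]
    G = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(it * ix), np.sum(it * iy)])
    return np.linalg.solve(G, b)

# Synthetic check: a smooth quadratic ramp shifted by (1, 1) pixels.
coords = np.arange(32, dtype=float)
X, Y = np.meshgrid(coords, coords)
I0 = X**2 / 64.0 + Y**2 / 64.0
I1 = (X - 1)**2 / 64.0 + (Y - 1)**2 / 64.0

u, v = lk_flow(I0, I1, 16, 16, 2)
print(round(u, 2), round(v, 2))  # close to (1, 1), with a small linearization bias
```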
Step S3042: determine the depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s.
Specifically, the third translational velocity vector t_s of the sensor's ego-motion can be converted into the sensor coordinate system and expressed as t′_s, and ω_s is the first rotational velocity vector of the sensor's ego-motion. The flow vector of the stationary obstacle may be expressed as (u, v).
Determining the depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t′_s, and the first rotational velocity vector ω_s may be based on the following relations:
u = s_1^T·t′_s / z + s_1^T·(ω_s × p) / f,
v = s_2^T·t′_s / z + s_2^T·(ω_s × p) / f,
where s_1 = [−f 0 x]^T, s_2 = [0 −f y]^T, p = [x y f]^T, × denotes the vector cross product, f is the camera focal length, and x, y is the pixel position in the image plane, with x ∈ [p_x − w_x, p_x + w_x] and y ∈ [p_y − w_y, p_y + w_y], where w_x and w_y are non-negative integers, w_x = 0, 1, 2, 3, 4, …; w_y = 0, 1, 2, 3, 4, ….
Specifically, the depth z of the stationary obstacle can be obtained from the following relation:
z = w_1(x, y)·s_1^T·t′_s / (u − s_1^T·(ω_s × p)/f) + w_2(x, y)·s_2^T·t′_s / (v − s_2^T·(ω_s × p)/f),
where w_1(x, y) and w_2(x, y) are weighting coefficients, which can be determined according to the estimation accuracy or error of the parameters u, v, t′_s, and ω_s above. Alternative forms of the above relation, for example using only the first component, only the second component, or a joint solution of both components, may likewise be used.
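For illustration only, the depth recovery can be checked for self-consistency under the motion-field relation reconstructed above (s_1 = [−f 0 x]^T, s_2 = [0 −f y]^T, p = [x y f]^T, equal weights w_1 = w_2 = 0.5); all numeric values below are hypothetical:

```python
import numpy as np

f = 500.0                            # focal length in pixels
x, y = 40.0, -25.0                   # pixel position relative to the principal point
t_s = np.array([0.2, 0.1, 5.0])      # third translational velocity t'_s, sensor frame (m/s)
w_s = np.array([0.01, -0.02, 0.05])  # first rotational velocity omega_s (rad/s)
z_true = 20.0                        # ground-truth depth (m)

s1 = np.array([-f, 0.0, x])
s2 = np.array([0.0, -f, y])
p = np.array([x, y, f])
rot1 = s1 @ np.cross(w_s, p) / f     # rotational part of the u component
rot2 = s2 @ np.cross(w_s, p) / f     # rotational part of the v component

# Synthesize the flow from the model, then invert each component for depth.
u = s1 @ t_s / z_true + rot1
v = s2 @ t_s / z_true + rot2
z_u = s1 @ t_s / (u - rot1)
z_v = s2 @ t_s / (v - rot2)
z = 0.5 * z_u + 0.5 * z_v
print(z)  # recovers z_true = 20.0
```

In practice u and v come from the optical-flow step with noise, and the weights would downweight the less reliable component instead of being equal.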
Optionally, this embodiment of the present application may further include step S305.
Specifically, the second rotational velocity vector ω_e and the second translational velocity vector t_e of the carrier may be obtained from data provided by the visual sensor, while a rotational velocity vector ω_e2 and a translational velocity vector t_e2 of the carrier may be obtained from data provided by other sensors, such as a radar sensor; one or more groups of rotational and translational velocity vectors of the carrier may be obtained from the other sensors.
For example, for ease of understanding, in this embodiment of the present application, in addition to obtaining the carrier's second rotational velocity vector ω_e and second translational velocity vector t_e from the visual sensor, a rotational velocity vector ω_e2 and a translational velocity vector t_e2 of the carrier are obtained from a radar sensor. The weighted rotational velocity vector of the carrier may then be obtained as a weighted combination of ω_e and ω_e2 (and, likewise, the weighted translational velocity vector as a weighted combination of t_e and t_e2).
Besides the above weighted form, other weighted forms are also possible, and the embodiments of the present application are not limited in this respect.
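For illustration only, one common choice of weights (inverse-variance weighting, an assumption here since the application leaves the weighting form open) can be sketched as follows, with hypothetical estimates and variances:

```python
import numpy as np

# Two independent estimates of the carrier rotational velocity (e.g. vision and radar)
w_e1, var1 = np.array([0.0, 0.0, 0.10]), 0.04  # visual estimate and its error variance
w_e2, var2 = np.array([0.0, 0.0, 0.14]), 0.01  # radar estimate and its error variance

# Inverse-variance weights, normalized to sum to 1: the more precise estimate dominates.
a1, a2 = 1.0 / var1, 1.0 / var2
w_bar = (a1 * w_e1 + a2 * w_e2) / (a1 + a2)

print(w_bar)  # [0.    0.    0.132]
```

The translational velocity vectors t_e and t_e2 can be fused in exactly the same way.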
In the method described in FIG. 3, the second rotational velocity vector ω_e and the second translational velocity vector t_e of the carrier are determined according to the first rotational velocity vector ω_s of the sensor's ego-motion, the normalized or scaled first translational velocity vector t̂′, and the conversion relationship between the sensor coordinate system and the carrier coordinate system, where t_e is a complete translational velocity vector. The prior art typically obtains only the normalized or scaled first translational velocity vector t̂′ of the sensor's ego-motion, which lacks one degree of freedom of information, whereas this embodiment of the present application can accurately obtain the carrier's second rotational velocity vector ω_e and the complete second translational velocity vector t_e, i.e., the missing information is recovered through the sensor's extrinsic parameters, so that the motion velocities of the sensor and its carrier are estimated more accurately, the accuracy of ego-motion estimation of the sensor or its carrier is improved, and the safety of assisted, automated, or unmanned driving is thereby improved.
The method of the embodiments of the present application has been described in detail above; the apparatus of the embodiments of the present application is provided below.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an ego-motion estimation apparatus according to an embodiment of the present application. The apparatus may include a first obtaining unit 601 and a first determining unit 602, described in detail as follows.
The first determining unit 602 is configured to determine the second rotational velocity vector ω_e and the second translational velocity vector t_e of the carrier according to the first rotational velocity vector ω_s, the first translational velocity vector t̂′, and the extrinsic parameters of the sensor, where the carrier is the carrier on which the sensor is mounted, and the extrinsic parameters of the sensor include the conversion relationship between the sensor coordinate system and the carrier coordinate system.
In one implementation, the conversion relationship includes: the rotation matrix R and the translation vector r of the sensor coordinate system relative to the carrier coordinate system.
In yet another implementation, the first determining unit 602 is configured to determine the second rotational velocity vector ω_e of the carrier according to the rotation matrix R and the first rotational velocity vector ω_s.
In yet another implementation, the first determining unit 602 is configured to determine the instantaneous velocity component caused by the rotation of the carrier according to the second rotational velocity vector ω_e and the translation vector r; determine the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system according to the rotation matrix R and the first translational velocity vector t̂′; and determine the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
Optionally, in yet another implementation, the first determining unit 602 is configured to determine the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector, and determine the second translational velocity vector t_e of the carrier according to that magnitude, the normalized instantaneous velocity vector, and the instantaneous velocity component.
Optionally, in yet another implementation, the second translational velocity vector of the carrier is t_e = s·t̂ − t_R.
Optionally, in yet another implementation, the sensor is a visual sensor, and the apparatus further includes: a second obtaining unit, configured to obtain a flow vector of a stationary obstacle, the flow vector including a motion vector of the stationary obstacle on the image plane of the visual sensor; and a third determining unit, configured to determine a depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s.
It should be noted that, for the implementation and beneficial effects of each unit, reference may also be made to the corresponding description of the method embodiment shown in FIG. 3.
Referring to FIG. 7, FIG. 7 shows an ego-motion estimation apparatus 700 according to an embodiment of the present application. The apparatus 700 includes at least one processor 701 and a communication interface 702, and optionally further includes a memory 703. The memory 703 is configured to store a computer program; the at least one processor 701, the memory 703, and the communication interface 702 are connected to one another through a bus 704.
The memory 703 includes but is not limited to random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), and is configured to store the related computer program and data. The communication interface 702 is configured to receive and send data.
The processor 701 may be one or more central processing units (CPUs); when the processor 701 is one CPU, it may be a single-core or a multi-core CPU.
The processor 701 in the apparatus 700 reads the computer program stored in the memory 703 to perform the following operations:
determining the second rotational velocity vector ω_e and the second translational velocity vector t_e of the carrier according to the first rotational velocity vector ω_s, the first translational velocity vector t̂′, and the extrinsic parameters of the sensor, where the carrier is the carrier on which the sensor is mounted, and the extrinsic parameters of the sensor include the conversion relationship between the sensor coordinate system and the carrier coordinate system.
Optionally, in one implementation, the conversion relationship includes: the rotation matrix R and the translation vector r of the sensor coordinate system relative to the carrier coordinate system.
Optionally, in yet another implementation, the processor is configured to determine the second rotational velocity vector ω_e of the carrier according to the rotation matrix R and the first rotational velocity vector ω_s.
Optionally, in yet another implementation, the processor is configured to determine the instantaneous velocity component caused by the rotation of the carrier according to the second rotational velocity vector ω_e and the translation vector r; determine the normalized instantaneous velocity vector of the sensor relative to the carrier coordinate system according to the rotation matrix R and the first translational velocity vector t̂′; and determine the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector.
Optionally, in yet another implementation, the processor is configured to determine the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector, and determine the second translational velocity vector t_e of the carrier according to that magnitude, the normalized instantaneous velocity vector, and the instantaneous velocity component.
Optionally, in yet another implementation, the second translational velocity vector of the carrier is t_e = s·t̂ − t_R.
In yet another implementation, the sensor is a visual sensor, and the processor is further configured to obtain a flow vector of a stationary obstacle, the flow vector including a motion vector of the stationary obstacle on the image plane of the visual sensor, and determine a depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s.
It should be noted that, for the implementation and beneficial effects of each operation, reference may also be made to the corresponding description of the method embodiment shown in FIG. 3.
An embodiment of the present application further provides a chip system, the chip system including at least one processor and a communication interface; optionally, the chip system further includes a memory, and the at least one processor is configured to invoke a computer program stored in at least one memory, so that the method flow shown in FIG. 3 is implemented.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when run, implements the method flow shown in FIG. 3.
An embodiment of the present application further provides a computer program product which, when run, implements the method flow shown in FIG. 3.
An embodiment of the present application further provides a terminal including at least one processor and a communication interface, and optionally a memory; the at least one processor is configured to invoke a computer program stored in at least one memory, so that the terminal implements the method flow shown in FIG. 3. Optionally, the terminal may be a vehicle, ship, satellite, drone, robot, smart home device, or smart manufacturing device.
An embodiment of the present application further provides a sensor, the sensor including at least one processor and a communication interface; optionally, the sensor further includes a memory, and the at least one processor is configured to invoke a computer program stored in at least one memory, so that the apparatus in which the sensor is located implements the method flow shown in FIG. 3.
A person of ordinary skill in the art can understand that all or part of the flow of the methods in the above embodiments may be completed by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The aforementioned storage medium includes various media capable of storing computer program code, such as ROM, random access memory (RAM), magnetic disks, or optical discs.
Claims (18)
- The method according to claim 1, wherein the conversion relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
- The method according to claim 4, wherein determining the second translational velocity vector t_e of the carrier according to the instantaneous velocity component and the normalized instantaneous velocity vector includes: determining the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector; and determining the second translational velocity vector t_e of the carrier according to the magnitude of the translational velocity vector of the sensor coordinate system, the normalized instantaneous velocity vector, and the instantaneous velocity component.
- The method according to any one of claims 1-7, wherein the sensor is a visual sensor, and the method further includes: obtaining a flow vector of a stationary obstacle, the flow vector including a motion vector of the stationary obstacle on the image plane of the visual sensor; and determining a depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s.
- The apparatus according to claim 9, wherein the conversion relationship includes: a rotation matrix R and a translation vector r of the sensor coordinate system relative to the carrier coordinate system.
- The apparatus according to claim 10, wherein the first determining unit is configured to determine the second rotational velocity vector ω_e of the carrier according to the rotation matrix R and the first rotational velocity vector ω_s.
- The apparatus according to claim 12, wherein the first determining unit is configured to determine the magnitude of the translational velocity vector of the sensor coordinate system according to the instantaneous velocity component and the normalized instantaneous velocity vector, and determine the second translational velocity vector t_e of the carrier according to that magnitude, the normalized instantaneous velocity vector, and the instantaneous velocity component.
- The apparatus according to any one of claims 9-15, wherein the sensor is a visual sensor, and the apparatus further includes: a second obtaining unit, configured to obtain a flow vector of a stationary obstacle, the flow vector including a motion vector of the stationary obstacle on the image plane of the visual sensor; and a third determining unit, configured to determine a depth z of the stationary obstacle according to the flow vector of the stationary obstacle, the third translational velocity vector t_s, and the first rotational velocity vector ω_s.
- A chip system, wherein the chip system includes at least one processor and a communication interface, and the at least one processor is configured to invoke a computer program stored in at least one memory, so that an apparatus in which the chip system is located implements the method according to any one of claims 1-8.
- A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when run on one or more processors, performs the method according to any one of claims 1-8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010822885.9 | 2020-08-13 | ||
CN202010822885.9A CN114077719A (zh) | 2020-08-13 | 2020-08-13 | 一种自运动估计方法及相关装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022033139A1 true WO2022033139A1 (zh) | 2022-02-17 |
Family
ID=80246935
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/098209 WO2022033139A1 (zh) | 2020-08-13 | 2021-06-03 | 一种自运动估计方法及相关装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114077719A (zh) |
WO (1) | WO2022033139A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102435188A (zh) * | 2011-09-15 | 2012-05-02 | 南京航空航天大学 | 一种用于室内环境的单目视觉/惯性全自主导航方法 |
CN105606095A (zh) * | 2016-03-17 | 2016-05-25 | 广州展讯信息科技有限公司 | 一种简化惯性导航设备安装要求的方法与装置 |
CN108318027A (zh) * | 2017-01-18 | 2018-07-24 | 腾讯科技(深圳)有限公司 | 载体的姿态数据的确定方法和装置 |
CN108549399A (zh) * | 2018-05-23 | 2018-09-18 | 深圳市道通智能航空技术有限公司 | 飞行器偏航角修正方法、装置及飞行器 |
US20180364048A1 (en) * | 2017-06-20 | 2018-12-20 | Idhl Holdings, Inc. | Methods, architectures, apparatuses, systems directed to device position tracking |
-
2020
- 2020-08-13 CN CN202010822885.9A patent/CN114077719A/zh active Pending
-
2021
- 2021-06-03 WO PCT/CN2021/098209 patent/WO2022033139A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN114077719A (zh) | 2022-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109887057B (zh) | 生成高精度地图的方法和装置 | |
US20210012520A1 (en) | Distance measuring method and device | |
US10884110B2 (en) | Calibration of laser and vision sensors | |
JP7073315B2 (ja) | 乗物、乗物測位システム、及び乗物測位方法 | |
US20210096230A1 (en) | Calibration of laser sensors | |
US10914590B2 (en) | Methods and systems for determining a state of an unmanned aerial vehicle | |
CN111156998B (zh) | 一种基于rgb-d相机与imu信息融合的移动机器人定位方法 | |
US11747144B2 (en) | Real time robust localization via visual inertial odometry | |
JP7259749B2 (ja) | 情報処理装置、および情報処理方法、プログラム、並びに移動体 | |
CN112789655A (zh) | 用于标定惯性测试单元和相机的系统和方法 | |
US20190301871A1 (en) | Direct Sparse Visual-Inertial Odometry Using Dynamic Marginalization | |
TWI827649B (zh) | 用於vslam比例估計的設備、系統和方法 | |
CN109300143B (zh) | 运动向量场的确定方法、装置、设备、存储介质和车辆 | |
JP2019528501A (ja) | マルチカメラシステムにおけるカメラ位置合わせ | |
US20180075614A1 (en) | Method of Depth Estimation Using a Camera and Inertial Sensor | |
CN111829532B (zh) | 一种飞行器重定位系统和重定位方法 | |
CN111380514A (zh) | 机器人位姿估计方法、装置、终端及计算机存储介质 | |
CN113551665B (zh) | 一种用于运动载体的高动态运动状态感知系统及感知方法 | |
Xian et al. | Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach | |
EP3258443B1 (en) | Information processing device and method | |
CN116952229A (zh) | 无人机定位方法、装置、系统和存储介质 | |
CN116721166B (zh) | 双目相机和imu旋转外参在线标定方法、装置及存储介质 | |
WO2022037370A1 (zh) | 一种运动估计方法及装置 | |
WO2022033139A1 (zh) | 一种自运动估计方法及相关装置 | |
WO2020223868A1 (zh) | 地面信息处理方法、装置和无人驾驶车辆 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21855195 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21855195 Country of ref document: EP Kind code of ref document: A1 |