WO2021196983A1 - Method and device for self-motion estimation - Google Patents

Method and device for self-motion estimation

Info

Publication number
WO2021196983A1
Authority
WO
WIPO (PCT)
Prior art keywords
position data
stationary target
velocity vector
estimated value
sensor
Prior art date
Application number
PCT/CN2021/079509
Other languages
English (en)
French (fr)
Inventor
王建国 (WANG Jianguo)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021196983A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/02 Systems for determining distance or velocity not using reflection or reradiation using radio waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12 Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/66 Radar-tracking systems; Analogous systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/66 Tracking systems using electromagnetic waves other than radio waves
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725 Systems involving transmission of highway information, e.g. weather, speed limits where the received information generates an automatic action on the vehicle control

Definitions

  • This application relates to the field of sensors, and in particular to a method and device for self-motion estimation.
  • Advanced driver-assistance systems (ADAS)
  • Autonomous driving (AD)
  • Sensors such as radar (radio detection and ranging), sonar, ultrasonic sensors, and vision sensors (such as cameras) are used to perceive information about the surrounding environment.
  • The surrounding environment information includes moving targets, such as vehicles and pedestrians, and stationary targets, such as obstacles, guardrails, road edges, light poles, surrounding trees, and buildings.
  • targets that move relative to the reference frame and targets that are stationary relative to the reference frame are usually analyzed and processed by different methods.
  • Targets that move relative to the reference frame usually need to be classified, identified, and tracked.
  • Targets that are stationary with respect to the reference frame usually need to be classified and identified to provide additional information for autonomous driving, such as avoiding obstacles and providing a drivable area.
  • The sensor's own movement makes it impossible to distinguish targets moving relative to the reference frame from targets stationary relative to it. Therefore, the self-motion state of the sensor or its platform, especially its velocity, must be estimated to compensate for this effect.
  • self-motion estimation usually uses an inertial measurement unit (IMU) on a mobile device to measure the velocity vector of the sensor.
  • The velocity vector measured by the IMU is usually obtained by integrating the acceleration measured by its accelerometer, so the measurement error accumulates over time; the IMU is also susceptible to electromagnetic interference. How to obtain an accurate self-motion estimate is therefore an urgent problem to be solved.
  • the embodiment of the present application provides a method for self-motion estimation, which is used to accurately determine the translational velocity vector and the rotation velocity vector of the self-motion.
  • the embodiments of the present application also provide corresponding devices.
  • The first aspect of the present application provides a self-motion estimation method, which includes: obtaining a first rotation velocity vector estimate ω1 of a first sensor; obtaining a motion velocity vector estimate v of a second sensor and a data set of targets that are stationary relative to the reference frame, the data set including position data of the stationary targets; and determining, according to v, the position data of the stationary targets, and ω1, a translational velocity vector estimate T and a rotation velocity vector estimate ω of the self-motion of the first device.
  • the first sensor may be a vision sensor, an imaging sensor, or the like.
  • The second sensor may be a millimeter-wave radar, a lidar, or an ultrasonic sensor.
  • the second sensor can acquire at least one velocity component of the stationary target, such as a radial velocity component.
  • the first device may be a carrier or a platform system where the sensor is located, for example: it may be a mobile device platform such as a vehicle-mounted, air-borne, ship/ship-borne, space-borne, automated or intelligent system.
  • the reference system may be a predefined reference object coordinate system, such as a coordinate system such as the earth or a star or a map, or an inertial coordinate system moving at a constant speed relative to the earth; the stationary target may be an object in the surrounding environment.
  • The three components of the first rotation velocity vector estimate ω1 are the yaw rate, the pitch rate, and the roll rate.
  • the motion velocity vector estimated value v may be an estimated value of the instantaneous velocity vector of the second sensor.
  • the data set of a target that is stationary with respect to the reference frame may be a measurement data set obtained from the second sensor or the first sensor, or a measurement data set obtained from other sensors through a communication link (for example, the cloud).
  • the data set may contain one or more stationary targets; the position data of the stationary targets may be rectangular coordinate position data, polar coordinate position data or spherical coordinate position data of the stationary target. It should be pointed out that for a stationary target, its position data can be one position data or multiple position data. Multiple position data for a stationary target can correspond to different parts of the target, and the target is an extended target at this time.
  • the estimated value T of the translational velocity vector of the self-movement of the first device includes the magnitude and direction information of the translational velocity of the self-movement of the first device, and may include the estimated values of the components of the translational velocity vector on the three coordinate axes of the rectangular coordinate system.
  • It can be seen that, using the first rotation velocity vector estimate ω1 of the first sensor, the motion velocity vector estimate v of the second sensor, and the data set of targets stationary relative to the reference frame, the translational velocity vector estimate T and the rotation velocity vector estimate ω of the first device's self-motion can be determined more accurately, thereby improving the estimation accuracy of the self-motion.
  • In a possible implementation, the method further includes obtaining a scaled translational velocity vector estimate T′ of the first sensor. Determining the self-motion translational velocity vector estimate T and rotation velocity vector estimate ω according to v, the position data of the stationary targets, and ω1 then includes: determining T and ω according to v, the position data of the stationary targets, T′, and ω1.
  • The scaled translational velocity vector estimate T′ may be a normalized translational velocity vector estimate, or a translational velocity vector estimate scaled or weighted by a certain scale factor.
  • Determining the self-motion estimates T and ω from v of the second sensor, the position data of the stationary targets, and T′ and ω1 of the first sensor effectively improves the estimation accuracy of the translational and rotational velocity vectors of the first device's self-motion.
  • ⁇ 1 , v, T′ and the position data of the stationary target are data relative to a common coordinate system.
  • the common coordinate system may be the coordinate system of the carrier platform where the first sensor and the second sensor are located.
  • For example, a vehicle-mounted sensor may use the vehicle body coordinate system as the common coordinate system, and a drone-borne sensor may use the aircraft coordinate system. Alternatively, the coordinate system of one of the sensors may serve as the common coordinate system, with the data of the other sensor converted into it by a coordinate transformation; or the common coordinate system may be some other coordinate system, such as the geodetic coordinate system, the coordinate system of the map in use, or a navigation coordinate system such as the north-east-up (NEU) coordinate system. No further restriction is made here.
  • Generally, the data acquired by the first sensor and the second sensor are expressed in their respective coordinate systems, so after acquisition the data must first be converted into the common coordinate system; ω1, v, T′, and the position data of the stationary targets are then the values after conversion. If the first sensor and the second sensor already share the same coordinate system, no conversion is needed. Expressing ω1, v, T′, and the stationary-target position data relative to a common coordinate system ensures the accuracy of the estimates T and ω of the first device's self-motion.
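As an illustrative sketch only (not part of the claimed method), the coordinate conversion described above can be realized with a rotation plus a lever-arm translation; the mounting parameters R_sb and t_sb below are hypothetical values invented for the example.

```python
import numpy as np

# Hypothetical mounting of the second sensor on the carrier platform:
# R_sb rotates sensor-frame vectors into the common (body) frame and
# t_sb is the sensor position in the body frame. Both values are
# assumptions for illustration, not taken from this application.
yaw = np.deg2rad(5.0)                       # small assumed yaw mounting offset
R_sb = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                 [np.sin(yaw),  np.cos(yaw), 0.0],
                 [0.0,          0.0,         1.0]])
t_sb = np.array([3.5, 0.0, 0.5])            # sensor 3.5 m ahead, 0.5 m up

def to_common_frame(r_sensor):
    """Convert a stationary-target position measured in the sensor
    frame into the common coordinate system."""
    return R_sb @ r_sensor + t_sb

r_common = to_common_frame(np.array([10.0, 2.0, 0.0]))
```

Velocity vectors such as v and T′ would be converted with the rotation alone, since free vectors are unaffected by the lever-arm translation.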
  • In a possible implementation, T is a first translational velocity vector estimate T1 of the first device determined according to v, the position data of the stationary targets, and ω1, and ω is ω1. This possible implementation can be understood as: determining the first translational velocity vector estimate T1 of the first device according to v, the position data of the stationary targets, and ω1, and then taking T1 as T and ω1 as ω.
  • In a possible implementation, ω is a second rotation velocity vector estimate ω2 of the first device determined according to the first translational velocity vector estimate T1 of the first device, v, and the position data of the stationary targets, where T1 is determined according to v, the position data of the stationary targets, and ω1; and T is a second translational velocity vector estimate T2 of the first device determined according to v, the position data of the stationary targets, and ω2. This possible implementation can be understood as: determining T1 according to v, the position data of the stationary targets, and ω1; determining the second rotation velocity vector estimate ω2 of the first device according to T1, v, and the position data of the stationary targets; determining the second translational velocity vector estimate T2 of the first device according to v, the position data of the stationary targets, and ω2; and then taking T2 as T and ω2 as ω.
  • In another possible implementation, ω is the second rotation velocity vector estimate ω2 of the first device determined according to ω1, v, and the position data of the stationary targets, and T is the second translational velocity vector estimate T2 of the first device determined according to v, the position data of the stationary targets, and ω2.
  • Although this possible implementation describes obtaining the estimates in two iterations, the application is not limited to two iterations; three or more iterations may also be used. Taking n iterations as an example, in the nth iteration T is Tn and ω is ωn. The estimates T and ω obtained through multiple iterations have higher accuracy.
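The alternating iteration above can be sketched as follows. This sketch assumes one concrete reading of the relationships, namely that each stationary-target measurement satisfies v = T + ω × r, and uses synthetic noiseless data; it is an illustration under those assumptions, not the claimed procedure.

```python
import numpy as np

def skew(r):
    """Cross-product matrix: skew(r) @ w == np.cross(r, w)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

# Synthetic ground truth (assumed, for demonstration only): per-target
# velocity estimates v_i are modeled as v_i = T_true + w_true x r_i.
T_true = np.array([1.0, 0.2, 0.0])
w_true = np.array([0.0, 0.0, 0.1])
rs = [np.array(p, dtype=float) for p in [(10, 5, 0), (20, -3, 1), (15, 8, -1), (5, -6, 2)]]
vs = [T_true + np.cross(w_true, r) for r in rs]

def residual(T, w):
    """Sum of squared model errors over all stationary targets."""
    return sum(np.sum((v - T - np.cross(w, r)) ** 2) for v, r in zip(vs, rs))

w = np.array([0.0, 0.0, 0.08])             # omega_1: rough initial rotation estimate
T = np.mean([v - np.cross(w, r) for v, r in zip(vs, rs)], axis=0)  # T_1 from omega_1
res_init = residual(T, w)
for _ in range(20):                        # further iterations: omega_n, then T_n
    # omega step: v_i - T = w x r_i = -skew(r_i) @ w, solved jointly
    A = np.vstack([-skew(r) for r in rs])
    b = np.concatenate([v - T for v in vs])
    w = np.linalg.lstsq(A, b, rcond=None)[0]
    # T step: with w fixed, the least-squares T is the mean of v_i - w x r_i
    T = np.mean([v - np.cross(w, r) for v, r in zip(vs, rs)], axis=0)
res_final = residual(T, w)
```

Because each step solves an exact least-squares subproblem, the model residual never increases from one iteration to the next, which is why further iterations refine the accuracy of T and ω.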
  • In a possible implementation, the data set includes at least two subsets. ω is the second rotation velocity vector estimate ω2 of the first device determined according to the first translational velocity vector estimate T1 of the first device, v, and the position data of the stationary targets in the first subset, where T1 is determined according to v, the position data of the stationary targets in the second subset, and ω1; and T is the second translational velocity vector estimate T2 of the first device determined according to v, the position data of the stationary targets in the first subset, and ω2.
  • This possible implementation can be understood as: determining the first translational velocity vector estimate T1 according to v, the position data of the stationary targets in the second subset, and ω1; determining the second rotation velocity vector estimate ω2 of the first device according to T1, v, and the position data of the stationary targets in the first subset; determining the second translational velocity vector estimate T2 of the first device according to v, the position data of the stationary targets in the first subset, and ω2; and then taking T2 as T and ω2 as ω.
  • the position data of the stationary objects contained in each of the at least two subsets may not overlap, or may partially overlap, but not completely overlap.
  • Although this possible implementation describes obtaining the translational and rotation velocity vector estimates in two iterations, the application is not limited to two iterations; three or more iterations may also be used. Taking n iterations as an example, for the nth iteration T is Tn and ω is ωn. Using different subsets across multiple iterations yields T and ω with higher accuracy.
  • In a possible implementation, the data set includes at least three subsets. ω is the second rotation velocity vector estimate ω2 of the first device determined according to the first translational velocity vector estimate T1 of the first device, v, and the position data of the stationary targets in the first subset, where T1 is determined according to v, the position data of the stationary targets in the second subset, and ω1; and T is the second translational velocity vector estimate T2 of the first device determined according to v, the position data of the stationary targets in the third subset, and ω2.
  • This possible implementation can be understood as: determining the first translational velocity vector estimate T1 according to v, the position data of the stationary targets in the second subset, and ω1; determining the second rotation velocity vector estimate ω2 of the first device according to T1, v, and the position data of the stationary targets in the first subset; determining the second translational velocity vector estimate T2 of the first device according to v, the position data of the stationary targets in the third subset, and ω2; and then taking T2 as T and ω2 as ω.
  • the position data of the stationary objects contained in each of the at least three subsets may have no intersection, or may partially overlap, but not completely overlap.
  • Although this possible implementation describes obtaining the estimates in two iterations, the application is not limited to two iterations; three or more iterations may also be used, with T being Tn and ω being ωn for the nth iteration. Using different subsets across multiple iterations yields T and ω with higher accuracy.
  • In a possible implementation, T, T′, v, and ω satisfy a relationship of the form T = μT′, where the scale factor μ is obtained from v, ω, and the position data r of the stationary targets in the data set, via the quantity v − ω × r. Here r can be one position datum or two or more position data, and it can be all of the position data in the data set or only part of it.
  • Combining the previous possible implementations, this can be understood as: T1 = μ1T′, with μ1 obtained from v − ω1 × r; T2 = μ2T′, with μ2 obtained from v − ω2 × r; and in general Tn = μnT′. For different values of n, the value of r in the relational expression may be the same or different. It can be seen from this possible implementation that T can be determined quickly through this relational expression.
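As a hedged illustration of the scale relation, assume T = μT′ with μ recovered per target by projecting v − ω × r onto T′ and averaging; the relation in this exact form, and all data values below, are assumptions made for the sketch.

```python
import numpy as np

def scale_factor(vs, w, rs, T_prime):
    """Assumed concrete reading of the relation: for each stationary
    target, v_i - w x r_i = mu * T_prime, so mu is recovered by
    projecting onto T_prime and averaging over targets."""
    mus = [(T_prime @ (v - np.cross(w, r))) / (T_prime @ T_prime)
           for v, r in zip(vs, rs)]
    return float(np.mean(mus))

# Synthetic data (assumed): the true translation is 2.0 * T_prime.
T_prime = np.array([0.6, 0.8, 0.0])        # up-to-scale translation direction
w = np.array([0.0, 0.0, 0.05])
rs = [np.array([12.0, 4.0, 0.0]), np.array([8.0, -5.0, 1.0])]
vs = [2.0 * T_prime + np.cross(w, r) for r in rs]

mu = scale_factor(vs, w, rs, T_prime)
T = mu * T_prime                           # T = mu * T'
```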
  • In a possible implementation, T1, v, and ω2 satisfy a relationship of the form v − T1 = ω2 × r, where r is the position data of the stationary targets in the data set. It can also be described as: ω2 is obtained from v, T1, and the position data r of the stationary targets in the data set according to this relationship. Here r can be one position datum or two or more position data, and it can be all of the position data in the data set or only part of it.
  • The relational expression can also be varied, for example by adding a coefficient in front of one, two, or more of its parameters. More generally, for an integer n greater than 2, ωn is obtained analogously. For different values of n, the value of r in the relational expression may be the same or different. It can be seen from this possible implementation that ω can be determined quickly through this relational expression.
  • In a possible implementation, T1, v, and ω2 satisfy a relationship of the form v − T1 = ω2 × r, where r is the position data of the stationary targets in the first subset. It can also be described as: ω2 is obtained from v, T1, and the position data r of the stationary targets in the first subset according to this relationship. Here r may be one position datum in the first subset, or two or more position data, or all of the position data in the first subset, or only part of it.
  • The relational expression can also be varied, for example by adding a coefficient in front of one, two, or more of its parameters. Combining the previous possible implementations, for different values of n the value of r in the relational expression may be the same or different. It can be seen from this possible implementation that ω can be determined quickly through this relational expression.
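Reading the relation as v − T1 = ω2 × r for each stationary-target position r (an assumption made for this sketch), ω2 follows from a small linear least-squares problem, since ω2 × r = −[r]× ω2 is linear in ω2:

```python
import numpy as np

def skew(r):
    """Cross-product matrix: skew(r) @ w == np.cross(r, w)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def solve_omega(vs, rs, T1):
    """Least-squares w2 from the assumed per-target relation
    v_i - T1 = w2 x r_i, rewritten as -skew(r_i) @ w2 = v_i - T1."""
    A = np.vstack([-skew(r) for r in rs])
    b = np.concatenate([v - T1 for v in vs])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Synthetic noiseless data (assumed): recover a known rotation.
w_true = np.array([0.0, 0.02, 0.1])
T1 = np.array([1.5, 0.0, 0.0])
rs = [np.array([10.0, 2.0, 0.0]), np.array([6.0, -4.0, 1.0]),
      np.array([14.0, 0.0, -2.0])]
vs = [T1 + np.cross(w_true, r) for r in rs]
w2 = solve_omega(vs, rs, T1)
```

A single target position only constrains ω2 up to its component along r (since [r]× has rank 2); two or more non-collinear positions make the solution unique, which is one reason r may comprise several position data.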
  • In a possible implementation, the above step of obtaining the motion velocity vector estimate v of the second sensor includes: determining v according to the azimuth angle, the pitch angle, and the radial velocity v′ of the stationary target relative to the second sensor; or determining v according to the three-dimensional position vector of the stationary target relative to the second sensor and the radial velocity v′.
  • This possible implementation can be expressed as: determining the motion velocity vector estimate v of the second sensor according to the direction cosine vector and the radial velocity v′, where the direction cosine vector is determined from the azimuth angle and the pitch angle, or from the three-dimensional position vector of the stationary target relative to the second sensor.
  • This possible implementation can also be expressed as: acquiring the azimuth angle and the pitch angle of the stationary target relative to the second sensor and the radial velocity v′ of the second sensor relative to the stationary target; determining the direction cosine vector of the stationary target relative to the second sensor from the azimuth and pitch angles; and determining the motion velocity vector estimate v of the second sensor from the direction cosine vector and v′.
  • According to the azimuth angle and the pitch angle of the stationary target relative to the second sensor and the radial velocity v′, the three-dimensional motion velocity vector estimate v of the second sensor can be determined.
  • This possible implementation can also be expressed as: obtaining the three axial distances x, y, z from the second sensor to the stationary target in a Cartesian coordinate system with the second sensor as the origin, and the radial velocity v′ of the second sensor relative to the stationary target; determining the direction cosine vector of the stationary target relative to the second sensor from x, y, and z; and determining the motion velocity vector estimate v of the second sensor from the direction cosine vector and v′.
  • In a possible implementation, the method further includes: determining that a target is a stationary target according to the motion velocity vector estimate of the second sensor, the direction cosine vector of the target relative to the second sensor, the radial velocity v′ of the target relative to the second sensor, and a velocity threshold VThresh. After a target is determined to be stationary, its position data can be obtained and added to the data set.
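The stationary/moving decision can be sketched as a threshold test: for a truly stationary target, the measured radial velocity should match the radial component induced purely by the sensor's own motion. The sign convention (approaching targets have negative radial velocity) and the threshold value below are assumptions for illustration.

```python
import numpy as np

V_THRESH = 0.5   # m/s; assumed threshold value for illustration

def is_stationary(v_radial, lam, v_sensor, v_thresh=V_THRESH):
    """Classify a target as stationary if its measured radial velocity
    is explained by the sensor's own motion alone. Sign convention
    (assumed): the self-motion-induced radial velocity is -lam @ v_sensor."""
    predicted = -lam @ v_sensor
    return bool(abs(v_radial - predicted) < v_thresh)

v_sensor = np.array([10.0, 0.0, 0.0])     # sensor moving forward at 10 m/s
lam = np.array([1.0, 0.0, 0.0])           # target straight ahead
stationary = is_stationary(-10.0, lam, v_sensor)  # closing at exactly 10 m/s
moving = is_stationary(-15.0, lam, v_sensor)      # 5 m/s of its own motion
```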
  • A second aspect of the present application provides a method for obtaining a velocity vector estimate, the method including: obtaining the azimuth angle and the pitch angle of a stationary target relative to a second sensor and the radial velocity v′ of the second sensor relative to the stationary target; and determining the motion velocity vector estimate v of the second sensor according to the azimuth angle, the pitch angle, and the radial velocity v′.
  • According to the azimuth angle and the pitch angle of the stationary target relative to the second sensor and the radial velocity v′, the three-dimensional motion velocity vector estimate v of the second sensor can be determined.
  • In a possible implementation, the above step includes: determining the direction cosine vector of the stationary target relative to the second sensor from the azimuth and pitch angles, and determining the motion velocity vector estimate v of the second sensor from the direction cosine vector and the radial velocity v′.
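A minimal sketch of this step, assuming the common convention Λ = (cos φ cos θ, cos φ sin θ, sin φ)ᵀ for azimuth θ and pitch φ, and the stationary-target measurement model Λᵀv = −v′ (both the convention and the sign are assumptions): three or more well-spread targets then determine the three-dimensional v by least squares.

```python
import numpy as np

def direction_cosine(theta, phi):
    """Direction cosine vector for azimuth theta and pitch phi
    (assumed convention: x forward, y left, z up)."""
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])

def estimate_velocity(thetas, phis, v_radials):
    """Least-squares sensor velocity v from the assumed stationary-target
    model Lam_i^T v = -v'_i, stacked over all targets."""
    A = np.vstack([direction_cosine(t, p) for t, p in zip(thetas, phis)])
    b = -np.asarray(v_radials)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Synthetic noiseless check (assumed data): recover a known velocity.
v_true = np.array([8.0, 1.0, -0.5])
thetas = np.deg2rad([0.0, 30.0, -25.0, 60.0])
phis = np.deg2rad([0.0, 5.0, -10.0, 2.0])
v_rads = [-direction_cosine(t, p) @ v_true for t, p in zip(thetas, phis)]
v_est = estimate_velocity(thetas, phis, v_rads)
```

Each target contributes one scalar equation, so the directions must not all lie in a plane through the origin if the full three-dimensional v is to be recoverable.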
  • a third aspect of the present application provides a method for obtaining an estimated value of a velocity vector.
  • The method includes: obtaining a three-dimensional position vector, a radial distance s, and a radial velocity v′ of a stationary target relative to a second sensor, and determining the motion velocity vector estimate v of the second sensor according to the three-dimensional position vector, the radial distance s, and the radial velocity v′.
  • According to these quantities, the three-dimensional motion velocity vector estimate v of the second sensor can be determined.
  • In a possible implementation, the above step includes: determining the direction cosine vector of the stationary target relative to the second sensor according to the three-dimensional position vector and the radial distance s; and determining the motion velocity vector estimate v of the second sensor according to the direction cosine vector and the radial velocity v′.
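For the third aspect, the direction cosine vector follows directly from the three-dimensional position vector and the radial distance s (a minimal sketch with an invented example position):

```python
import numpy as np

def direction_cosine_from_position(p):
    """Direction cosine vector Lam = p / s, where s = ||p|| is the
    radial distance from the second sensor to the stationary target."""
    s = np.linalg.norm(p)
    return p / s

Lam = direction_cosine_from_position(np.array([3.0, 4.0, 0.0]))
```

The resulting unit vector can then be combined with the radial velocity v′ exactly as in the second aspect.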
  • A fourth aspect of the present application provides a self-motion estimation apparatus, which is used to execute the method in the foregoing first aspect or any possible implementation of the first aspect. Specifically, the apparatus includes a module or unit for executing the method in the foregoing first aspect or any possible implementation of the first aspect.
  • A fifth aspect of the present application provides an apparatus for obtaining a velocity vector estimate, which is used to execute the method in the second aspect or any possible implementation of the second aspect, or the method in the third aspect or any possible implementation of the third aspect. Specifically, the apparatus includes a module or unit for executing the corresponding method.
  • A sixth aspect of the present application provides a self-motion estimation device, including: at least one processor, at least one memory, and computer-executable instructions stored in the memory and run on the processor. When the computer-executable instructions are executed by the processor, the processor executes the method in the foregoing first aspect or any possible implementation of the first aspect.
  • A seventh aspect of the present application provides a device for obtaining a velocity vector estimate, including: at least one processor, at least one memory, and computer-executable instructions stored in the memory and run on the processor. When the computer-executable instructions are executed by the processor, the processor executes the method in the foregoing second aspect or any possible implementation of the second aspect, or the foregoing third aspect or any possible implementation of the third aspect.
  • An eighth aspect of the present application provides a sensor system, which includes a first sensor, a second sensor, and a device for performing self-motion estimation of the foregoing first aspect or any one of the possible implementation manners of the first aspect.
  • A ninth aspect of the present application provides a sensor system, which includes a second sensor and a device for obtaining a velocity vector estimate that executes the method in the aforementioned second aspect or any possible implementation of the second aspect, or the third aspect or any possible implementation of the third aspect.
  • A tenth aspect of the present application provides a carrier for carrying the sensor system of the eighth aspect. The carrier includes a first sensor, a second sensor, and a device for performing the self-motion estimation of the first aspect or any possible implementation of the first aspect.
  • the carrier may be the first device of the above-mentioned first aspect, for example: a car, a motorcycle, a bicycle, a drone, a helicopter, a jet plane, a ship, a motorboat, a satellite, a robot, and so on.
  • An eleventh aspect of the present application provides a carrier for carrying the sensor system of the ninth aspect. The carrier includes a second sensor and a device for obtaining a velocity vector estimate that implements the method in the foregoing second aspect or any possible implementation of the second aspect, or the third aspect or any possible implementation of the third aspect.
  • A twelfth aspect of the present application provides a computer-readable storage medium storing one or more computer-executable instructions. When the computer-executable instructions are executed by at least one processor, the at least one processor executes the method in the above-mentioned first aspect or any possible implementation of the first aspect.
  • a thirteenth aspect of the present application provides a computer-readable storage medium storing one or more computer-executable instructions.
• when the computer-executable instructions are executed by at least one processor, the at least one processor executes the method in the foregoing second aspect or any possible implementation of the second aspect, or the method in the third aspect or any possible implementation of the third aspect.
  • the fourteenth aspect of the present application provides a computer program product storing one or more computer-executable instructions.
• when the computer-executable instructions are executed by at least one processor, the at least one processor executes the method in the foregoing first aspect or any possible implementation of the first aspect.
  • a fifteenth aspect of the present application provides a computer program product storing one or more computer-executable instructions.
• when the computer-executable instructions are executed by at least one processor, the at least one processor executes the method in the foregoing second aspect or any possible implementation of the second aspect, or the method in the third aspect or any possible implementation of the third aspect.
  • the self-motion estimation device described in the fourth aspect and the sixth aspect may also be a chip, or other combination devices, components, etc. having the function of the self-motion estimation device.
  • the self-motion estimation device may include a communication interface, such as an input/output (input/output, I/O) interface, and the processing unit may be a processor, such as a central processing unit (CPU).
  • the apparatus for obtaining the velocity vector estimation value described in the fifth aspect and the seventh aspect may also be a chip, or other combination devices, components, etc. having the function of the foregoing apparatus for obtaining the velocity vector estimation value.
• For the technical effects brought by the fourth, sixth, eighth, tenth, twelfth, and fourteenth aspects or any one of their possible implementations, refer to the technical effects brought by the first aspect or the different possible implementations of the first aspect, which are not repeated here.
• For the technical effects brought by the fifth, seventh, ninth, eleventh, thirteenth, and fifteenth aspects or any one of their possible implementations, refer to the technical effects brought by the second aspect or the third aspect or their different possible implementations, which are not repeated here.
• With the solution provided by the embodiments of the present application, the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device can be obtained from the first rotational velocity vector estimated value ω1 of the first sensor, the motion velocity vector estimated value v of the second sensor, and the data set of targets stationary relative to the reference frame, so that the estimated values of the translational velocity vector and the rotational velocity vector of the self-motion of the first device can be determined more accurately, thereby improving the estimation accuracy of the self-motion.
  • FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an embodiment of a method for self-motion estimation provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of an example of an application scenario provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another embodiment of a method for self-motion estimation provided by an embodiment of the present application.
• FIG. 5 is a schematic diagram of an example scenario provided by an embodiment of the present application.
• FIG. 6 is a schematic diagram of an example scenario provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an embodiment of a self-motion estimation apparatus provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another embodiment of a self-motion estimation apparatus provided by an embodiment of the present application.
• FIG. 9 is a schematic diagram of an embodiment of a sensor system provided by an embodiment of the present application.
  • the embodiment of the present application provides a method for ego-motion estimation, which is used to accurately determine the translational velocity vector and the rotation velocity vector of the self-motion.
  • the embodiments of the present application also provide corresponding devices. Detailed descriptions are given below.
• the self-motion estimation method provided by the embodiments of the present application can be applied to a sensor system, a fusion sensing system, or a planning/control system that integrates the aforementioned systems, for example in the field of automatic driving or intelligent driving, and especially in relation to advanced driver assistance systems (ADAS).
  • the execution subject of the method may be software or hardware (such as a device connected or integrated with the corresponding sensor through a wireless or wired connection), a fusion sensing system, various first devices, and the like.
  • the self-movement can be the movement of the sensor or the carrier or platform system where the sensor is located. The following different execution steps can be implemented in a centralized or distributed manner.
  • Fig. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • the system architecture includes a sensor platform.
  • the sensor platform is equipped with a first sensor and a second sensor.
  • the system architecture also includes a device for self-motion estimation.
  • the self-motion estimation device can be deployed in a sensor platform, that is, the self-motion estimation device can be integrated with the sensor platform.
  • the device for self-motion estimation may be deployed outside the sensor platform, and the device for self-motion estimation and the sensor platform communicate through a wireless network.
  • Figure 1 takes the self-motion estimation device deployed in a sensor platform as an example.
  • the carrier or platform of the sensor may be a movable device.
• the sensor platform can be a vehicle-mounted platform, such as a car, a motorcycle, or a bicycle.
• the sensor platform may be a shipborne platform, such as a ship, a motorboat, and the like.
  • the sensor platform can be an airborne platform, such as a drone, a helicopter, or a jet plane, a balloon, etc.
• the sensor platform may be a spaceborne platform, such as a satellite; the sensor platform may also be an automated or intelligent system such as a robotic system.
• the sensor, or the carrier or platform on which the sensor is located, moves relative to the reference frame, and there are stationary targets in the surrounding environment of the sensor or of the carrier or platform on which it is located.
  • the reference frame may be a geodetic coordinate system, or an inertial coordinate system that moves at a constant speed relative to the ground.
  • the stationary target may be objects in the surrounding environment, such as guardrails, road edges, buildings, light poles, and so on.
  • the stationary target can also be a surface buoy, a lighthouse, a shore or an island building, etc.
  • the stationary target may be a reference object such as a spacecraft that is stationary or moving at a constant speed relative to a star or satellite.
  • the stationary targets existing around an intelligent body system such as a robot system can be factories, buildings, trees, ores in the environment, and so on.
• the first sensor may be a vision sensor, such as a camera or video camera; the first sensor may also be an imaging sensor, such as an infrared imaging sensor or a synthetic aperture radar.
• the second sensor may be a millimeter-wave radar, a lidar (light detection and ranging), or an ultrasonic radar such as sonar.
  • the second sensor can acquire at least one velocity component of the target.
  • millimeter wave radar or lidar or sonar using frequency modulated continuous wave (FMCW) signals can obtain the radial velocity of the target relative to the sensor.
  • the above-mentioned sensors can measure surrounding targets (such as stationary targets relative to the reference system or moving targets, obstacles, buildings, etc.) relative to the reference system to obtain measurement data of the surrounding targets.
  • the measurement data may include the distance of the target relative to the sensor, the azimuth angle and/or the pitch angle, and the radial velocity.
  • the physical composition of the sensor here can be one or more physical sensors.
• each of the one or more physical sensors may measure the azimuth angle, the pitch angle, and the radial velocity separately, or the azimuth angle, the pitch angle, and the radial velocity may be derived from the measurement data of the one or more physical sensors, which is not limited here.
  • Self-motion can usually be decomposed into translation and rotation.
  • Self-motion estimation is to determine the estimated values of the translational velocity vector and rotation velocity vector of self-motion.
  • the translational velocity vector can be represented by each component on the coordinate axis of a 2-dimensional or 3-dimensional rectangular coordinate system.
  • the rotation speed vector may be represented by various components of the rotation angular velocity, and may include one or more of the yaw rate (yaw rate), the pitch rate (pitch rate), and the roll rate (roll rate).
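As an illustration of this decomposition, the standard rigid-body relation gives the velocity contributed at a point r by a translation T and a rotation vector ω (whose components are the roll, pitch, and yaw rates) as T + ω × r. The sketch below assumes this textbook relation for illustration only; it is not the patent's specific relational expression, and the function name is hypothetical.

```python
import numpy as np

def point_velocity(T, omega, r):
    """Velocity of a point at position r on a rigid body whose self-motion is
    decomposed into a translational velocity vector T and a rotational
    velocity vector omega = [roll rate, pitch rate, yaw rate].
    (A standard rigid-body sketch, not the patent's exact relation.)"""
    T, omega, r = (np.asarray(a, float) for a in (T, omega, r))
    return T + np.cross(omega, r)

# a pure yaw rate of 0.5 rad/s adds a lateral component at r = [2, 0, 0]
v = point_velocity([1.0, 0.0, 0.0], [0.0, 0.0, 0.5], [2.0, 0.0, 0.0])
# v -> [1.0, 1.0, 0.0]
```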
  • FIG. 2 is a schematic flowchart of a self-motion estimation method provided by an embodiment of the present application.
  • the self-motion estimation method provided by the embodiment of the present application may include:
• determining the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device according to v, the position data of the stationary target, and the first rotational velocity vector estimated value ω1.
• the first sensor may be a vision sensor such as a camera or video camera, or an imaging sensor such as an infrared sensor or a synthetic aperture radar.
• the first sensor can acquire image or video information of surrounding targets and the environment; using the image or video information, the estimated value of the rotational angular velocity and/or the scaled estimated value of the translational velocity vector can be obtained, for example based on the optical flow method or by combining the mathematical model of the camera with multi-view geometric methods, which is not repeated here.
  • the second sensor may be millimeter wave radar, laser radar, ultrasonic radar, etc., and the second sensor may acquire the target position and at least one velocity component such as radial velocity measurement data.
  • the estimated value v of the motion velocity vector of the second sensor and the data set of the stationary target relative to the geodetic reference system can be obtained.
  • the estimated value v of the motion velocity vector of the second sensor can be obtained based on a random sample consensus (RANSAC) algorithm and the data of the target stationary relative to the reference frame can be determined.
  • the position data and radial velocity measurement data of the object obtained by the second sensor can be used to determine the measurement data set from the stationary target, and the data from the stationary target can be used to determine the The estimated value of the instantaneous velocity vector, that is, the estimated value of the motion velocity vector v of the second sensor, the implementation steps of this method are described in detail later.
  • the data set of a target that is stationary with respect to the reference frame may be a measurement data set obtained from the second sensor or the first sensor, or a measurement data set obtained from other sensors through a communication link (for example, the cloud).
  • the data set may contain one or more stationary targets; the position data of the stationary targets may be position data in rectangular coordinates, polar coordinates, or spherical coordinates.
• the measurement data set may include position data from point targets and may also include position data from extended targets: one point target yields one item of position data, whereas one extended target can yield multiple items of position data, each of which may correspond to a different part of the extended target; this application is not limited here.
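As a sketch of how the RANSAC step mentioned above could separate stationary targets from moving ones while estimating v, consider a 2-D radar whose detections of stationary targets satisfy v_r = -[cos(az), sin(az)] · v. This model, the function name, and all parameter values are illustrative assumptions, not the patent's prescribed procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_sensor_velocity(azimuths, radial_speeds, n_iters=200, tol=0.2):
    """Estimate the 2-D sensor velocity v from radial-speed measurements.

    Assumed model: a target stationary in the reference frame yields
    v_r = -[cos(az), sin(az)] . v; moving targets violate this model
    and are rejected as outliers.
    """
    H = np.column_stack([np.cos(azimuths), np.sin(azimuths)])  # line-of-sight unit vectors
    y = -np.asarray(radial_speeds, float)
    best_v, best_inliers = None, np.zeros(len(y), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(y), size=2, replace=False)
        try:
            v = np.linalg.solve(H[idx], y[idx])        # minimal 2-point fit
        except np.linalg.LinAlgError:
            continue                                   # degenerate sample
        inliers = np.abs(H @ v - y) < tol
        if inliers.sum() > best_inliers.sum():
            best_v, best_inliers = v, inliers
    # least-squares refit on all inliers (the stationary-target set)
    v_hat, *_ = np.linalg.lstsq(H[best_inliers], y[best_inliers], rcond=None)
    return v_hat, best_inliers
```

The inlier set returned here corresponds to the data set of targets stationary relative to the reference frame, and v_hat to the estimated value v of the motion velocity vector of the second sensor.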
• In the embodiment of the present application, the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device can be obtained from the first rotational velocity vector estimated value ω1 of the first sensor, the motion velocity vector estimated value v of the second sensor, and the data set of targets stationary relative to the reference frame; the estimated values of the translational velocity vector and the rotational velocity vector of the self-motion of the first device can thus be determined more accurately, thereby improving the estimation accuracy of the self-motion.
  • the foregoing method may further include: obtaining an estimated value T′ of the translational velocity vector scale expansion and contraction of the first sensor.
• step 103 includes: determining the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device according to v, the position data of the stationary target, T′, and ω1.
  • the scaled estimated value of the translational velocity vector may be an estimated value of a normalized translational velocity vector or an estimated value of the translational velocity vector scaled or weighted according to a certain scale factor.
• For example, if the components of the translational velocity vector are T_x, T_y, and T_z, the estimated values of the normalized translational velocity vector can be αT_x, αT_y, and αT_z, where the normalization coefficient or weight α is a positive number less than or equal to 1 and can satisfy, for example, α = 1/√(T_x² + T_y² + T_z²), or α can be another positive number less than 1.
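A minimal sketch of the unit-norm choice of α described above (assuming ‖T‖ ≥ 1 so that α ≤ 1; the helper name is illustrative, not from the patent):

```python
import numpy as np

def normalize_translation(T):
    """Scale the translational velocity vector by alpha = 1/||T||,
    giving a unit-norm (scale-free) direction estimate."""
    T = np.asarray(T, float)
    alpha = 1.0 / np.linalg.norm(T)  # normalization coefficient (weight)
    return alpha * T                 # components alpha*Tx, alpha*Ty, alpha*Tz

unit_T = normalize_translation([3.0, 4.0, 0.0])
# unit_T -> [0.6, 0.8, 0.0]
```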
• the various formulas used to express relationships in this application cover possible variations of those formulas and are not limited to the formulas themselves.
• The common coordinate system can be the coordinate system of the carrier platform where the first sensor and the second sensor are located. Taking vehicle-mounted sensors as an example, the common coordinate system can be the vehicle body coordinate system; taking drone-borne sensors as an example, it can be the drone coordinate system. Alternatively, the common coordinate system can be the coordinate system of one of the sensors, or another coordinate system such as the geodetic coordinate system, the map coordinate system in use, or a navigation coordinate system such as the north-east-up (NEU) coordinate system, which is not specifically limited here.
• If the first sensor and the second sensor each have their own coordinate system, and the translational velocity vector and/or rotational velocity vector of the first sensor, the velocity vector of the second sensor, or the measurement data of a stationary target are not defined relative to the common coordinate system, the data defined relative to the common coordinate system can be obtained through coordinate transformation.
  • the embodiment of the present application does not limit the conversion process of vectors between various coordinate systems.
  • the assisted driving or autonomous driving scene includes the own vehicle 201, and its surrounding moving and stationary objects.
• the moving targets include the target vehicle 202, and the stationary targets include street lights 203, trees 204, and buildings 205; stationary targets can also include stationary obstacles such as stopped vehicles, and road boundaries such as guardrails.
  • the vehicle 201 is equipped with a first sensor 2011, a second sensor 2012 and a device 2013 for self-motion estimation.
  • the first sensor 2011, the second sensor 2012, and the self-motion estimation device 2013 are connected or integrated in a wireless connection manner or a wired connection manner.
  • the first sensor 2011 and the second sensor 2012 may be a camera and a millimeter wave radar or a lidar installed at the front end of the vehicle, respectively, and the first sensor 2011 and the second sensor 2012 may also be installed at the side end or the rear end of the vehicle.
  • the installation method can be centralized installation or distributed installation, which is not limited here.
• the first sensor 2011 can obtain image or video information of the targets and environment around the vehicle 201, and use the image or video information to obtain the estimated value of the rotational velocity (also called the rotational angular velocity) and/or the scaled estimated value of the translational velocity vector.
• the second sensor 2012 acquires position data and motion speed information of targets in the surrounding environment by transmitting and receiving millimeter-wave or laser signals and performing signal processing; using these measurement data, the estimated value v of the motion velocity vector of the second sensor and the data set of targets stationary relative to the geodetic reference frame can be obtained.
• the self-motion estimation device 2013 may obtain the position data and radial velocity measurement data of objects from the second sensor 2012, and determine whether the data in the measurement data set come from stationary targets, for example the street lights 203, trees 204, and buildings 205, or from moving targets, for example the target vehicle 202.
  • the data from the stationary target can be used to determine the estimated value of the instantaneous velocity vector of the second sensor, that is, the estimated value of the motion velocity vector v of the second sensor. The implementation steps of this method are described in detail later.
• The self-motion estimation device 2013 may determine the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device according to v, the position data of the stationary targets, and ω1.
  • the embodiments of the present application can be applied to unmanned aerial vehicles, robots, spaceborne or shipborne systems, which are not listed here.
• Optionally, T is the first translational velocity vector estimated value T1 of the first device determined according to v, the position data of the stationary target, and ω1, and ω is ω1. That is, step 103 may include: determining the first translational velocity vector estimated value T1 of the first device according to v, the position data of the stationary target, and ω1; taking T1 as T and ω1 as ω.
• The step of determining the first translational velocity vector estimated value T1 of the first device according to v, the position data of the stationary target, and ω1 may include: determining T1 based on the corresponding relationship, for example according to a minimum mean-squared error (MMSE) criterion; the position error covariance involved can be obtained from the measurement accuracy of the sensor using existing techniques, which is not further described here.
• Optionally, ω is the second rotational velocity vector estimated value ω2 of the first device determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary target, where T1 is determined according to v, the position data of the stationary target, and ω1; T is the second translational velocity vector estimated value T2 of the first device determined according to v, the position data of the stationary target, and ω2. This process can also be understood as: determine T1 according to v, the position data of the stationary target, and ω1; determine the second rotational velocity vector estimated value ω2 of the first device according to T1, v, and the position data of the stationary target; determine the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary target, and ω2; take T2 as T and ω2 as ω.
• Determining ω2 through the above relationship may include determining it through the position data of one stationary target or through the position data of multiple stationary targets, which are described separately below.
• For example, the second rotational velocity vector estimated value ω2 may be determined from T1, v, and the position data of N stationary targets in the data set (N ≥ 1, and N is an integer), where ω2 satisfies the corresponding relation in which r_{i_k} is the position vector of the i_k-th stationary target and r_{i_k}^T represents the transposition of the position vector of the i_k-th stationary target.
• n is used to represent the number of iterations; in this example, n ≥ 2, and n is an integer.
• Optionally, ω is the n-th rotational velocity vector estimated value ωn, which is obtained through the k-th rotational velocity vector estimated value ωk of the first device determined from the (k-1)-th translational velocity vector estimated value T_{k-1} of the first device, v, and the position data of the stationary target, where T_{k-1} is determined according to v, the position data of the stationary target, and ω_{k-1}, with 2 ≤ k ≤ n; T is the n-th translational velocity vector estimated value Tn of the first device, which is obtained through the k-th translational velocity vector estimated value Tk determined from v, the position data of the stationary target, and ωk.
• This process can also be understood as: determine T_{k-1} according to v, the position data of the stationary target, and ω_{k-1}, where 2 ≤ k ≤ n; determine the k-th rotational velocity vector estimated value ωk of the first device according to T_{k-1}, v, and the position data of the stationary target; determine the k-th translational velocity vector estimated value Tk of the first device according to v, the position data of the stationary target, and ωk; take Tn as T and ωn as ω.
  • T n can be finally obtained by using the measurement data of part or all of the stationary targets in the data set.
• The embodiment of this application uses the position data of the stationary target to obtain ωk from ω_{k-1}, so that the above relationship and the data of the stationary target are fully utilized, and the estimation accuracy is increased on the basis of ω_{k-1}, finally yielding a more accurate ωn; a more accurate Tn is in turn estimated through the above relationship on the basis of the more accurate ωn, so the multiple iterations greatly improve the estimation accuracy of the final T and ω.
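The alternating refinement described above (a new Tk from the previous ω, then a new ωk from that Tk) can be sketched in 2-D. The measurement model used here, in which a target stationary in the reference frame appears to move with velocity u_i = -(T + ω × r_i) in the platform frame, and the function name are illustrative assumptions, not the patent's exact relational expressions.

```python
import numpy as np

def refine_ego_motion(r, u, omega0, n_iter=20):
    """Alternately refine the translational velocity T and yaw rate omega (2-D).

    Assumed model: a stationary target at position r_i appears to move with
    u_i = -(T + omega x r_i), where omega x r_i = omega * [-r_iy, r_ix].
    """
    r = np.asarray(r, float)
    u = np.asarray(u, float)
    p = np.column_stack([-r[:, 1], r[:, 0]])  # (omega x r_i) / omega
    omega = omega0                            # initial rotational estimate
    for _ in range(n_iter):
        T = np.mean(-u - omega * p, axis=0)           # T_k from previous omega
        omega = -np.sum(p * (u + T)) / np.sum(p * p)  # omega_k from T_k (least squares)
    return T, omega
```

With exact data the alternation contracts the error in ω by a fixed factor each pass, so a handful of iterations suffices; this mirrors the accuracy gain from iterating that is described above.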
• Optionally, ω is the second rotational velocity vector estimated value ω2 of the first device determined according to ω1, v, and the position data of the stationary target; T is the second translational velocity vector estimated value T2 of the first device determined according to v, the position data of the stationary target, and ω2. This process can also be understood as: determine the second rotational velocity vector estimated value ω2 of the first device according to ω1, v, and the position data of the stationary target; determine the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary target, and ω2; then take T2 as T and ω2 as ω.
• Determining the second rotational velocity vector estimated value ω2 of the first device according to ω1, v, and the position data of the stationary target can be based on the corresponding relational expression. The step of determining the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary target, and ω2 is the same as the above-described determination of T1 according to v, the position data of the stationary target, and ω1, and is not repeated here.
• Optionally, T and ω can also be obtained through n iterations.
• n is used to represent the number of iterations; in this example, n ≥ 2, and n is an integer.
• Optionally, ω is the n-th rotational velocity vector estimated value ωn, which is obtained through the k-th rotational velocity vector estimated value ωk of the first device determined from the (k-1)-th rotational velocity vector estimated value ω_{k-1} of the first device, v, and the position data of the stationary target, where ω_{k-1} is determined according to v, the position data of the stationary target, and ω_{k-2}, with 2 ≤ k ≤ n; T is the n-th translational velocity vector estimated value Tn, which is obtained through the k-th translational velocity vector estimated value Tk of the first device determined from v, the position data of the stationary target, and ωk.
• This process can also be understood as: determine the k-th rotational velocity vector estimated value ωk of the first device according to ω_{k-1}, v, and the position data of the stationary target, where 2 ≤ k ≤ n; determine the k-th translational velocity vector estimated value Tk of the first device according to v, the position data of the stationary target, and ωk, where 2 ≤ k ≤ n; take Tn as T and ωn as ω.
• The embodiment of this application uses the position data of the stationary target to obtain ωk from ω_{k-1}, so that the above relationship and the data of the stationary target are fully utilized, and the estimation accuracy is increased on the basis of ω_{k-1}, finally yielding a more accurate ωn; a more accurate Tn is in turn estimated through the above relationship on the basis of the more accurate ωn, so the multiple iterations greatly improve the estimation accuracy of the final T and ω.
• Optionally, the position data of the stationary targets used in the iteration process is the complete set or a subset of the data set. For example, the complete set of the data set can be divided into at least two subsets, and the k-th iteration and the l-th iteration (k ≠ l) may use different subsets of the data set for calculation, so as to finally obtain Tn and ωn.
  • the position data of the stationary objects contained in each of the at least two subsets may not overlap, or may partially overlap, but not completely overlap.
  • the data set includes the first subset and the second subset.
• the position data of the stationary targets in the second subset can be used to determine T1, and the position data of the stationary targets in the first subset can be used to determine ω2 and T2.
• Optionally, ω is the second rotational velocity vector estimated value ω2 of the first device determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary targets in the first subset, where T1 is determined according to v, the position data of the stationary targets in the second subset, and ω1; T is the second translational velocity vector estimated value T2 of the first device determined according to v, the position data of the stationary targets in the first subset, and ω2. This process can also be understood as: determine the first translational velocity vector estimated value T1 according to v, the position data of the stationary targets in the second subset, and ω1; determine the second rotational velocity vector estimated value ω2 of the first device according to T1, v, and the position data of the stationary targets in the first subset; determine the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary targets in the first subset, and ω2; then take T2 as T and ω2 as ω.
• The process of determining ω2 is the same as the aforementioned process, and ω2 can satisfy the same relationship, except that the r_j used therein is the position data of the stationary targets in the first subset.
• In this example, the position data of stationary targets in the same subset are used when determining T2 and ω2.
• Optionally, T and ω can also be obtained through n iterations.
• n is used to represent the number of iterations; in this example, n ≥ 2, and n is an integer.
• Optionally, ω is the n-th rotational velocity vector estimated value ωn, which is obtained through the k-th rotational velocity vector estimated value ωk of the first device determined from the (k-1)-th translational velocity vector estimated value T_{k-1} of the first device, v, and the position data of the stationary targets in the k-th subset, where T_{k-1} is determined according to v, the position data of the stationary targets in the (k-1)-th subset, and ω_{k-1}, with 2 ≤ k ≤ n; T is the n-th translational velocity vector estimated value Tn, which is obtained through the k-th translational velocity vector estimated value Tk of the first device determined from v, the position data of the stationary targets in the k-th subset, and ωk. This process can also be understood as: determine T_{k-1} according to v, the position data of the stationary targets in the (k-1)-th subset, and ω_{k-1}, where 2 ≤ k ≤ n; determine the k-th rotational velocity vector estimated value ωk of the first device according to T_{k-1}, v, and the position data of the stationary targets in the k-th subset; determine the k-th translational velocity vector estimated value Tk of the first device according to v, the position data of the stationary targets in the k-th subset, and ωk; take Tn as T and ωn as ω.
• This embodiment uses the position data of stationary targets in different data subsets in different iterative steps to determine Tk and ωk; the position data of stationary targets in different subsets are independent of each other, so performing multiple iterations with different subsets makes full use of the information contained in the position data of different stationary targets, and the final estimation accuracy of T and ω is greatly improved.
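A variant of the earlier 2-D sketch in which each iteration draws on its own subset of the stationary-target data. The measurement model u_i = -(T + ω × r_i), the function name, and the equal-size partition are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def refine_with_subsets(r, u, omega0, n_subsets=4):
    """Refine T and omega, using a different subset of stationary-target
    position data in each iteration.

    Assumed model: a stationary target at position r_i appears to move with
    u_i = -(T + omega x r_i), where omega x r_i = omega * [-r_iy, r_ix].
    """
    r = np.asarray(r, float)
    u = np.asarray(u, float)
    # partition the data set into disjoint subsets, one per iteration
    subsets = np.array_split(np.arange(len(r)), n_subsets)
    omega = omega0
    for idx in subsets:
        p = np.column_stack([-r[idx, 1], r[idx, 0]])
        T = np.mean(-u[idx] - omega * p, axis=0)           # T_k from this subset
        omega = -np.sum(p * (u[idx] + T)) / np.sum(p * p)  # omega_k from T_k on the same subset
    return T, omega
```

Because each subset is an independent set of stationary-target measurements, every iteration injects new information rather than refitting the same data.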
  • the data set includes at least three subsets, and the position data of the stationary objects contained in each subset of the at least three subsets may not overlap, or may overlap partially, but not completely.
  • the data set includes the first subset, the second subset, and the third subset.
• the position data of the stationary targets in the second subset can be used to determine T1, the position data of the stationary targets in the first subset can be used to determine ω2, and the position data of the stationary targets in the third subset can be used to determine T2.
• Optionally, ω is the second rotational velocity vector estimated value ω2 of the first device determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary targets in the first subset, where T1 is determined according to v, the position data of the stationary targets in the second subset, and ω1; T is the second translational velocity vector estimated value T2 of the first device determined according to v, the position data of the stationary targets in the third subset, and ω2. This process can also be understood as: determine the first translational velocity vector estimated value T1 according to v, the position data of the stationary targets in the second subset, and ω1; determine the second rotational velocity vector estimated value ω2 of the first device according to T1, v, and the position data of the stationary targets in the first subset; determine the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary targets in the third subset, and ω2; then take T2 as T and ω2 as ω.
  • The process of determining ω2 is the same as the aforementioned process, and ω2 can satisfy the same relational expression; the difference is that the rj used therein is the position data of the stationary targets in the first subset.
  • In this case, T1, T2, and ω2 are not determined using the position data of stationary targets from the same subset.
  • T and ⁇ can also be obtained through n iterations.
  • n is used to represent the number of iterations. In this example, n ⁇ 2, and n is an integer.
  • ω is the n-th rotational velocity vector estimated value ωn; ωn is obtained through the k-th rotational velocity vector estimated value ωk of the first device, determined according to the (k-1)-th translational velocity vector estimated value Tk-1 of the first device, v, and the position data of the stationary targets in the (2k-1)-th subset, where Tk-1 is determined according to v, the position data of the stationary targets in the (2k-2)-th subset, and ωk-1, with 2 ≤ k ≤ n; T is the n-th translational velocity vector estimated value Tn, where Tk, the k-th translational velocity vector estimated value of the first device, is determined according to v, the position data of the stationary targets in the 2k-th subset, and ωk.
  • This process can also be understood as: determine Tk-1 according to v, the position data of the stationary targets in the (2k-2)-th subset, and ωk-1, where 2 ≤ k ≤ n; determine the k-th rotational velocity vector estimated value ωk of the first device according to Tk-1, v, and the position data of the stationary targets in the (2k-1)-th subset; determine the k-th translational velocity vector estimated value Tk of the first device according to v, the position data of the stationary targets in the 2k-th subset, and ωk; finally, Tn is taken as T, and ωn is taken as ω.
  • ωn can satisfy the aforementioned relational expression; the measurement data of part or all of the stationary targets in the data set are used to obtain ωn.
  • In this embodiment, the position data of stationary targets in different data subsets are used when determining Tk and ωk. The position data of stationary targets in different subsets are independent of each other, and using different subsets over multiple iterations makes full use of the information contained in the position data of different stationary targets, so that the final estimation accuracy of T and ω is greatly improved.
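  • As an illustration only, the alternating subset-based refinement described above can be sketched as follows. The per-target model u = −(T + ω×r), for a stationary target at position r with apparent velocity u, and all function names are assumptions introduced for this sketch, not formulas taken from this application.

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix of a, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def alternate_estimation(subsets, omega1, n_rounds=3):
    """Alternately refine (T, omega).  `subsets` is a list of subsets of the
    data set; each subset is a list of (r, u) pairs, where r is a stationary
    target's position vector and u its apparent velocity under the assumed
    model u = -(T + omega x r).  Different subsets are used at different
    iteration steps, as in the embodiment above."""
    omega = np.asarray(omega1, dtype=float)
    T = None
    for k in range(n_rounds):
        # T_k from the current rotation estimate, using one subset.
        sub_t = subsets[(2 * k) % len(subsets)]
        T = np.mean([-u - np.cross(omega, r) for r, u in sub_t], axis=0)
        # omega_{k+1} from T_k, using a different subset:
        # skew(r) @ omega = u + T, solved in the least-squares sense.
        sub_w = subsets[(2 * k + 1) % len(subsets)]
        A = np.vstack([skew(r) for r, _ in sub_w])
        b = np.concatenate([u + T for _, u in sub_w])
        omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return T, omega
```

With exact measurements and an exact initial ω1, the alternation reproduces T and ω after the first round; with noisy data, the different subsets contribute independent information at different steps, as described above.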
  • FIG. 4 is another schematic flowchart of a self-motion estimation method provided by an embodiment of the present application.
  • another embodiment of the method for self-motion estimation provided in the embodiment of the present application may include:
  • Determine the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device according to v, the position data of the stationary targets, T′, and ω1.
  • The solution provided by this embodiment of the present application determines T and ω of the self-motion of the first device according to the v of the second sensor, the position data of the stationary targets, and the T′ and ω1 of the first sensor, and can therefore effectively improve the accuracy of the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device.
  • T is the first translational velocity vector estimated value T1 of the first device, determined according to v, the position data of the stationary target, ω1, and T′, and ω is ω1.
  • Step 303 may include: determining the first translational velocity vector estimated value T1 of the first device according to v, the position data of the stationary target, ω1, and T′, and then taking T1 as T and ω1 as ω.
  • T1, v, ω1, and T′ can satisfy the relational expression T1 = T′(v + r×ω1), from which the first translational velocity vector estimated value T1 is obtained. Here ω×r represents the vector cross product of ω and r, r×ω represents the vector cross product of r and ω, and r represents the position vector of the stationary target.
  • For multiple stationary targets, T1 = T′(v + rik×ω1), where rik is the position vector of the ik-th stationary target among the M targets used, M ≥ 1.
  • For example, T1 can be obtained based on a minimum mean-squared error (MMSE) criterion.
  • the position error covariance can be obtained from the measurement accuracy of the sensor using the existing technology, which will not be further described here.
  • Here r can be one piece of position data, or a position vector obtained from two or more pieces of position data (all of the position data in the data set, or part of it), as described above.
  • ⁇ T′ v+r ⁇ , based on T′, v, ⁇ and r, we can get
  • ⁇ T′ T.
  • ⁇ T′ v+r ⁇ , from v, T′ and ⁇ And the position data r of the stationary target in the data set is obtained.
  • The above T may be obtained by a single calculation, or by two or more iterative calculations.
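  • For illustration, a single-calculation step under the relational expression T = T′(v + r×ω) might look like the following (this reading of the formula, which appears only as an image in the source, is an assumption):

```python
import numpy as np

def translation_estimate(T_prime, v, omega, r):
    """One-shot translational velocity estimate under the assumed relation
    T = T' * (v + r x omega); T_prime is the first sensor's scale estimate,
    v the second sensor's velocity estimate, omega the rotational velocity
    estimate, and r a stationary target's position vector."""
    v, omega, r = (np.asarray(a, dtype=float) for a in (v, omega, r))
    return T_prime * (v + np.cross(r, omega))
```

The same function can be applied repeatedly inside an iterative loop, with ω refreshed between calls, matching the two-or-more-calculation variant above.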
  • ω is the second rotational velocity vector estimated value ω2 of the first device, determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary target; T1 is determined according to v, the position data of the stationary target, ω1, and T′; T is the second translational velocity vector estimated value T2 of the first device, determined according to v, the position data of the stationary target, ω2, and T′.
  • Step 303 may include: determining T1 according to v, the position data of the stationary target, ω1, and T′; determining the second rotational velocity vector estimated value ω2 of the first device according to T1, v, and the position data of the stationary target; determining the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary target, ω2, and T′; and then taking T2 as T and ω2 as ω.
  • T1, v, and the position data of the stationary target determine the second rotational velocity vector estimated value ω2 of the first device; ω2 can satisfy a corresponding relational expression and may be determined from it.
  • The second rotational velocity vector estimated value ω2 can be determined from T1, v, and the position data of multiple (take N as an example, N ≥ 1, and N is an integer) stationary targets in the data set by the corresponding relational expression, in which rik is the position vector of the ik-th stationary target, and the transposition of rik also appears in the expression.
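  • The least-squares flavor of this step can be illustrated as follows. Since the relational expression itself is not reproduced in this text, the residual model skew(ri)·ω = ui + T1, with ui the i-th target's apparent velocity (a quantity introduced here for the sketch), is an assumption:

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix of a, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def estimate_omega(T1, targets):
    """Least-squares rotational velocity from N >= 2 stationary targets.
    `targets` is a list of (r_i, u_i) pairs: position vector and apparent
    velocity of the i-th target, under the assumed per-target model
    u_i = -(T1 + omega x r_i), i.e. skew(r_i) @ omega = u_i + T1."""
    A = np.vstack([skew(np.asarray(r, dtype=float)) for r, _ in targets])
    b = np.concatenate([np.asarray(u, dtype=float) + T1 for _, u in targets])
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega
```

At least two non-collinear target positions are needed for the stacked system to have full rank, which is why multiple targets from the data set are used.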
  • The process of determining the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary target, ω2, and T′ is the same as the aforementioned process of determining T1 according to v, the position data of the stationary target, ω1, and T′.
  • T2 can satisfy the relational expression T2 = T′(v + r×ω2), which can be understood according to the foregoing process; the details are not repeated here.
  • T and ω can also be obtained through n iterations.
  • n is used to represent the number of iterations. In this example, n ≥ 2, and n is an integer.
  • ω is the n-th rotational velocity vector estimated value ωn; ωn is obtained through the k-th rotational velocity vector estimated value ωk of the first device, determined according to the (k-1)-th translational velocity vector estimated value Tk-1 of the first device, v, and the position data of the stationary target, where Tk-1 is determined according to v, the position data of the stationary target, ωk-1, and T′, with 2 ≤ k ≤ n; T is the n-th translational velocity vector estimated value Tn, where Tk, the k-th translational velocity vector estimated value of the first device, is determined according to v, the position data of the stationary target, ωk, and T′.
  • This process can also be understood as: determine Tk-1 according to v, the position data of the stationary target, ωk-1, and T′, where 2 ≤ k ≤ n; determine the k-th rotational velocity vector estimated value ωk of the first device according to Tk-1, v, and the position data of the stationary target; determine the k-th translational velocity vector estimated value Tk of the first device according to v, the position data of the stationary target, ωk, and T′; Tn is taken as T, and ωn is taken as ω.
  • Tk can satisfy the relational expression Tk = T′(v + r×ωk); using the measurement data of some or all of the stationary targets in the data set, Tn is finally obtained.
  • ωk can satisfy the aforementioned relational expression; using the measurement data of some or all of the stationary targets in the data set, ωn is finally obtained.
  • ⁇ T′ v+r ⁇ , using the position data of the stationary target, through ⁇ n-1 obtains T n-1 , so that a more accurate T n-1 can be obtained on the basis of the improvement of the estimation accuracy of ⁇ n- 1; at the same time , a more accurate ⁇ n can be obtained on the basis of the improvement of the accuracy of T n-1 Therefore, the above-mentioned multiple iteration process greatly improves the estimation accuracy of T and ⁇ finally obtained.
  • The position data of the stationary targets used in the iteration process may be the complete set or a subset of the data set. The complete set of the data set can be divided into at least two subsets. The k-th iteration and the l-th iteration (k ≠ l) use different subsets of the data set for calculation, so as to finally obtain Tn and ωn.
  • the position data of the stationary objects contained in each of the at least two subsets may not overlap, or may partially overlap, but not completely overlap.
  • For example, the data set includes the first subset and the second subset. The position data of the stationary targets in the second subset can be used to determine T1, and the position data of the stationary targets in the first subset can be used to determine ω2 and T2.
  • ω is the second rotational velocity vector estimated value ω2 of the first device, determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary targets in the first subset; T1 is determined according to v, the position data of the stationary targets in the second subset, ω1, and T′; T is the second translational velocity vector estimated value T2 of the first device, determined according to v, the position data of the stationary targets in the first subset, ω2, and T′.
  • This process can also be understood as: determine the first translational velocity vector estimated value T1 according to v, the position data of the stationary targets in the second subset, ω1, and T′; determine the second rotational velocity vector estimated value ω2 of the first device according to T1, v, and the position data of the stationary targets in the first subset; determine the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary targets in the first subset, ω2, and T′; then T2 is taken as T and ω2 is taken as ω.
  • T1 can satisfy the relational expression T1 = T′(v + ri×ω1), where ri is the position data of the stationary targets in the second subset.
  • T2 can satisfy the relational expression T2 = T′(v + rj×ω2), where rj is the position data of the stationary targets in the first subset.
  • The process of determining ω2 is the same as the aforementioned process, and ω2 can satisfy the same relational expression; the difference is that the rj used therein is the position data of the stationary targets in the first subset.
  • In this case, the position data of stationary targets in the same subset are used when determining T2 and ω2.
  • T and ω can also be obtained through n iterations.
  • n is used to represent the number of iterations.
  • n is an integer.
  • ω is the n-th rotational velocity vector estimated value ωn; ωn is obtained through the k-th rotational velocity vector estimated value ωk of the first device, determined according to the (k-1)-th translational velocity vector estimated value Tk-1 of the first device, v, and the position data of the stationary targets in the k-th subset, where Tk-1 is determined according to v, the position data of the stationary targets in the (k-1)-th subset, ωk-1, and T′, with 2 ≤ k ≤ n; T is the n-th translational velocity vector estimated value Tn, where Tk, the k-th translational velocity vector estimated value of the first device, is determined according to v, the position data of the stationary targets in the k-th subset, ωk, and T′.
  • This process can also be understood as: determine Tk-1 according to v, the position data of the stationary targets in the (k-1)-th subset, ωk-1, and T′, where 2 ≤ k ≤ n; determine the k-th rotational velocity vector estimated value ωk of the first device according to Tk-1, v, and the position data of the stationary targets in the k-th subset; determine the k-th translational velocity vector estimated value Tk of the first device according to v, the position data of the stationary targets in the k-th subset, ωk, and T′; Tn is taken as T, and ωn is taken as ω.
  • Tk can satisfy the relational expression Tk = T′(v + r×ωk); using the measurement data of some or all of the stationary targets in the data set, Tn is finally obtained.
  • ωk can satisfy the aforementioned relational expression; using the measurement data of some or all of the stationary targets in the data set, ωn is finally obtained.
  • In this embodiment, different iterative steps determine Tk and ωk using the position data of stationary targets in different data subsets. The position data of stationary targets in different subsets are independent of each other, and using different subsets over multiple iterations makes full use of the information contained in the position data of different stationary targets, so that the final estimation accuracy of T and ω is greatly improved.
  • the data set includes at least three subsets, and the position data of the stationary objects contained in each subset of the at least three subsets may not overlap, or may overlap partially, but not completely.
  • the data set includes the first subset, the second subset, and the third subset.
  • The position data of the stationary targets in the second subset can be used to determine T1, the position data of the stationary targets in the first subset can be used to determine ω2, and the position data of the stationary targets in the third subset can be used to determine T2.
  • ω is the second rotational velocity vector estimated value ω2 of the first device, determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary targets in the first subset; T1 is determined according to v, the position data of the stationary targets in the second subset, ω1, and T′; T is the second translational velocity vector estimated value T2 of the first device, determined according to v, the position data of the stationary targets in the third subset, ω2, and T′.
  • This process can also be understood as: determine the first translational velocity vector estimated value T1 according to v, the position data of the stationary targets in the second subset, ω1, and T′; determine the second rotational velocity vector estimated value ω2 of the first device according to T1, v, and the position data of the stationary targets in the first subset; determine the second translational velocity vector estimated value T2 of the first device according to v, the position data of the stationary targets in the third subset, ω2, and T′; then T2 is taken as T and ω2 is taken as ω.
  • T1 can satisfy the relational expression T1 = T′(v + ri×ω1), where ri is the position data of the stationary targets in the second subset.
  • The process of determining ω2 is the same as the aforementioned process, and ω2 can satisfy the same relational expression; the difference is that the rj used therein is the position data of the stationary targets in the first subset.
  • T2 can satisfy the relational expression T2 = T′(v + rs×ω2), where rs is the position data of the stationary targets in the third subset.
  • In this case, T1, T2, and ω2 are not determined using the position data of stationary targets from the same subset.
  • T and ω can also be obtained through n iterations.
  • n is used to represent the number of iterations. In this example, n ⁇ 2, and n is an integer.
  • ω is the n-th rotational velocity vector estimated value ωn; ωn is obtained through the k-th rotational velocity vector estimated value ωk of the first device, determined according to the (k-1)-th translational velocity vector estimated value Tk-1 of the first device, v, and the position data of the stationary targets in the (2k-1)-th subset; T is the n-th translational velocity vector estimated value Tn, where Tk is determined according to v, the position data of the stationary targets in the 2k-th subset, ωk, and T′.
  • This process can also be understood as: determine Tk-1 according to v, the position data of the stationary targets in the (2k-2)-th subset, ωk-1, and T′, where 2 ≤ k ≤ n; determine the k-th rotational velocity vector estimated value ωk of the first device according to Tk-1, v, and the position data of the stationary targets in the (2k-1)-th subset; determine the k-th translational velocity vector estimated value Tk of the first device according to v, the position data of the stationary targets in the 2k-th subset, ωk, and T′; Tn is taken as T, and ωn is taken as ω.
  • Tk can satisfy the relational expression Tk = T′(v + r×ωk); using the measurement data of some or all of the stationary targets in the data set, Tn is finally obtained.
  • ωk can satisfy the aforementioned relational expression; using the measurement data of some or all of the stationary targets in the data set, ωn is finally obtained.
  • In this embodiment, the position data of stationary targets in different data subsets are used when determining Tk and ωk. The position data of stationary targets in different subsets are independent of each other, and using different subsets over multiple iterations makes full use of the information contained in the position data of different stationary targets, so that the final estimation accuracy of T and ω is greatly improved.
  • Step 202 in the embodiment corresponding to FIG. 2 and step 302 in the embodiment corresponding to FIG. 4 can be implemented, for example, by a random sample consensus (RANdom SAmple Consensus, RANSAC) method, or by either of the following two schemes.
  • The estimated value v of the velocity vector can be obtained with any of these schemes. The two schemes are introduced separately below.
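  • The RANSAC option mentioned above can be sketched as follows; the stationary-target Doppler model −Λi·v = v′i and all names are illustrative assumptions rather than the procedure claimed in this application:

```python
import numpy as np

def ransac_velocity(los, radial, n_iter=200, thresh=0.3, seed=0):
    """Estimate the second sensor's velocity v by RANSAC over the assumed
    Doppler equations -Lambda_i . v = v'_i for stationary targets.
    `los` is an (M, 3) array of direction cosine vectors Lambda_i and
    `radial` the M measured radial velocities; moving targets appear as
    outliers and are excluded from the consensus set."""
    rng = np.random.default_rng(seed)
    los = np.asarray(los, dtype=float)
    radial = np.asarray(radial, dtype=float)
    best = np.zeros(len(radial), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(radial), size=3, replace=False)
        try:
            v = np.linalg.solve(los[idx], -radial[idx])  # minimal 3-target fit
        except np.linalg.LinAlgError:
            continue  # degenerate (coplanar) sample
        inliers = np.abs(los @ v + radial) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # Refit on the consensus set.
    v, *_ = np.linalg.lstsq(los[best], -radial[best], rcond=None)
    return v, best
```

Moving targets violate the stationary-target equation by more than the gate `thresh` and are rejected from the consensus set before the final least-squares refit.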
  • In the first scheme, the estimated value v of the motion velocity vector of the second sensor is determined through the azimuth angle θ, the pitch angle φ, and the radial velocity v′.
  • the first solution includes the following steps:
  • Step S12 includes: determine the direction cosine vector Λ of the stationary target relative to the second sensor according to θ and φ; determine the estimated value v of the motion velocity vector of the second sensor according to the direction cosine vector Λ and the radial velocity v′.
  • the measurement data corresponding to the stationary target includes the angle and the radial velocity.
  • In the first scheme, using the azimuth angle θ, the pitch angle φ, and the radial velocity v′ of the stationary target relative to the second sensor, the three-dimensional estimated value v of the motion velocity vector of the second sensor can be determined.
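  • A sketch of the first scheme, assuming the axis convention Λ = (cos φ cos θ, cos φ sin θ, sin φ) for the direction cosine vector and the stationary-target model v′ = −Λ·v (both assumptions; the application's own formulas are not reproduced in this text):

```python
import numpy as np

def direction_cosine(azimuth, pitch):
    """Unit line-of-sight vector for azimuth theta and pitch phi
    (axis convention assumed for illustration)."""
    return np.array([np.cos(pitch) * np.cos(azimuth),
                     np.cos(pitch) * np.sin(azimuth),
                     np.sin(pitch)])

def velocity_from_angles(azimuths, pitches, radial_speeds):
    """Least-squares three-dimensional velocity v of the second sensor from
    M >= 3 stationary targets, assuming v'_i = -Lambda_i . v."""
    A = np.stack([direction_cosine(t, p) for t, p in zip(azimuths, pitches)])
    v, *_ = np.linalg.lstsq(A, -np.asarray(radial_speeds, dtype=float),
                            rcond=None)
    return v
```

At least three targets with non-coplanar lines of sight are needed for the three velocity components to be observable.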
  • In the second scheme, the estimated value v of the motion velocity vector of the second sensor is determined through the three-dimensional position vector (x, y, z), the radial distance s, and the radial velocity v′.
  • the second scheme includes the following steps:
  • Step S22 includes: determine the direction cosine vector Λ of the stationary target relative to the second sensor according to the three-dimensional position vector (x, y, z) of the stationary target relative to the second sensor; determine the estimated value v of the motion velocity vector of the second sensor according to the direction cosine vector Λ and the radial velocity v′.
  • the three-dimensional estimated value of the motion velocity vector of the second sensor can be determined through the three-dimensional position vector of the stationary target relative to the second sensor and the radial velocity v'.
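  • The second scheme differs only in how Λ is formed: it is the three-dimensional position vector (x, y, z) divided by the radial distance s. A sketch under the same assumed stationary-target model v′ = −Λ·v:

```python
import numpy as np

def direction_cosine_from_position(position):
    """Lambda = (x, y, z) / s, with s the radial distance to the target."""
    p = np.asarray(position, dtype=float)
    return p / np.linalg.norm(p)

def velocity_from_positions(positions, radial_speeds):
    """Least-squares velocity v of the second sensor, assuming
    v'_i = -Lambda_i . v for stationary targets."""
    A = np.stack([direction_cosine_from_position(p) for p in positions])
    v, *_ = np.linalg.lstsq(A, -np.asarray(radial_speeds, dtype=float),
                            rcond=None)
    return v
```

This avoids explicit angle measurements when the sensor already reports Cartesian target positions.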
  • the self-motion estimation method provided in the embodiment of the present application can also determine a stationary target and generate a data set.
  • The process may include: determining that the target is a stationary target according to the estimated value of the motion velocity vector of the second sensor relative to the target, the direction cosine vector Λ of the target relative to the second sensor, the radial velocity v′ of the target relative to the second sensor, and the velocity gate threshold VThresh.
  • The position data of the stationary target can also be obtained, and the position data of the stationary targets can be grouped into the data set.
  • The components of the estimated value of the motion velocity vector of the second sensor relative to the target include [vx vy vz].
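  • The velocity-gate test can be sketched as follows; reading the criterion as |v′ + Λ·v| ≤ VThresh (a stationary target's expected radial velocity being −Λ·v) is an assumption made for this illustration:

```python
import numpy as np

def is_stationary(v, direction_cosine, radial_speed, v_thresh):
    """Gate a detection: for a stationary target the expected radial
    velocity is -Lambda . v, so flag the target as stationary when the
    measured radial velocity v' is within V_Thresh of that prediction."""
    residual = radial_speed + float(np.dot(direction_cosine, v))
    return abs(residual) <= v_thresh
```

Detections passing the gate can then have their position data added to the data set used by the estimation steps above.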
  • an embodiment of a device 40 for self-motion estimation provided by an embodiment of the present application includes:
  • the first acquiring unit 401 is configured to acquire the first rotation speed vector estimated value ⁇ 1 of the first sensor.
  • the second acquiring unit 402 is configured to acquire the estimated value v of the motion velocity vector of the second sensor and the data set of the stationary target relative to the reference frame, the data set including the position data of the stationary target.
  • The processing unit 403 is configured to determine the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device according to the v obtained by the second acquiring unit 402, the position data of the stationary target, and the ω1 obtained by the first acquiring unit 401.
  • In this way, the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device can be calculated from the first rotational velocity vector estimated value ω1 of the first sensor, the motion velocity vector estimated value v of the second sensor, and the data set of the stationary targets relative to the reference frame. Compared with an acceleration-accumulation approach, this can effectively improve the accuracy of the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device.
  • the first acquiring unit 401 is further configured to acquire the estimated value T′ of the translational velocity vector scale expansion and contraction of the first sensor.
  • The processing unit 403 determines the translational velocity vector estimated value T and the rotational velocity vector estimated value ω of the self-motion of the first device according to v, the position data of the stationary targets, T′, and ω1.
  • ω1, v, T′, and the position data of the stationary target are data relative to a common coordinate system.
  • T is the first translational velocity vector estimated value T1 of the first device, determined according to v, the position data of the stationary target, and ω1; ω is ω1.
  • ω is the second rotational velocity vector estimated value ω2 of the first device, determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary target; T1 is determined according to v, the position data of the stationary target, and ω1; T is the second translational velocity vector estimated value T2 of the first device, determined according to v, the position data of the stationary target, and ω2.
  • The data set includes at least two subsets, and ω is the second rotational velocity vector estimated value ω2 of the first device, determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary targets in the first subset; T1 is determined according to v, the position data of the stationary targets in the second subset, and ω1; T is the second translational velocity vector estimated value T2 of the first device, determined according to v, the position data of the stationary targets in the first subset, and ω2.
  • The data set includes at least three subsets, and ω is the second rotational velocity vector estimated value ω2 of the first device, determined according to the first translational velocity vector estimated value T1 of the first device, v, and the position data of the stationary targets in the first subset; T1 is determined according to v, the position data of the stationary targets in the second subset, and ω1; T is the second translational velocity vector estimated value T2 of the first device, determined according to v, the position data of the stationary targets in the third subset, and ω2.
  • T, v, and ω can satisfy the relational expression T = T′(v + r×ω), where r is the position data of the stationary target in the data set.
  • T1, v, and ω2 satisfy the aforementioned relational expression, where r is the position data of the stationary target in the data set.
  • T1, v, and ω2 satisfy the aforementioned relational expression, where r is the position data of the stationary targets in the first subset.
  • The second acquiring unit 402 is configured to determine the estimated value v of the motion velocity vector of the second sensor according to the azimuth angle θ, the pitch angle φ, and the radial velocity v′ of the stationary target relative to the second sensor; or, to determine the estimated value v of the motion velocity vector of the second sensor according to the three-dimensional position vector, the radial distance, and the radial velocity v′ of the stationary target relative to the second sensor.
  • The second acquiring unit 402 is configured to determine the estimated value v of the motion velocity vector of the second sensor according to the height H of the second sensor relative to the ground, the radial distance s from the second sensor to the stationary target, and the radial velocity v′.
  • An embodiment of the present application also provides a computer storage medium, wherein the computer storage medium stores a program, and the program executes some or all of the steps recorded in the above method embodiments.
  • The self-motion estimation device may be a chip or other terminal equipment that can implement the functions of this application, or equipment such as a vehicle, a ship, an airplane, a satellite, or a robot.
  • the apparatus for self-motion estimation may include: at least one processor (taking two processors as an example, which may include a processor 501 and a processor 502), a communication line 503, a transceiver 504, and a memory 505.
  • The processor 501 and the processor 502 may each be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
  • Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer execution instructions).
  • the communication line 503 may include a path to transmit information between the aforementioned components.
  • The transceiver 504 may be any transceiver-type device for communicating with other devices or communication networks, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • the transceiver 504 may also be a transceiver circuit or a transceiver, and may include a receiver and a transmitter.
  • The memory 505 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory may exist independently, and is connected to the processor 501 and the processor 502 through a communication line 503.
  • the memory 505 may also be integrated with at least one of the processor 501 and the processor 502.
  • the self-motion estimation apparatus may also include a communication interface 506.
  • the devices described in FIG. 8 may be connected through the communication line 503, or may be connected through other connection methods, which is not limited in the embodiment of the present application.
  • the memory 505 is used to store computer-executable instructions for executing the solution of the present application, and is controlled to be executed by at least one of the processor 501 and the processor 502.
  • the processor 501 and the processor 502 are configured to execute computer-executable instructions stored in the memory 505, so as to implement the self-motion estimation method provided in the foregoing method embodiment of the present application.
  • The aforementioned memory 505 is used to store computer-executable program code, where the program code includes instructions; when at least one of the processor 501 and the processor 502 executes the instructions, at least one of the processor 501 and the processor 502 can perform the actions performed by the processing unit 403 described above.
  • The transceiver 504 or the communication interface 506 in the apparatus for self-motion estimation can perform the operations performed by the first acquiring unit 401 and the second acquiring unit 402 described above.
  • The implementation principles and technical effects of these actions are similar and are not repeated here.
  • the processor 501 and the processor 502 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 8.
  • an embodiment of the present application further provides a sensor system 60, which includes a first sensor 601, a second sensor 602, and a device 603 for performing self-motion estimation in the foregoing method embodiment.
  • The present application also provides a chip system, which includes a processor configured to support the above self-motion estimation device in implementing its related functions, for example, receiving or processing the data and/or information involved in the above method embodiments.
  • the chip system also includes a memory, and the memory is used to store the necessary program instructions and data of the computer equipment.
  • the chip system can be composed of chips, or include chips and other discrete devices.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • Computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • Computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • the disclosed system, device, and method can be implemented in other ways.
  • The device embodiments described above are merely illustrative. For example, the division into units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • The essence of the technical solution of this application, or the part that contributes over the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium.
  • a computer device which can be a personal computer, a server, or a network device, etc.
  • The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs, and other media that can store program code.


Abstract

一种自运动估计的方法及装置(40, 603),自运动估计的方法包括:获取第一传感器(2011,601)的第一转动速度矢量估计值ω 1(101);获取第二传感器(2012,602)的运动速度矢量估计值v和相对于参考系静止的目标的数据集,数据集包括静止目标的位置数据(102);根据运动速度矢量估计值v、静止目标的位置数据和第一转动速度矢量估计值ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω(103)。自运动估计的方法通过第一传感器(2011,601)的第一转动速度矢量估计值ω 1和第二传感器(2012,602)的运动速度矢量估计值v和相对于参考系的静止目标的数据集可以准确确定自运动的平动速度矢量并有效提高转动速度矢量的估计精度,可以应用于辅助驾驶和自动驾驶系统以及配置上述传感器的机器人、无人机、舰载以及星载等系统。

Description

一种自运动估计的方法及装置
本申请要求于2020年03月30日提交中国专利局、申请号为202010236957.1、发明名称为“一种自运动估计的方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及传感器领域,具体涉及一种自运动估计的方法及装置。
背景技术
先进自动驾驶辅助系统(advanced driver-assistance system,ADAS)或者自动驾驶(autonomous driving,AD)系统通常配置多种传感器,例如:雷达(radio detection and ranging,Radar)、声纳、超声波传感器、视觉传感器(如:摄像头)等。这些传感器用于感知周边环境信息。周边环境信息包括运动目标和静止目标,运动目标如车辆、行人,静止目标如障碍物、护栏、路沿、灯杆、周围的树木和建筑物等。
相对于固定位置的传感器,配置于可移动设备上的传感器的运动将造成诸多影响。例如:相对参考系运动的目标和相对参考系静止的目标通常采用不同的方法分析和处理,对相对参考系运动的目标通常需要分类、识别和跟踪。对相对参考系静止的目标通常需要分类和识别,为自动驾驶提供额外信息,如规避障碍物、提供可行驶区域等。传感器的运动将导致无法区分相对参考系运动的目标和相对参考系静止的目标,因此,有必要估计传感器或者其平台的自运动状态,特别是其速度,以补偿上述影响。
在现有技术中,自运动估计通常通过移动设备上的惯性测量单元(inertial measurement unit,IMU)来测量传感器的速度矢量。然而IMU测量的速度矢量通常是基于加速度计测量的加速度得到,测量误差会随时间累积,此外,易受电磁干扰的影响。因此,如何准确地得到自运动估计的结果是目前亟待解决的问题。
发明内容
本申请实施例提供一种自运动估计的方法,用于准确确定自运动的平动速度矢量和转动速度矢量。本申请实施例还提供了相应的装置。
本申请第一方面提供一种自运动(ego-motion)估计的方法,该方法包括:获取第一传感器的第一转动速度矢量估计值ω 1;获取第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集,该数据集包括静止目标的位置数据;根据v、静止目标的位置数据和ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
该第一方面中,第一传感器可以是视觉传感器或者成像传感器等。第二传感器可以是毫米波雷达、激光雷达或者超声波雷达等。第二传感器可以获取静止目标的至少一个速度分量,如径向速度分量。
自运动是传感器或者传感器所在载体或者平台系统的运动。第一装置可以是传感器所在载体或者平台系统,例如:可以是车载、机载、船/舰载、星载、自动化或者智能体系统等可移动设备平台。
该参考系可以是预定义的参照物坐标系,如大地或者星体或者地图等坐标系,或者相对于大地匀速运动的惯性坐标系;所述静止目标可以是周边环境中的物体。
第一转动速度矢量估计值ω 1的三个分量包括横摆角角速度(yaw rate)、俯仰角速度(pitch rate)和滚转角速度(roll rate)。
运动速度矢量估计值v可以是第二传感器的瞬时速度矢量的估计值。
相对于参考系静止的目标的数据集,可以是从第二传感器或者第一传感器得到的测量数据集,或者是通过通信链路(例如云端)从其它传感器得到的测量数据集。其中,数据集中可以包含一个或者多个静止目标;静止目标的位置数据可以是静止目标的直角坐标位置数据、极坐标位置数据或球坐标位置数据。需要指出的是,对于静止目标,其位置数据可以是一个位置数据,或者多个位置数据。对于一个静止目标的多个位置数据可以对应目标的不同部分,此时目标为扩展目标。
第一装置自运动的平动速度矢量估计值T包括第一装置自运动的平动速度的大小和方向信息,可以包括平动速度矢量在直角坐标系三个坐标轴上的分量的估计值。
由上述第一方面可知,通过第一传感器的第一转动速度矢量估计值ω 1、第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集可以得到第一装置的自运动的平动速度矢量估计值T和转动速度矢量估计值ω。因此,该第一方面可以更为准确的确定第一装置自运动的平动速度矢量和转动速度矢量估计值,从而提高自运动的估计精度。
在第一方面的一种可能的实现方式中,该方法还包括:获取第一传感器的平动速度矢量尺度伸缩的估计值T′;所述根据v、静止目标的位置数据和ω 1确定自运动的平动速度矢量估计值T和转动速度矢量估计值ω,包括:根据v、静止目标的位置数据、T′和ω 1确定自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
该种可能的实现方式中,平动速度矢量尺度伸缩(scaled)的估计值T′可以是归一化的平动速度矢量的估计值或者是平动速度矢量按某一比例系数缩放或者加权的估计值。
该种可能的实现方式中,根据第二传感器的v、静止目标的位置数据、第一传感器的T′和ω 1确定自运动的平动速度矢量估计值T和转动速度矢量估计值ω,可以有效提高第一装置自运动的平动速度矢量和转动速度矢量的估计精度。
在第一方面的一种可能的实现方式中,ω 1、v、T′和静止目标的位置数据是相对于一个公共坐标系的数据。
该种可能的实现方式中,公共坐标系可以是第一传感器和第二传感器所在的载体平台的坐标系,例如车载传感器可以选择所在的车体坐标系为其公共坐标系;无人机载传感器可以选择飞机坐标系为其公共坐标系;或者,公共坐标系也可以是其中一个坐标系为公共坐标系,另一个传感器的数据通过坐标转换到公共坐标系;或者,公共坐标系也可以是其它坐标系,如大地坐标系,或者所使用的地图坐标系,或者,导航系统的坐标如NEU(north-east-up,NEU)坐标系等。此处不做进一步限定。
在第一传感器和第二传感器都有各自的坐标系时,获取到的第一传感器的数据和第二传感器的数据都是在各自坐标系中数据,获取到这些数据后需要先对数据进行坐标系的转换,ω 1、v、T′和静止目标的位置数据是经过坐标系转换后的值。若第一传感器和第二传感器位于同一个坐标系时,则无需进行坐标系转换。由该种可能的实现方式可知,ω 1、v、T′和静止目标的位置数据是相对于公共坐标系的数据,可以确保第一装置自运动的平动速度矢量 估计值T和转动速度矢量估计值ω的准确度。
在第一方面的一种可能的实现方式中,所述T为根据v、静止目标的位置数据和ω 1确定的第一装置的第一平动速度矢量估计值T 1,所述ω为ω 1
该种可能的实现方式中,可以理解为:根据v、静止目标的位置数据和ω 1确定第一装置的第一平动速度矢量估计值T 1,然后将T 1作为T,将ω 1作为ω。
由该种可能的实现方式可知,经过一轮计算就得到了T和ω,提高了T和ω计算的效率。
在第一方面的一种可能的实现方式中,ω为根据第一装置的第一平动速度矢量估计值T 1、v和静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,其中,T 1是根据v、静止目标的位置数据和ω 1确定的;T为根据v、静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
该种可能的实现方式,也可以理解为:根据v、静止目标的位置数据和ω 1确定T 1;根据T 1、v和静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、静止目标的位置数据和ω 2确定第一装置的第二平动速度矢量估计值T 2,然后将T 2作为T,将ω 2作为ω。
可以理解的是,该种可能的实现方式中虽然只描述了两次迭代得到平动速度矢量和转动速度矢量的估计值的情况,但本申请中不限于两次迭代,还可以是基于上次迭代的结果进一步迭代,以n次迭代为例,对于第n次迭代,T为T n,ω为ω n。由该种可能的实现方式可知,通过多次迭代,可以得到更为精确的平动速度矢量估计值T和转动速度矢量估计值ω。
在第一方面的一种可能的实现方式中,ω为根据ω 1、v和静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T为根据v、静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
该种可能的实现方式中,可以理解为:根据ω 1、v和静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、静止目标的位置数据和ω 2确定第一装置的第二平动速度矢量估计值T 2,然后将T 2作为T,将ω 2作为ω。
可以理解的是,该种可能的实现方式中虽然只描述了通过两次迭代得到转动速度矢量估计值并基于转动速度矢量估计值得到平动速度矢量估计值的情况,但本申请中不限于两次迭代得到转动速度矢量估计值,还可以是三次或者更多次迭代得到转动速度矢量估计值,以n次迭代为例,第n次迭代,T为T n,ω为ω n。由该种可能的实现方式可知,通过多次迭代得到的转动速度矢量的估计值ω,并基于多次迭代得到的转动的精准度更高。
在第一方面的一种可能的实现方式中,数据集包括至少两个子集,ω为根据第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、第二子集中的静止目标的位置数据和ω 1确定的;T为根据v、第一子集中的静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
该种可能的实现方式,也可以理解为:根据v、第二子集中的静止目标的位置数据和ω 1确定第一平动速度矢量估计值T 1;根据T 1、v和第一子集中的静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、第一子集中的静止目标的位置数据和ω 2确定 第一装置的第二平动速度矢量估计值T 2;然后将T 2作为T,将ω 2作为ω。
该种可能的实现方式中,至少两个子集中各子集包含的静止目标的位置数据可以没有交集,也可以有部分重叠,但不完全重叠。
可以理解的是,该种可能的实现方式中在确定ω 2时也可以使用ω 1,而不需要确定出T 1
可以理解的是,该种可能的实现方式中虽然只描述了两次迭代得到平动速度矢量和转动速度矢量的估计值的情况,但本申请中不限于两次迭代,还可以是三次或者更多次迭代得到转动速度矢量估计值,以n次迭代为例,第n次迭代,T为T n,ω为ω n。由该种可能的实现方式可知,使用不同的子集多次迭代输出的T和ω的精准度更高。
在第一方面的一种可能的实现方式中,数据集包括至少三个子集,ω为根据第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、第二子集中的静止目标的位置数据和ω 1确定的;T为根据v、第三子集中的静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
该种可能的实现方式中,可以理解为:根据v、第二子集中的静止目标的位置数据和ω 1确定第一平动速度矢量估计值T 1;根据T 1、v和第一子集中的静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、第三子集中的静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2;然后将T 2作为T,将ω 2作为ω。
该种可能的实现方式中,至少三个子集中各子集包含的静止目标的位置数据可以没有交集,也可以有部分重叠,但不完全重叠。
可以理解的是,该种可能的实现方式中在确定ω 2时也可以使用ω 1,而不需要确定出T 1
可以理解的是,该种可能的实现方式中虽然只描述了两次迭代得到平动速度矢量和转动速度矢量的估计值的情况,但本申请中不限于两次迭代,还可以是三次或者更多次迭代得到转动速度矢量估计值,以n次迭代为例,第n次迭代,T为T n,ω为ω n。由该种可能的实现方式可知,使用不同的子集多轮迭代输出的T和ω的精准度更高。
在第一方面的一种可能的实现方式中,T、v和ω之间满足关系式v-ω×r=T,其中,r为数据集中静止目标的位置数据。也可以描述为:所述T基于关系式v-ω×r=T,从v和ω以及所述数据集中静止目标的位置数据r得到。
该种可能的实现方式中,r可以是一个位置数据,也可以是两个或多个位置数据,可以是数据集中的全部位置数据,也可以是部分位置数据。v-ω×r=T是可以变形的,例如:在该关系式中的一个、两个或多个参数前面添加系数。结合前面的可能实现方式,可以理解的是:v-ω 1×r=T 1,v-ω 2×r=T 2,也可以表述为v-ω n×r=T n。当然,n取值不同,关系式中的r可以取的值可以相同也可以不相同。由该种可能的实现方式可知,通过该关系式可以快速确定出T。
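上述关系式v-ω×r=T可以直接按矢量叉积计算,下面给出一个最小示意(数值与函数名均为假设,仅用于说明该关系式的用法):

```python
import numpy as np

def translation_from_relation(v, omega, r):
    """根据关系式 T = v - ω×r,由传感器速度矢量 v、转动速度矢量 ω
    和静止目标位置矢量 r 计算平动速度矢量 T(均为三维矢量)。"""
    v = np.asarray(v, dtype=float)
    omega = np.asarray(omega, dtype=float)
    r = np.asarray(r, dtype=float)
    return v - np.cross(omega, r)

# 假设数据:ω 仅含横摆角速度分量 0.1 rad/s
T = translation_from_relation(v=[10.0, 0.5, 0.0],
                              omega=[0.0, 0.0, 0.1],
                              r=[5.0, 2.0, 0.0])
```

对变形式v+r×ω=T,只需相应调整叉积顺序即可。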
在第一方面的一种可能的实现方式中,T、T′、v和ω之间满足关系式v-ω×r=|T|·T′=T,其中,r为数据集中静止目标的位置数据。也可以描述为:T基于关系式v-ω×r=|T|·T′=T,从v和ω以及所述数据集中静止目标的位置数据r得到。
该种可能的实现方式中,r可以是一个位置数据,也可以是两个或多个位置数据,可以 是数据集中的全部位置数据,也可以是部分位置数据。v-ω×r=|T|·T′=T是可以变形的,例如:在该关系式中的一个、两个或多个参数前面添加系数。结合前面的可能实现方式,可以理解的是:v-ω 1×r=|T1|·T′=T 1,v-ω 2×r=|T 2|·T′=T 2,也可以表述为v-ω n×r=|T n|·T′=T n。当然,n取值不同,关系式中的r可以取的值可以相同也可以不相同。由该种可能的实现方式可知,通过该关系式可以快速确定出T。
在第一方面的一种可能的实现方式中,T 1、v和ω 2之间满足关系式[r] ×ω 2=T 1-v,其中,r为数据集中静止目标的位置数据,[r] ×为由r构成的反对称矩阵。也可以描述为:ω 2基于关系式[r] ×ω 2=T 1-v,从v和T 1以及所述数据集中静止目标的位置数据r得到。
该种可能的实现方式中,r可以是一个位置数据,也可以是两个或多个位置数据,可以是数据集中的全部位置数据,也可以是部分位置数据。关系式[r] ×ω 2=T 1-v是可以变形的,例如:在该关系式中的一个、两个或多个参数前面添加系数。结合前面的可能实现方式,可以理解的是:[r] ×ω n=T n-1-v,n为大于2的整数。当然,n取值不同,关系式中的r可以取的值可以相同也可以不相同。由该种可能的实现方式可知,通过该关系式可以快速确定出ω。
在第一方面的一种可能的实现方式中,T 1、v和ω 2之间满足关系式[r] ×ω 2=T 1-v,其中,r为第一子集中静止目标的位置数据,[r] ×为由r构成的反对称矩阵。也可以描述为:ω 2基于关系式[r] ×ω 2=T 1-v,从v和T 1以及所述第一子集中静止目标的位置数据r得到。
该种可能的实现方式中,r可以是第一子集中的一个位置数据,也可以是两个或多个位置数据,可以是第一子集中的全部位置数据,也可以是部分位置数据。关系式[r] ×ω 2=T 1-v是可以变形的,例如:在该关系式中的一个、两个或多个参数前面添加系数。结合前面的可能实现方式,可以理解的是:[r] ×ω n=T n-1-v。当然,n取值不同,关系式中的r可以取的值可以相同也可以不相同。由该种可能的实现方式可知,通过该关系式可以快速确定出ω。
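由上述关系式确定ω,可以示意为对[r] ×ω=T-v(由v-ω×r=T等价变形得到)做最小二乘求解。以下为一个假设性示意,函数名与数值均为举例:

```python
import numpy as np

def skew(r):
    """由位置矢量 r=[x, y, z] 构成反对称矩阵 [r]x,
    满足 skew(r) @ w == np.cross(r, w)。"""
    x, y, z = r
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def estimate_omega(T1, v, positions):
    """按关系式 [r]x·ω = T1 - v,对多个静止目标位置做最小二乘,
    得到转动速度矢量估计值 ω。"""
    A = np.vstack([skew(r) for r in positions])
    b = np.tile(np.asarray(T1, float) - np.asarray(v, float), len(positions))
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

# 假设数据:单个静止目标位置,T1-v 与该目标的 r×ω 一致
omega_est = estimate_omega(T1=[0.2, -0.1, 0.0], v=[0.0, 0.0, 0.0],
                           positions=[[1.0, 2.0, 0.0]])
```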
在第一方面的一种可能的实现方式中,上述步骤:获取第二传感器的运动速度矢量估计值v,包括:根据静止目标相对第二传感器的方位角θ、俯仰角φ和径向速度v′,确定第二传感器的运动速度矢量估计值v;或者,根据静止目标相对第二传感器的三维位置矢量和径向速度v′,确定第二传感器的运动速度矢量估计值v。
可以理解的是,该种可能的实现方式可以表示为:根据方向余弦矢量Λ和径向速度v′,确定第二传感器的运动速度矢量估计值v,该Λ是根据静止目标相对第二传感器的方位角θ和俯仰角φ确定的,或者该Λ是根据静止目标相对第二传感器的三维位置矢量和径向速度v′确定的。
可以理解的是,该种可能的实现方式还可以表示为:获取静止目标相对第二传感器的方位角θ、俯仰角φ和第二传感器相对静止目标的径向速度v′;根据θ和φ确定静止目标相对第二传感器的方向余弦矢量Λ;根据方向余弦矢量Λ和径向速度v′,确定第二传感器的运动速度矢量估计值v。
该种可能的实现方式中,通过静止目标相对第二传感器的方位角θ、俯仰角φ和径向速度v′,可以确定出三维的第二传感器的运动速度矢量估计值v。
可以理解的是,该种可能的实现方式还可以表示为:获取以第二传感器为原点的直角坐标系中第二传感器到静止目标的三个轴向的轴向距离x,y,z和第二传感器相对静止目标的径向速度v′;根据三个轴向的距离x,y,z确定静止目标相对第二传感器的方向余弦矢量Λ;根据Λ和v′,确定第二传感器的运动速度矢量估计值v。
在第一方面的一种可能的实现方式中,该方法还包括:根据第二传感器相对目标物的运动速度矢量估计值、该目标物相对所述第二传感器的方向余弦矢量Λ、该目标物相对第二传感器的径向速度v′,以及速度门限值V Thresh,确定目标物为静止目标。确定目标物为静止目标后,还可以获取该静止目标的位置数据,将该静止目标的位置数据划分到数据集中。
本申请第二方面提供一种获取速度矢量估计值的方法,该方法包括:获取静止目标相对第二传感器的方位角θ、俯仰角φ和第二传感器相对静止目标的径向速度v′;根据静止目标相对第二传感器的方位角θ、俯仰角φ和径向速度v′,确定第二传感器的运动速度矢量估计值v。
该第二方面,通过静止目标相对第二传感器的方位角θ、俯仰角φ和径向速度v′,可以确定出三维的第二传感器的运动速度矢量估计值v。
在第二方面的一种可能的实现方式中,上述步骤:根据静止目标相对第二传感器的方位角θ、俯仰角φ和径向速度v′,确定第二传感器的运动速度矢量估计值v,包括:根据θ和φ确定静止目标相对第二传感器的方向余弦矢量Λ;根据方向余弦矢量Λ和径向速度v′,确定第二传感器的运动速度矢量估计值v。
本申请第三方面提供一种获取速度矢量估计值的方法,该方法包括:获取静止目标相对第二传感器的三维位置矢量、径向距离s和径向速度v′,根据静止目标相对第二传感器的三维位置矢量、径向距离s和径向速度v′,确定第二传感器的运动速度矢量估计值v。
该第三方面,通过静止目标相对第二传感器的三维位置矢量、径向距离s和径向速度v′,可以确定出三维的第二传感器的运动速度矢量估计值v。
在第三方面的一种可能的实现方式中,上述步骤:根据静止目标相对第二传感器的三维位置矢量、径向距离s和径向速度v′,确定第二传感器的运动速度矢量估计值v,包括:根据静止目标相对第二传感器的三维位置矢量和径向距离s确定静止目标相对第二传感器的方向余弦矢量Λ;根据方向余弦矢量Λ和径向速度v′,确定第二传感器的运动速度矢量估计值v。
需要说明的是,本申请实施例中的同一字符有的用了“斜体”,有的不是“斜体”,例如:ω和ω,实际上表达的都是同一个含义。
本申请第四方面提供一种自运动估计的装置,用于执行上述第一方面或第一方面的任意可能的实现方式中的方法。具体地,该自运动估计的装置包括用于执行上述第一方面或第一方面的任意可能的实现方式中的方法的模块或单元。
本申请第五方面提供一种获取速度矢量估计值的装置,用于执行上述第二方面或第二方面的任意可能的实现方式中的方法,或者用于执行上述第三方面或第三方面的任意可能的实现方式中的方法。具体地,该获取速度矢量估计值的装置包括用于执行上述第二方面或第二方面、或者第三方面或第三方面的任意可能的实现方式中的方法的模块或单元。
本申请第六方面提供一种自运动估计的装置,包括:至少一个处理器和至少一个存储器以及存储在存储器中并可在处理器上运行的计算机执行指令,当计算机执行指令被处理器执行时,处理器执行如上述第一方面或第一方面任意一种可能的实现方式的方法。
本申请第七方面提供一种获取速度矢量估计值的装置,包括:至少一个处理器和至少一个存储器以及存储在存储器中并可在处理器上运行的计算机执行指令,当计算机执行指令被处理器执行时,处理器执行如上述第二方面或第二方面的任意可能的实现方式中的方法,或者用于执行上述第三方面或第三方面的任意可能的实现方式中的方法。
本申请第八方面提供一种传感器系统,该传感器系统包括第一传感器、第二传感器和用于执行前述第一方面或第一方面任一种可能的实施方式的自运动估计的装置。
本申请第九方面提供一种传感器系统,该传感器系统包括第二传感器和用于执行前述第二方面或第二方面、或者第三方面或第三方面任一种可能的实施方式的获取速度矢量估计值的装置。
本申请第十方面提供一种承载第九方面的传感器系统的载体,该载体包括第一传感器、第二传感器和用于执行前述第一方面或第一方面任一种可能的实施方式的自运动估计的装置。该载体可以为上述第一方面的第一装置,例如:汽车、摩托车、自行车、无人机、直升机、喷气式飞机、轮船、汽艇、卫星和机器人等。
本申请第十一方面提供一种承载第十方面的传感器系统的载体,该载体包括第二传感器和用于执行前述第二方面或第二方面、或者第三方面或第三方面任一种可能的实施方式的获取速度矢量估计值的装置。
本申请第十二方面提供一种存储一个或多个计算机执行指令的计算机可读存储介质,当计算机执行指令被至少一个处理器执行时,所述至少一个处理器执行如上述第一方面或第一方面任意一种可能的实现方式的方法。
本申请第十三方面提供一种存储一个或多个计算机执行指令的计算机可读存储介质,当计算机执行指令被至少一个处理器执行时,所述至少一个处理器执行如上述第二方面或第二方面的任意可能的实现方式中的方法,或者用于执行上述第三方面或第三方面的任意可能的实现方式中的方法。
本申请第十四方面提供一种存储一个或多个计算机执行指令的计算机程序产品,当计算机执行指令被至少一个处理器执行时,所述至少一个处理器执行上述第一方面或第一方面任意一种可能实现方式的方法。
本申请第十五方面提供一种存储一个或多个计算机执行指令的计算机程序产品,当计算机执行指令被至少一个处理器执行时,所述至少一个处理器执行上述第二方面或第二方面的任意可能的实现方式中的方法,或者用于执行上述第三方面或第三方面的任意可能的实现方式中的方法。
上述第四方面和第六方面所描述的自运动估计的装置也可以是芯片,或者其他具有上述自运动估计的装置功能的组合器件、部件等。
自运动估计的装置中的可以包括通信接口,例如:输入/输出(input/output,I/O)接口,处理单元可以是处理器,例如:中央处理单元(central processing unit,CPU)。
上述第五方面和第七方面所描述的获取速度矢量估计值的装置也可以是芯片,或者其他具有上述获取速度矢量估计值的装置功能的组合器件、部件等。
其中,第四、第六、第八、第十、第十二和第十四方面或者其中任一种可能实现方式所带来的技术效果可参见第一方面或第一方面不同可能实现方式所带来的技术效果,此处不再赘述。
其中,第五、第七、第九、第十一、第十三和第十五方面或者其中任一种可能实现方式所带来的技术效果可参见第二方面、第三方面或第四方面不同可能实现方式所带来的技术效果,此处不再赘述。
本申请实施例提供的方案通过第一传感器的第一转动速度矢量估计值ω 1、第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集可以得到第一装置的自运动的平动速度矢量估计值T和转动速度矢量估计值ω。因此,该第一方面可以更为准确的确定第一装置自运动的平动速度矢量和转动速度矢量估计值,从而提高自运动的估计精度。
附图说明
图1是本申请实施例提供的一种系统架构的示意图;
图2是本申请实施例提供的自运动估计的方法的一实施例示意图;
图3是本申请实施例提供的应用场景的一示例示意图;
图4是本申请实施例提供的自运动估计的方法的另一实施例示意图;
图5是本申请实施例提供的一场景示例示意图;
图6是本申请实施例提供的一场景示例示意图;
图7是本申请实施例提供的自运动估计的装置的一实施例示意图;
图8是本申请实施例提供的自运动估计的装置的另一实施例示意图;
图9是本申请实施例提供的传感器系统的一实施例示意图。
具体实施方式
下面结合附图,对本申请的实施例进行描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
本申请实施例提供一种自运动(ego-motion)估计的方法,用于准确确定自运动的平动速度矢量和转动速度矢量。本申请实施例还提供了相应的装置。以下分别进行详细说明。
本申请实施例提供的自运动估计的方法,该方法可应用于传感器系统或者融合感知系统或者集成上述系统的规划/控制系统如自动驾驶或者智能驾驶领域等,尤其涉及高级辅助 驾驶系统(advanced driver assistance systems,ADAS)。该方法的执行主体可以是软件或者硬件(如与相应传感器通过无线或者有线连接或者集成在一起的装置)、融合感知系统以及各种第一装置等。该自运动可以是传感器或者传感器所在载体或者平台系统的运动。以下不同的执行步骤可以集中式也可以分布式实现。
为了能够更好地理解本申请实施例,下面对本申请实施例可应用的系统架构进行说明。
图1是本申请实施例提供的一种系统架构的示意图。如图1所示,该系统架构中包括传感器平台。传感器平台中配置有第一传感器和第二传感器。该系统架构中还包括自运动估计的装置。其中,该自运动估计的装置可以部署于传感器平台中,即该自运动估计的装置可以与传感器平台集成于一体。或者,该自运动估计的装置可以部署于传感器平台之外,该自运动估计的装置与传感器平台通过无线网络进行通信。图1以自运动估计的装置部署于传感器平台中为例。
其中,传感器的载体或者平台可以为可移动的设备。例如,传感器平台可以为车载系统,如汽车、摩托车或者自行车等。或者,传感器平台可以为船载或者舰载平台,如船只、轮船、汽艇等。或者,传感器平台可以是机载平台,如无人机、直升机或者喷气式飞机、气球等。或者,传感器平台可以为星载平台,如卫星等,传感器平台可以为自动化或者智能体系统如机器人系统等。
传感器或者传感器所在载体或者平台相对于参考系运动,该传感器或者传感器所在载体或者平台周边环境中存在静止目标。以车载或者无人机机载传感器为例,该参考系可以是大地坐标系,或者相对于大地匀速运动的惯性坐标系。该静止目标可以是周边环境中的物体,例如护栏、道路边沿、建筑物、灯杆等。以舰载传感器为例,该静止目标也可以是水面浮标、灯塔、岸边或者岛屿建筑物等。以星载传感器为例,该静止目标可以是相对于恒星或者卫星静止或者匀速运动的参照物如飞船等。同样,智能体系统如机器人系统周边存在的静止目标可以是厂房、建筑、环境中的树木、矿石等。
第一传感器可以是视觉传感器,例如相机或者摄像头,所述第一传感器也可以是成像传感器,例如红外成像传感器或者合成孔径雷达等。
第二传感器可以是毫米波雷达或者激光雷达(light detection and ranging,Lidar),或者超声波雷达,例如声呐。第二传感器可以获取目标的至少一个速度分量。例如采用调频连续波(frequency modulated continuous wave,FMCW)信号的毫米波雷达或者激光雷达或者声呐可以得到目标相对于传感器的径向速度。
上述传感器可以对周围的目标(如:相对参考系的静止目标或相对参考系的运动目标、障碍物、建筑物等)进行测量,得到周围目标的测量数据。例如,以雷达为例,测量数据可以包括目标相对于传感器的距离,方位角和/或俯仰角以及径向速度等。
需要进一步指出的是,此处传感器的物理构成可以是一个或者多个物理传感器。例如,该一个或者多个物理传感器的中各个物理传感器可以分别测量方位角、俯仰角以及径向速度,也可以是从该一个或者多个物理传感器的测量数据导出所述方位角、俯仰角以及径向速度,此处不做限定。
自运动通常可以分解为平动(translation)和转动(rotation)。自运动估计就是要 确定自运动的平动速度矢量和转动速度矢量的估计值。其中,平动速度矢量可以用2维或者3维直角坐标系坐标轴上的各个分量表示。转动速度矢量可以用转动角速度的各个分量表示,可以包括横摆角角速度(yaw rate)、俯仰角速度(pitch rate)和滚转角速度(roll rate)中的一个或者多个。
下面进一步对本申请所提供的自运动估计的方法及装置进行介绍。
请参见图2,图2是本申请实施例提供的一种自运动估计方法的流程示意图。如图2所示,本申请实施例提供的自运动估计方法的可以包括:
101、获取第一传感器的第一转动速度矢量估计值ω 1
102、获取第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集,该数据集包括静止目标的位置数据。
103、根据所述v、静止目标的位置数据和ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
具体的,第一传感器可以为视觉传感器如相机或者摄像头,也可以是成像传感器如红外传感器或者合成孔径雷达等。
以摄像头为例,第一传感器可以获取周边的目标和环境的图像或者视频信息。利用图像或者视频信息,可以得到转动角速度和/或尺度伸缩的平动速度矢量的估计值,例如,基于光流法或者结合相机的数学模型以及多视图几何方法得到,此处不做过多赘述。
具体的,第二传感器可以为毫米波雷达、激光雷达或者超声波雷达等,第二传感器可以获取目标位置以及至少一个速度分量如径向速度测量数据。利用上述测量数据可以得到第二传感器的运动速度矢量估计值v和相对于大地参考系的静止目标的数据集。例如,可以基于随机抽样一致性(Random sample consensus,RANSAC)算法得到第二传感器的运动速度矢量估计值v并确定相对于参考系静止的目标的数据。
作为另一种可选的实施例,利用第二传感器得到物体的位置数据和径向速度测量数据,可以确定来自于静止目标的测量数据集,并利用来自静止目标的数据可以确定第二传感器的瞬时速度矢量的估计值,也就是第二传感器的运动速度矢量估计值v,该方法实现步骤见其后详细描述。
相对于参考系静止的目标的数据集,可以是从第二传感器或者第一传感器得到的测量数据集,或者是通过通信链路(例如云端)从其它传感器得到的测量数据集。其中,数据集中可以包含一个或者多个静止目标;静止目标的位置数据可以是直角坐标、极坐标或球坐标的位置数据。需要指出的是,所述测量数据集可以包括来自点目标的位置数据,也可以包括来自扩展目标的位置数据。其中,一个点目标得到一个位置数据,一个扩展目标可以得到多个位置数据。扩展目标的多个位置数据中,每个位置数据可以对应扩展目标的不同部分。本申请此处不做限定。
由上述实施例可知,通过第一传感器的第一转动速度矢量估计值ω 1、第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集可以得到第一装置的自运动的平动速度矢量估计值T和转动速度矢量估计值ω。可以更为准确的确定第一装置自运动的平动速度矢量和转动速度矢量估计值,从而提高自运动的估计精度。
可选地,上述方法还可以包括:获取第一传感器的平动速度矢量尺度伸缩的估计值T′。
此时,步骤103包括:根据v、所述静止目标的位置数据、T′和ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
所述平动速度矢量尺度伸缩(scaled)的估计值可以是归一化(normalized)的平动速度矢量的估计值或者是平动速度矢量按某一比例系数缩放或者加权的估计值。例如,平动速度矢量的各个分量为T x,T y,T z,归一化的平动速度矢量的估计值可以为αT x,αT y,αT z,其中α为归一化系数或者加权系数,α为小于等于1的正数,例如α可以满足关系式α=1/(T x 2+T y 2+T z 2) 1/2,或者α为其它小于1的正数。需要说明的是,本申请中用于表示关系的各种公式可以涵盖公式的各种可能变形,而不仅仅限定是公式本身。
需要指出的是,该处ω 1、v、T′和静止目标的位置数据可以是相对于一个公共坐标系的数据。公共坐标系可以是第一传感器和第二传感器所在的载体平台的坐标系,以车载传感器为例,该公共坐标系可以是车体坐标系;以无人机载传感器为例,公共坐标系可以是无人机坐标系;或者,该公共坐标系也可以是其中一个传感器的坐标系;或者,该公共坐标系也可以是其它坐标系,如大地坐标系,或者所使用的地图坐标系,或者,导航系统的坐标如(north-east-up,NEU)坐标系等,此处对此不做具体限定。
需要说明的是,通常第一传感器和第二传感器都有各自的坐标系,第一传感器的平动速度矢量和/或转动速度矢量,以及第二传感器的速度矢量或者静止目标的测量数据,如果不是相对于所述公共坐标系定义,可以通过坐标变换,得到相对于公共坐标系定义的数据。本申请实施例对矢量在各个坐标系之间的转换过程不做限定。
为了便于理解上述自运动估计的方法,下面以辅助驾驶或者自动驾驶的应用场景为例,对上述过程进行介绍。
如图3所示,该辅助驾驶或者自动驾驶场景中包括本车201,及其周边的移动目标和静止目标,移动目标如目标车202,静止目标如路灯203、树木204和建筑物205,静止目标还可以包括静止障碍物如停止的车辆、道路边界如护栏等。当然,该图3只是示例,实际上本车周围可以有很多移动目标和静止目标,例如车辆、路灯、树木和建筑物。本车201配置第一传感器2011、第二传感器2012和自运动估计的装置2013。第一传感器2011、第二传感器2012和自运动估计的装置2013通过无线连接方式或有线连接方式连接或者集成在一起。例如,可以通过无线保真(wireless fidelity,Wi-Fi)、紫蜂协议(ZigBee)、蓝牙或者近场通信(near field communication,NFC)等无线方式连接或者集成,或者通过CAN(Controller Area Network)总线等有线方式连接或者集成,本申请实施例对此不做限定。第一传感器2011和第二传感器2012可以分别是安装于本车前端的摄像头和毫米波雷达或者激光雷达,第一传感器2011和第二传感器2012也可以安装于本车的侧端或者后端。安装方式可以是集中式安装,也可以是分布式安装,此处不做限定。
以摄像头为例,第一传感器2011可以得到本车201周围的目标和环境的图像或者视频信息,利用图像或者视频信息可以得到转动速度(也可以称为转动角速度)和/或尺度伸缩的平动速度矢量估计值。
以毫米波雷达或者激光雷达为例,第二传感器2012通过发射和接收毫米波或者激光信号,通过信号处理等方法获取周围环境中目标的位置数据和移动速度信息。利用上述测量数据可以得到第二传感器的运动速度矢量估计值v和相对于大地参考系的静止目标的数据集。
作为另一种可选的实施例,自运动估计的装置2013可以根据第二传感器2012得到物体的位置数据和径向速度测量数据,确定测量数据集合中的数据来自于静止目标,例如:路灯203、树木204和建筑205,或者运动目标,例如:目标车202。利用来自静止目标的数据可以确定第二传感器的瞬时速度矢量的估计值,也就是第二传感器的运动速度矢量估计值v,该方法实现步骤见其后详细描述。
自运动估计的装置2013可以根据v、静止目标的位置数据和ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
进一步地,基于第一传感器的平动速度矢量尺度伸缩的估计值T′,自运动估计装置2013进一步确定转动速度矢量的估计值ω。
与之类似,本申请实施例可以应用于无人机、机器人、星载或者舰载系统,此处不一一列举。
可选的,作为一种实施例,T为根据v、静止目标的位置数据和ω 1确定的第一装置的第一平动速度矢量估计值T 1;ω为ω 1
也可以理解为,上述步骤103可以包括:根据v、静止目标的位置数据和ω 1确定第一装置的第一平动速度矢量估计值T 1,将T 1作为T;将ω 1作为ω。
可选的,作为一种实施例,步骤:根据所述v、所述静止目标的位置数据和ω 1确定所述第一装置的第一平动速度矢量估计值T 1,可以包括:基于关系式T=v-ω×r或者T=v+r×ω得到第一平动速度矢量估计值T 1,其中ω×r表示ω与r的矢量叉积(cross product),r×ω表示r与ω的矢量叉积(cross product),r表示静止目标的位置矢量。
具体地,T 1可以根据数据集中的一个静止目标的位置数据得到,如:T 1=v-ω 1×r i,或者T 1=v+r i×ω 1其中,r i=[x i y i z i] T是第i个静止目标的位置矢量。
具体地,T 1也可以根据数据集中的M个静止目标的位置数据得到,如T 1=v-ω 1×r,或者T 1=v+r×ω 1,其中r可以取为M个位置矢量的融合值,例如
r=(1/M)∑ k=1 M r i k,
r i k=[x i k y i k z i k] T是第i k个静止目标的位置矢量,M≥1。
具体地,T 1基于M个静止目标的位置数据得到,也可以是根据最小二乘法或者最小均方误差(minimum mean-squared error,MMSE)准则得到,例如:T 1满足关系式T 1=v-ω 1×r或者T 1=v+r×ω 1,其中
r=(∑ k=1 M C i k -1) -1∑ k=1 M C i k -1r i k,
r i k=[x i k y i k z i k] T是第i k个静止目标的位置矢量,M≥1。
其中,C i k为第i k个静止目标的位置矢量测量误差的协方差。从传感器的测量精度得到位置误差协方差可以采用现有技术,此处不进一步赘述。
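按最小均方误差准则利用多个静止目标位置得到T 1的过程,可以示意如下;其中按协方差逆加权融合位置矢量的具体写法是一种常见的假设性实现,并非对实施方式的限定:

```python
import numpy as np

def fuse_positions(positions, covs):
    """按协方差逆加权融合多个位置矢量:
    r = (Σ C_k^-1)^-1 · Σ C_k^-1 · r_k。"""
    info = np.zeros((3, 3))
    vec = np.zeros(3)
    for r, C in zip(positions, covs):
        Cinv = np.linalg.inv(C)
        info += Cinv
        vec += Cinv @ np.asarray(r, dtype=float)
    return np.linalg.solve(info, vec)

def estimate_T1(v, omega1, positions, covs):
    """先融合位置,再按 T1 = v - ω1×r 得到第一平动速度矢量估计值。"""
    r = fuse_positions(positions, covs)
    return np.asarray(v, dtype=float) - np.cross(omega1, r)
```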
上述实施例描述的方案中,T、v和ω之间满足关系式v-ω×r=T,或者满足关系式v+r×ω=T,其中,r为数据集中静止目标的位置数据。也可以描述为:T基于关系式v-ω×r=T,或者基于关系式v+r×ω=T,从v和ω以及所述数据集中静止目标的位置数据r得到。
可选的,作为另一种实施例,ω为根据第一装置的第一平动速度矢量估计值T 1、v和静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,其中,T 1是根据v、静止目标的位置数据和ω 1确定的;T为根据v、静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
该过程也可以理解为:根据v、静止目标的位置数据和ω 1确定T 1;根据T 1、v和静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、静止目标的位置数据和ω 2确定第一装置的第二平动速度矢量估计值T 2;将T 2作为T,将ω 2作为ω。
其中,关于根据v、静止目标的位置数据和ω 1确定T 1的过程可以参阅前述描述进行理解,此处不再重复。
根据T 1、v和静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2,可以根据关系式v-ω×r=T或者v+r×ω=T,或者等价地,[r] ×ω=T-v来确定,其中[r] ×是由位置矢量r构成的反对称矩阵。
通过关系式[r] ×ω 2=T 1-v确定ω 2可以包括通过一个静止目标的位置数据来确定或者通过多个静止目标的位置数据来确定,下面分别进行描述。
具体地,第二转动速度矢量估计值ω 2可以由T 1、v和数据集中的一个静止目标的位置数据,通过ω 2满足的关系式[r i] ×ω 2=T 1-v确定出,其中
[r i] ×=[0 -z i y i; z i 0 -x i; -y i x i 0],
r i=[x i y i z i] T是第i个静止目标的位置矢量,其中,[x i y i z i] T表示第i个静止目标的位置矢量的转置。
或者第二转动速度矢量估计值ω 2可以由T 1、v和数据集中的多个(以N个为例,N≥1,且N为整数)静止目标的位置数据,通过ω 2满足的关系式[r i k] ×ω 2=T 1-v,k=1,…,N(例如按最小二乘法或者最小均方误差准则)确定出,其中[r i k] ×是由第i k个静止目标的位置矢量r i k=[x i k y i k z i k] T构成的反对称矩阵,[x i k y i k z i k] T表示第i k个静止目标的位置矢量的转置。
步骤:根据v、静止目标的位置数据和ω 2确定第一装置的第二平动速度矢量估计值T 2,与前面所描述的根据v、静止目标的位置数据和ω 1确定第一装置的第一平动速度矢量估计值T 1步骤类似,通过关系式T 2=v-ω 2×r或者T 2=v+r×ω 2来确定,此处不进一步赘述。
当然,不限于第二次就输出T和ω。可以是继续迭代第三次、第四次或者更多,下面用n表示迭代的次数,该示例中n≥2,且n为整数。
可选的,作为另一种实施例,ω为第n转动速度矢量估计值ω n,ω n通过第一装置的第k-1平动速度矢量估计值T k-1、v和静止目标的位置数据确定的第一装置的第k转动速度矢量估计值ω k得到,其中,T k-1是根据v、静止目标的位置数据和ω k-1确定的,其中2≤k≤n;确定T为第n平动速度矢量估计值T n,T n通过v、静止目标的位置数据和ω k确定的第一装置的第k平动速度矢量估计值T k得到。
该过程也可以理解为:根据v、静止目标的位置数据和ω k-1确定T k-1,其中2≤k≤n;根据T k-1、v和静止目标的位置数据确定第一装置的第k转动速度矢量估计值ω k;根据v、静止目标的位置数据和ω k确定第一装置的第k平动速度矢量估计值T k;将T n作为T,将ω n作为ω。
具体地,与前述类似,可以基于关系式T k=v-ω k×r或者T k=v+r×ω k,利用数据集中部分或者全部静止目标的测量数据最终得到T n
具体地,与前述类似,可以基于关系式[r] ×ω k=T k-1-v,利用数据集中部分或者全部静止目标的测量数据最终得到ω n
本申请实施例根据关系式v-ω×r=T或者v+r×ω=T,利用静止目标的位置数据,通过ω k-1得到ω k,从而可以充分利用上述关系和静止目标的数据,在ω k-1估计精度不断提高的基础上,最终得到更为准确的ω n;此外在更为准确的ω n基础上利用上述关系得到更为精确的估计T n,因此,上述多次迭代过程使得最终得到的T和ω的估计精度大大提高。
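上述交替迭代过程可以示意如下;迭代次数、以位置均值代表r等均为示例性假设,并非对实施方式的限定:

```python
import numpy as np

def skew(r):
    """由位置矢量 r=[x, y, z] 构成反对称矩阵 [r]x。"""
    x, y, z = r
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def iterate_T_omega(v, omega1, positions, n_iter=3):
    """交替迭代:先由 T_k = v - ω_k×r̄ 更新平动速度矢量,
    再按 [r]x·ω = T_k - v 对静止目标位置做最小二乘更新 ω。"""
    v = np.asarray(v, dtype=float)
    rbar = np.mean(np.asarray(positions, dtype=float), axis=0)
    omega = np.asarray(omega1, dtype=float)
    for _ in range(n_iter):
        T = v - np.cross(omega, rbar)
        A = np.vstack([skew(r) for r in positions])
        b = np.tile(T - v, len(positions))
        omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return T, omega
```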
需要说明的是,上述ω 2的迭代过程不限于使用T 1这一种,还可以是直接从使用ω 1迭代得到。具体地,ω为根据ω 1、v和静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T为根据v、静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
该过程也可以理解为:根据ω 1、v和静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、静止目标的位置数据和ω 2确定第一装置的第二平动速度矢量估计值T 2,然后将T 2作为T,将ω 2作为ω。
其中,根据ω 1、v和静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2,可以是根据关系式[r i] ×ω 2=[r j] ×ω 1来确定。
具体地,第二转动速度矢量估计值ω 2可以由ω 1、v和数据集中的一个静止目标的位置数据,根据以下关系式给出:[r i] ×ω 2=[r j] ×ω 1,i=j或者i≠j,其中[r i] ×和r j的含义都可以参阅前述相关描述进行理解,此处不再重复赘述。
或者第二转动速度矢量估计值ω 2可以由ω 1、v和数据集中的多个(以N个为例,N≥1,且N为整数)静止目标的位置数据,根据以下关系式给出:[r i k] ×ω 2=[r j l] ×ω 1,其中[r i k] ×和r j l的含义都可以参阅前述相关描述进行理解,其中i k,k=1,…,N≥1与j l,l=1,…,M≥1,相同或者不同或者部分相同,此处不再重复赘述。
步骤根据v、静止目标的位置数据和ω 2确定第一装置的第二平动速度矢量估计值T 2,与前面所描述的根据v、静止目标的位置数据和ω 1确定第一装置的第一平动速度矢量估计值T 1步骤类似,满足关系式T 2=v-ω 2×r或者T 2=v+r×ω 2,也就是通过这两个关系式可以来确定T 2,此处不进一步赘述。
当然,不限于两次迭代得到T和ω。不失去一般性,也可以通过n次迭代得到T和ω,下面用n表示迭代的次数,该示例中n≥2,且n为整数。
可选的,作为另一种实施例,ω为第n转动速度矢量估计值ω nn通过第一装置的第k-1转动速度矢量估计值ω k-1、v和静止目标的位置数据确定的第一装置的第k转动速度矢量估计值ω k得到,其中,ω k-1是根据v、静止目标的位置数据和ω k-2确定的,其中2≤k≤n;T为第n平动速度矢量估计值T n,T n通过v、静止目标的位置数据和ω k确定的第一装置的第k平动速度矢量估计值T k得到。
该过程也可以理解为:根据ω k-1、v和静止目标的位置数据确定第一装置的第k转动速度矢量估计值ω k,其中2≤k≤n;根据v、静止目标的位置数据和ω k确定第一装置的第k平动速度矢量估计值T k,2≤k≤n;将T n作为T,将ω n作为ω。
具体地,与前述类似,满足关系式T k=v-ω k×r或者T k=v+r×ω k,利用数据集中部分或者全部静止目标的测量数据最终得到T n
同样,与前述类似,满足关系式[r i] ×ω k=[r j] ×ω k-1,利用数据集中部分或者全部静止目标的测量数据最终得到ω n。将T n作为T,将ω n作为ω。
通过上述迭代方案可知,通过多次迭代输出的T和ω的精准度更高。
本申请实施例根据关系式v-ω×r=T或者v+r×ω=T,利用静止目标的位置数据,通过ω k-1得到ω k,从而可以充分利用上述关系和静止目标的数据,在ω k-1估计精度不断提高的基础上,最终得到更为准确的ω n;此外在更为准确的ω n基础上利用上述关系得到更为精确的估计T n,因此,上述多次迭代过程使得最终得到的T和ω的估计精度大大提高。
以上所描述方案,在迭代过程中使用的静止目标的位置数据是数据集的全集或者子集,实际上,可以将数据集的全集划分为至少两个子集,在不同的迭代步骤,例如第k次迭代和第l次迭代,k≠l,使用不同的数据集的子集计算,从而最终得到T n和ω n。至少两个子集中各子集包含的静止目标的位置数据可以没有交集,也可以有部分重叠,但不完全重叠。
以迭代两次的情况为例,数据集中包括第一子集和第二子集,可以使用第二子集中的静止目标的位置数据确定T 1,使用第一子集中的静止目标的位置数据确定ω 2和T 2
可选的,作为一种实施例,当数据集包括至少两个子集时,ω为根据第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、第二子集中的静止目标的位置数据和ω 1确定的;T为根据v、第一子集中的静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
该过程也可以理解为:根据v、第二子集中的静止目标的位置数据和ω 1确定第一平动速度矢量估计值T 1;根据T 1、v和第一子集中的静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、第一子集中的静止目标的位置数据和ω 2确定第一装置的第二平动速度矢量估计值T 2;然后将T 2作为T,将ω 2作为ω。
该实施例中确定T 1可以满足关系式T 1=v-ω 1×r i或者T 1=v+r i×ω 1,其中,r i是第二子集中的静止目标的位置数据。
确定T 2可以满足关系式T 2=v-ω 2×r j或者T 2=v+r j×ω 2,其中,r j是第一子集中的静止目标的位置数据。
确定ω 2的过程与前述过程相同,ω 2可以满足关系式[r j] ×ω 2=T 1-v,只是其中使用的r j是第一子集中的静止目标的位置数据。
该种可能的实施例中,确定T 2和ω 2时使用的是同一个子集中静止目标的位置数据。
当然,该种使用不同子集迭代的场景中,在确定ω 2时也可以使用ω 1,而不需要确定出T 1。具体使用ω 1确定ω 2的过程可以参阅前述使用关系式[r i] ×ω k=[r j] ×ω k-1确定ω n的过程进行理解,此处不再重复赘述。
当然,不限于两次迭代得到T和ω。不失去一般性,也可以通过n次迭代得到T和ω,下面用n表示迭代的次数,该示例中n≥2,且n为整数。
可选的,作为另一种实施例,当数据集包括n个子集时,ω为第n转动速度矢量估计值ω n,ω n通过第一装置的第k-1平动速度矢量估计值T k-1、v和第k子集中的静止目标的位置数据确定的第一装置的第k转动速度矢量估计值ω k得到,其中,T k-1是根据v、第k-1子集中静止目标的位置数据和ω k-1确定的,其中2≤k≤n;T为第n平动速度矢量估计值T n,T n通过v、第k子集中的静止目标的位置数据和ω k确定的第一装置的第k平动速度矢量估计值T k得到。
该过程也可以理解为:根据v、第k-1子集中的静止目标的位置数据和ω k-1确定T k-1,其中2≤k≤n;根据T k-1、v和第k子集中的静止目标的位置数据确定第一装置的第k转动速度矢量估计值ω k;根据v、第k子集中的静止目标的位置数据和ω k确定第一装置的第k平动速度矢量估计值T k;将T n作为T,将ω n作为ω。
具体地,与前述类似,可以满足关系式T k=v-ω k×r或者T k=v+r×ω k,利用数据集中部分或者全部静止目标的测量数据,最终得到T n
同样,与前述类似,可以满足关系式[r] ×ω k=T k-1-v,利用数据集中部分或者全部静止目标的测量数据,最终得到ω n。将T n作为T,将ω n作为ω。
与前述实施例的不同,本实施例在不同的迭代步骤,确定T k和ω k使用不同的数据子集中静止目标的位置数据,通常子集中静止目标的位置数据相互独立,使用不同的子集多次迭代可以充分利用不同静止目标的位置数据包含的信息,从而使得最终得到的T和ω的估计精度大大提高。
上述所描述的使用不同子集迭代不同次的T n和ω n,还可以在迭代T n和ω n时也使用不同的子集。该种情况下,数据集包括至少三个子集,至少三个子集中各子集包含的静止目标的位置数据可以没有交集,也可以有部分重叠,但不完全重叠。
以迭代两次的情况为例,数据集中包括第一子集、第二子集和第三子集,可以使用第二子集中的静止目标的位置数据确定T 1,使用第一子集中的静止目标的位置数据确定ω 2,使用第三子集中的静止目标的位置数据确定T 2
可选的,作为一种实施例,当数据集包括至少三个子集时,ω为根据第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、第二子集中的静止目标的位置数据和ω1确定的;T为根据v、第三子集中的静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
该过程也可以理解为:根据v、第二子集中的静止目标的位置数据和ω1确定第一平动速度矢量估计值T 1;根据T 1、v和第一子集中的静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、第三子集中的静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2;然后将T 2作为T,将ω 2作为ω。
该实施例中,T 1可以满足关系式T 1=v-ω 1×r i或者T 1=v+r i×ω 1,其中,r i是第二数据集中的静止目标的位置数据。
确定ω 2的过程与前述过程相同,ω 2可以满足关系式[r j] ×ω 2=T 1-v,只是其中使用的r j是第一子集中的静止目标的位置数据。
T 2可以满足关系式T 2=v-ω 2×r s或者T 2=v+r s×ω 2,其中,r s是第三子集中的静止目标的位置数据。
该种可能的实施例中,确定T 1、T 2和ω 2时使用的都不是同一个子集中静止目标的位置数据。
当然,该种使用不同子集迭代的场景中,在确定ω 2时也可以使用ω 1,而不需要确定出T 1。具体使用ω 1确定ω 2的过程可以参阅前述使用关系式[r i] ×ω k=[r j] ×ω k-1确定ω n的过程进行理解,此处不再重复赘述。
当然,不限于两次迭代得到T和ω。不失去一般性,也可以通过n次迭代得到T和ω,下面用n表示迭代的次数,该示例中n≥2,且n为整数。
可选的,作为另一种实施例,当数据集包括2n个子集时,ω为第n转动速度矢量估计值ω n,ω n通过第一装置的第k-1平动速度矢量估计值T k-1、v和第2k-1子集中的静止目标的位置数据确定的第一装置的第k转动速度矢量估计值ω k得到,其中,T k-1是根据v、第2k-2子集中静止目标的位置数据和ω k-1确定的,其中2≤k≤n;确定T为第n平动速度矢量估计值T n,T n通过v、第2k子集中的静止目标的位置数据和ω k确定的第一装置的第k平动速度矢量估计值T k得到。
该过程也可以理解为:根据v、第2k-2子集中的静止目标的位置数据和ω k-1确定T k-1,其中2≤k≤n;根据T k-1、v和第2k-1子集中的静止目标的位置数据确定第一装置的第k转动速度矢量估计值ω k;根据v、第2k子集中的静止目标的位置数据和ω k确定第一装置的第k平动速度矢量估计值T k;将T n作为T,将ω n作为ω。
具体地,与前述类似,T n可以满足关系式T n=v-ω n×r或者T n=v+r×ω n,利用数据集中部分或者全部静止目标的测量数据得到T n
同样,与前述类似,ω n可以满足关系式[r] ×ω n=T n-1-v,利用数据集中部分或者全部静止目标的测量数据得到ω n
与前述实施例的不同,本实施例在确定T k和ω k时使用的都是不同的数据子集中静止目标的位置数据,通常子集中静止目标的位置数据相互独立,使用不同的子集多次迭代可以充分利用不同静止目标的位置数据包含的信息,从而使得最终得到的T和ω的估计精度大大提高。
请参见图4,图4是本申请实施例提供的一种自运动估计方法的另一流程示意图。如图4所示,本申请实施例还提供的自运动估计的方法的另一实施例可以包括:
301、获取第一传感器的第一转动速度矢量估计值ω 1和第一传感器的平动速度矢量尺度伸缩的估计值T′。
302、获取第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集,该数据集包括静止目标的位置数据。
303、根据v、静止目标的位置数据、T′和ω 1确定第一装置自运动的平动速度矢量估计 值T和转动速度矢量估计值ω。
本申请实施例中涉及到特征在前述实施例中都有介绍,可以参阅前述的内容进行理解,此处不再重复赘述。
本申请实施例提供的方案根据第二传感器的v、静止目标的位置数据、第一传感器的T′和ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω,可以有效的提高第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω的准确度。
可选的,作为一种实施例,T为根据v、静止目标的位置数据、ω1和T′确定的第一装置的第一平动速度矢量估计值T 1,ω为ω 1
也可以理解为,上述步骤303可以包括:根据v、静止目标的位置数据、ω 1和T′确定的第一装置的第一平动速度矢量估计值T 1,然后将T 1作为T,将ω 1作为ω。
可选的,作为一种实施例,根据v、静止目标的位置数据、ω 1和T′确定的第一装置的第一平动速度矢量估计值T 1,可以包括:基于关系式T=|T|·T′=v-ω×r,或者,T=|T|·T′=v+r×ω得到第一平动速度矢量估计值T1。ω×r表示ω与r的矢量叉积(cross product),r×ω表示r与ω的矢量叉积(cross product),r表示静止目标的位置矢量。
具体地,T 1可以根据所述数据集中的一个静止目标的位置数据得到,如T 1满足关系式T 1=|T 1|·T′=v-ω 1×r i,或者T 1=|T 1|·T′=v+r i×ω 1其中,r i=[x i y i z i] T是第i个静止目标的位置矢量。
具体地,T 1也可以根据所述数据集中的M个静止目标的位置数据得到,如T 1满足关系式T 1=|T 1|·T′=v-ω 1×r,或者T 1=|T 1|·T′=v+r×ω 1,其中r可以取为M个位置矢量的融合值,例如
r=(1/M)∑ k=1 M r i k,
r i k=[x i k y i k z i k] T是第i k个静止目标的位置矢量,M≥1。
具体地,T 1基于M个静止目标的位置数据得到,也可以是根据最小二乘法或者最小均方误差(minimum mean-squared error,MMSE)准则得到,例如:T 1满足关系式T 1=|T 1|·T′=v-ω 1×r或者T 1=|T 1|·T′=v+r×ω 1,其中
r=(∑ k=1 M C i k -1) -1∑ k=1 M C i k -1r i k,
r i k=[x i k y i k z i k] T是第i k个静止目标的位置矢量,M≥1。
其中,C i k为第i k个静止目标的位置矢量测量误差的协方差。从传感器的测量精度得到位置误差协方差可以采用现有技术,此处不进一步赘述。
当然,关系式T=|T|·T′=v-ω×r,或者,T=|T|·T′=v+r×ω是可以变形的,例如:在该关系式中的一个、两个或多个参数前面添加系数。r可以是一个位置数据,也可以是从两个或多个位置数据(数据集中的全部位置数据,或者部分位置数据)得到的位置矢量,如前述。
T满足关系式T=|T|·T′=v-ω×r,或者,T=|T|·T′=v+r×ω,基于T′、v、ω和r,可以得到|T|;通过|T|·T′=T可以得到T。
上述图4所对应实施例描述的方案中,T、T′、v和ω之间满足关系式T=|T|·T′=v-ω×r,或者,T=|T|·T′=v+r×ω,其中,r为数据集中静止目标的位置数据。也可以描述为:所述T基于关系式T=|T|·T′=v-ω×r,或者,T=|T|·T′=v+r×ω,从v、T′和ω以及所述数据集中静止目标的位置数据r得到。
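当T′为单位方向矢量时,基于关系式T=|T|·T′=v-ω×r,可以按最小二乘沿T′方向求出尺度|T|。以下为该思路的一个假设性示意(数值为假设数据):

```python
import numpy as np

def scale_from_direction(Tp, v, omega, r):
    """由尺度伸缩方向矢量 T' 与右端 v-ω×r 按最小二乘求尺度 |T|,
    返回 T = |T|·T'。"""
    Tp = np.asarray(Tp, dtype=float)
    rhs = np.asarray(v, dtype=float) - np.cross(omega, r)
    scale = Tp @ rhs / (Tp @ Tp)  # 最小化 ||scale·T' - rhs||
    return scale * Tp
```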
上述计算T的过程可以是一次计算得到的,也可以是通过两次或多次迭代计算得到的。
可选的,作为一种实施例,ω为根据第一装置的第一平动速度矢量估计值T 1、v和静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、静止目标的位置数据、ω 1和T′确定的;T为根据v、静止目标的位置数据、ω 2和T′确定的第一装置的第二平动速度矢量估计值T 2
该过程也可以理解为,上述步骤303可以包括:根据v、静止目标的位置数据、ω 1和T′确定T 1;根据T 1、v和静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、静止目标的位置数据、ω 2和T′确定第一装置的第二平动速度矢量估计值T 2,然后将T 2作为T,将ω 2作为ω。
其中,关于根据v、静止目标的位置数据和ω 1确定T 1的过程可以参阅前述描述进行理解,此处不再重复。
根据T 1、v和静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2,可以满足关系式[r] ×ω 2=T 1-v,则可以通过该ω 2关系式来确定ω 2
通过关系式[r] ×ω 2=T 1-v来确定ω 2可以包括,通过一个静止目标的位置数据来确定ω 2或者通过多个静止目标的位置数据来确定ω 2,下面分别进行描述。
具体地,第二转动速度矢量估计值ω 2可以由T 1、v和数据集中的一个静止目标的位置数据,根据以下关系式给出:[r i] ×ω 2=T 1-v,其中
[r i] ×=[0 -z i y i; z i 0 -x i; -y i x i 0],
r i=[x i y i z i] T是第i个静止目标的位置矢量,其中,[x i y i z i] T表示第i个静止目标的位置矢量的转置。
或者第二转动速度矢量估计值ω 2可以由T 1、v和数据集中的多个(以N个为例,N≥1,且N为整数)静止目标的位置数据,通过以下关系式确定出:[r i k] ×ω 2=T 1-v,k=1,…,N,其中[r i k] ×是由第i k个静止目标的位置矢量r i k=[x i k y i k z i k] T构成的反对称矩阵,[x i k y i k z i k] T表示第i k个静止目标的位置矢量的转置。
所述根据v、静止目标的位置数据、ω 2和T′确定第一装置的第二平动速度矢量估计值T 2与如前所述的根据v、静止目标的位置数据、ω 1和T′确定T 1步骤类似,T 2满足关系式T 2=|T 2|·T′=v-ω 2×r,或者T 2=|T 2|·T′=v+r×ω 2,可以根据前述的过程进行理解,此处不重复赘述。
当然,不限于两次迭代得到T和ω。不失去一般性,也可以通过n次迭代得到T和ω,下面用n表示迭代的次数,该示例中n≥2,且n为整数。
可选的,作为另一种实施例,ω为第n转动速度矢量估计值ω n,ω n通过第一装置的第k-1平动速度矢量估计值T k-1、v和静止目标的位置数据确定的第一装置的第k转动速度矢量估计值ω k得到,其中,T k-1是根据v、静止目标的位置数据、ω k-1和T′确定的,其中2≤k≤n;T为根据v、静止目标的位置数据、ω k和T′确定的第一装置的第k平动速度矢量估计值T k
该过程也可以理解为:根据v、静止目标的位置数据、ω k-1和T′确定T k-1,其中2≤k≤n;根据T k-1、v和静止目标的位置数据确定第一装置的第k转动速度矢量估计值ω k;根据v、静止目标的位置数据、ω k和T′确定第一装置的第k平动速度矢量估计值T k,将T n作为T,将ω n作为ω。
具体地,与前述类似,T k可以满足关系式T k=|T k|·T′=v-ω k×r或者T k=|T k|·T′=v+r×ω k,利用数据集中部分或者全部静止目标的测量数据最终得到T n
同样,与前述类似,ω k可以满足关系式[r] ×ω k=T k-1-v,利用数据集中部分或者全部静止目标的测量数据最终得到ω n
本申请实施例中,T满足关系式T=|T|·T′=v-ω×r,或者,T=|T|·T′=v+r×ω,利用静止目标的位置数据,通过ω n-1得到T n-1,从而可以在ω n-1估计精度不断提高的基础上得到更为准确的T n-1;同时在T n-1精度提高的基础上得到更为精确的ω n,因此,上述多次迭代过程使得最终得到的T和ω的估计精度大大提高。
需要说明的是,上述ω k的迭代过程不限于使用T k-1这一种,还可以是直接从使用ω k-1迭代得到。具体过程可以参阅前述实施例中的相应描述进行理解,此处不再重复赘述。
以上所描述方案,在迭代过程中使用的静止目标的位置数据是数据集的全集或者子集,实际上,可以将数据集的全集划分为至少两个子集,在不同的迭代步骤,例如第k次迭代和第l次迭代,k≠l,使用不同的数据集的子集计算,从而最终得到T n和ω n。至少两个子集中各子集包含的静止目标的位置数据可以没有交集,也可以有部分重叠,但不完全重叠。
以迭代两次的情况为例,数据集中包括第一子集和第二子集,可以使用第二子集中的静止目标的位置数据确定T 1,使用第一子集中的静止目标的位置数据确定ω 2和T 2
可选的,作为一种实施例,当数据集包括至少两个子集时,ω为根据第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、第二子集中的静止目标的位置数据、ω 1和T′确定的;T为根据v、第一子集中的静止目标的位置数据、ω 2和T′确定的第一装置的第二平动速度矢量估计值T 2
该过程也可以理解为:根据v、第二子集中的静止目标的位置数据、ω 1和T′确定第一平动速度矢量估计值T 1;根据T 1、v和第一子集中的静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、第一子集中的静止目标的位置数据、ω 2和T′确定第一装置的第二平动速度矢量估计值T 2;然后将T 2作为T,将ω 2作为ω。
该实施例中,T 1可以满足关系式T 1=|T 1|·T′=v-ω 1×r i或者T 1=|T 1|·T′=v+r i×ω 1,其中,r i是第二数据集中的静止目标的位置数据。
T 2可以满足关系式T 2=|T 2|·T′=v-ω 2×r j或者T 2=|T 2|·T′=v+r j×ω 2,其,r j是第一数据集中的静止目标的位置数据。
确定ω 2的过程与前述过程相同,ω 2可以满足关系式[r j] ×ω 2=T 1-v,只是其中使用的r j是第一子集中的静止目标的位置数据。
该种可能的实施例中,确定T 2和ω 2时使用的是同一个子集中静止目标的位置数据。
当然,该种使用不同子集迭代的场景中,在确定ω 2时也可以使用ω 1,而不需要确定出T 1。具体使用ω 1确定ω 2的过程可以参阅前述使用关系式[r i] ×ω k=[r j] ×ω k-1确定ω n的过程进行理解,此处不再重复赘述。
当然,不限于第二次就输出T和ω。不失去一般性,也可以通过n次迭代得到T和ω,下面用n表示迭代的次数,该示例中n≥2,且n为整数。
可选的,作为另一种实施例,当数据集包括n个子集时,ω为第n转动速度矢量估计值ω nn通过第一装置的第k-1平动速度矢量估计值T k-1、v和第k子集中的静止目标的位置数据确定的第一装置的第k转动速度矢量估计值ω k得到,其中,T k-1是根据v、第k-1子集中静止目标的位置数据、ω k-1和T′确定的,其中2≤k≤n;T为第n平动速度矢量估计值T n,T n通过v、第k子集中的静止目标的位置数据、ω k和T′确定的第一装置的第k平动速度矢量估计值T k得到。
该过程也可以理解为:根据v、第k-1子集中的静止目标的位置数据、ω k-1和T′确定T k-1,其中2≤k≤n;根据T k-1、v和第k子集中的静止目标的位置数据确定第一装置的第k转动速度矢量估计值ω k;根据v、第k子集中的静止目标的位置数据、ω k和T′确定第一装置的第k平动速度矢量估计值T k;将T n作为T,将ω n作为ω。
具体地,与前述类似,T k可以满足关系式T k=|T k|·T′=v-ω k×r或者T k=|T k|·T′=v+r×ω k,利用数据集中部分或者全部静止目标的测量数据最终得到T n
同样,与前述类似,ω k可以满足关系式[r] ×ω k=T k-1-v,利用数据集中部分或者全部静止目标的测量数据最终得到ω n
与前述实施例的不同,本实施例在不同的迭代步骤,确定T k和ω k使用不同的数据子集中静止目标的位置数据,通常子集中静止目标的位置数据相互独立,使用不同的子集多次迭代可以充分利用不同静止目标的位置数据包含的信息,从而使得最终得到的T和ω的估计精度大大提高。
上述所描述的使用不同子集迭代不同次的T n和ω n,还可以在迭代T n和ω n时也使用不同的子集。该种情况下,数据集包括至少三个子集,至少三个子集中各子集包含的静止目标的位置数据可以没有交集,也可以有部分重叠,但不完全重叠。
以迭代两次的情况为例,数据集中包括第一子集、第二子集和第三子集,可以使用第二子集中的静止目标的位置数据确定T 1,使用第一子集中的静止目标的位置数据确定ω 2,使用第三子集中的静止目标的位置数据确定T 2
可选的,作为一种实施例,当数据集包括至少三个子集时,ω为根据第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、第二子集中的静止目标的位置数据、ω 1和T′确定的;T为根据v、第三子集中的静止目标的位置数据、ω 2和T′确定的第一装置的第二平动速度矢量估计值T 2
该过程也可以理解为:根据v、第二子集中的静止目标的位置数据、ω 1和T′确定第一平动速度矢量估计值T 1;根据T 1、v和第一子集中的静止目标的位置数据确定第一装置的第二转动速度矢量估计值ω 2;根据v、第三子集中的静止目标的位置数据、ω 2和T′确定的第一装置的第二平动速度矢量估计值T 2;然后将T 2作为T,将ω 2作为ω。
该实施例中，T 1可以满足关系式T 1=|T 1|·T′=v-ω 1×r i或者T 1=|T 1|·T′=v+r i×ω 1，其中，r i是第二子集中的静止目标的位置数据。
确定ω 2的过程与前述过程相同，ω 2可以满足关系式v-ω 2×r j=T 1或者v+r j×ω 2=T 1，只是其中使用的r j是第一子集中的静止目标的位置数据。
确定T 2的过程与前述过程相同，T 2可以满足关系式T 2=|T 2|·T′=v-ω 2×r s或者T 2=|T 2|·T′=v+r s×ω 2，其中，r s是第三子集中的静止目标的位置数据。
该种可能的实施例中,确定T 1、T 2和ω 2时使用的都不是同一个子集中静止目标的位置数据。
当然，该种使用不同子集迭代的场景中，在确定ω 2时也可以使用ω 1，而不需要确定出T 1。具体使用ω 1确定ω 2的过程可以参阅前述使用关系式v-ω n×r=T n-1确定ω n的过程进行理解，此处不再重复赘述。
当然，不限于两次迭代得到T和ω。不失一般性，也可以通过n次迭代得到T和ω，下面用n表示迭代的次数，该示例中n≥2，且n为整数。
可选的，作为另一种实施例，当数据集包括2n个子集时，ω为第n转动速度矢量估计值ω n，ω n通过第一装置的第k-1平动速度矢量估计值T k-1、v和第2k-1子集中的静止目标的位置数据确定的第一装置的第k转动速度矢量估计值ω k得到，其中，T k-1是根据v、第2k-2子集中静止目标的位置数据、ω k-1和T′确定的，其中2≤k≤n；T为第n平动速度矢量估计值T n，T n通过v、第2k子集中的静止目标的位置数据、ω k和T′确定的第一装置的第k平动速度矢量估计值T k得到。
该过程也可以理解为:根据v、第2k-2子集中的静止目标的位置数据、ω k-1和T′确定T k-1,其中2≤k≤n;
根据T k-1、v和第2k-1子集中的静止目标的位置数据确定第一装置的第k转动速度矢量估计值ω k，其中2≤k≤n；根据v、第2k子集中的静止目标的位置数据、ω k和T′确定第一装置的第k平动速度矢量估计值T k；将T n作为T，将ω n作为ω。
具体地,与前述类似,T k可以满足关系式T k=|T k|·T′=v-ω k×r或者T k=|T k|·T′=v+r×ω k,利用数据集中部分或者全部静止目标的测量数据最终得到T n
同样，与前述类似，ω k可以满足关系式v-ω k×r=T k-1或者v+r×ω k=T k-1，利用数据集中部分或者全部静止目标的测量数据最终得到ω n
与前述实施例不同，本实施例在确定T k和ω k时使用的都是不同数据子集中静止目标的位置数据。通常各子集中静止目标的位置数据相互独立，使用不同的子集多次迭代可以充分利用不同静止目标的位置数据所包含的信息，从而使最终得到的T和ω的估计精度大大提高。
上述图2对应实施例中的步骤202和图4对应实施例中的步骤302可以利用例如基于随机采样一致性（RANdom SAmple Consensus，RANSAC）的方法实现；也可以通过以下两种方案实现，这两种方案都可以获取速度矢量估计值。下面分别对这两种方案进行介绍。
1、第一种方案：通过方位角θ、俯仰角φ和径向速度v′确定第二传感器的运动速度矢量估计值v。
该第一种方案中，如图5所示，建立以第二传感器为原点的三维直角坐标系，第二传感器在图5中用点O表示，静止目标在图5中用点P表示。
该第一种方案包括如下步骤:
S11:获取静止目标相对第二传感器的方位角θ、俯仰角φ和第二传感器相对静止目标的径向速度v′。
S12:根据静止目标相对第二传感器的方位角θ、俯仰角φ和径向速度v′，确定第二传感器的运动速度矢量估计值v。
其中，步骤S12包括：根据θ和φ确定静止目标相对第二传感器的方向余弦矢量Λ；根据方向余弦矢量Λ和径向速度v′，确定第二传感器的运动速度矢量估计值v。
方向余弦矢量Λ包括三个维度的分量，可以表示为Λ=[Λ x Λ y Λ z]，其中，Λ x=cosφcosθ，Λ y=cosφsinθ，Λ z=sinφ。
静止目标对应的测量数据包括角度和径向速度，通过关系式v′=Λv，可以利用最小二乘法或者最小均方误差估计准则确定运动速度矢量估计值v。
该第一种方案，通过静止目标相对第二传感器的方位角θ、俯仰角φ和径向速度v′，可以确定出三维的第二传感器的运动速度矢量估计值v。
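上述S11~S12的过程可以用如下Python代码示意。其中方向余弦的具体定义（Λ x=cosφcosθ等）按常见约定给出，仿真数据亦为本示例的假设：

```python
import numpy as np

def unit_vec(theta, phi):
    # 方向余弦矢量 Λ, 按常见约定: θ为方位角, φ为俯仰角(本示例假设)
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])

def estimate_v(thetas, phis, v_rad):
    # 叠加每个静止目标的 v' = Λ·v, 最小二乘解出三维速度 v
    A = np.array([unit_vec(t, p) for t, p in zip(thetas, phis)])
    return np.linalg.lstsq(A, np.asarray(v_rad), rcond=None)[0]

# 仿真: 真实速度 v_true 与 10 个静止目标的角度、径向速度
v_true = np.array([3.0, -1.0, 0.5])
rng = np.random.default_rng(2)
thetas = rng.uniform(-np.pi, np.pi, 10)
phis = rng.uniform(-np.pi / 3, np.pi / 3, 10)
v_rad = np.array([unit_vec(t, p) @ v_true for t, p in zip(thetas, phis)])
v_est = estimate_v(thetas, phis, v_rad)   # 应接近 v_true
```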
2、第二种方案:通过三维位置矢量(x,y,z)、径向距离s和径向速度v′确定第二传感器的运动速度矢量估计值v。
该第二种方案中，如图6所示，建立以第二传感器为原点的三维直角坐标系，第二传感器在图6中用点O表示，静止目标在图6中用点P表示。
该第二种方案包括如下步骤:
S21、获取静止目标相对第二传感器的三维位置矢量和径向速度v′。
S22、根据静止目标相对第二传感器的三维位置矢量和径向速度v′,确定第二传感器的运动速度矢量估计值v。
其中,步骤S22包括:根据静止目标相对第二传感器的三维位置矢量(x,y,z)确定静止目标相对第二传感器的方向余弦矢量Λ;根据方向余弦矢量Λ和径向速度v′,确定第二传感器的运动速度矢量估计值v。
方向余弦矢量Λ包括三个维度的分量，可以表示为Λ=[Λ x Λ y Λ z]，其中，Λ x=x/s，Λ y=y/s，Λ z=z/s，其中，s=√(x²+y²+z²)。
利用位置分量x、y、z,可以确定Λ=[Λ x Λ y Λ z],通过关系式v′=Λv,基于最小二乘法或者最小均方误差准则可以确定速度矢量v。
该第二种方案,通过静止目标相对第二传感器的三维位置矢量和径向速度v′,可以确定出三维的第二传感器的运动速度矢量估计值。
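该第二种方案由三维位置矢量(x,y,z)得到方向余弦矢量、再由v′=Λv最小二乘解出v的过程，可以用如下示意代码理解（数据为本示例假设）：

```python
import numpy as np

def estimate_v_from_pos(positions, v_rad):
    # Λ = (x/s, y/s, z/s), s=√(x²+y²+z²); 叠加 v'=Λ·v 后最小二乘求 v
    P = np.asarray(positions, dtype=float)
    s = np.linalg.norm(P, axis=1, keepdims=True)   # 各目标的径向距离
    A = P / s
    return np.linalg.lstsq(A, np.asarray(v_rad), rcond=None)[0]

v_true = np.array([2.0, 0.0, -0.5])
rng = np.random.default_rng(3)
P = rng.uniform(-20.0, 20.0, (12, 3))              # 12 个静止目标的位置 (x,y,z)
v_rad = (P / np.linalg.norm(P, axis=1, keepdims=True)) @ v_true
v_est = estimate_v_from_pos(P, v_rad)              # 应接近 v_true
```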
需要说明的是,本申请实施例所描述的获取速度矢量估计值的方法的两种方案,可以是包含于上述自运动估计的方法,也可以是独立于上述自运动估计的方法。
另外，本申请实施例提供的自运动估计的方法还可以确定静止目标，以及生成数据集。该过程可以包括：根据第二传感器相对目标物的运动速度矢量估计值、该目标物相对所述第二传感器的方向余弦矢量Λ、该目标物相对第二传感器的径向速度v′，以及速度门限值V Thresh，确定目标物为静止目标。确定目标物为静止目标后，还可以获取该静止目标的位置数据，将该静止目标的位置数据划分到数据集中。
其中,第二传感器相对目标物的运动速度矢量估计值的分量包括[v x v y v z]。
该方案可以参照如下关系式进行理解：|v x·Λ x+v y·Λ y+v z·Λ z+v′|≤V Thresh
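上述利用速度门限值V Thresh筛选静止目标的判据，可以用如下示意代码理解。其中假设v′的符号约定使得静止目标满足Λ·v+v′≈0，门限与数据均为本示例的假设：

```python
import numpy as np

def is_stationary(v_est, Lam, v_rad, v_thresh):
    # 判据: |v_x·Λ_x + v_y·Λ_y + v_z·Λ_z + v'| ≤ V_Thresh
    return abs(float(np.dot(v_est, Lam)) + v_rad) <= v_thresh

v_est = np.array([3.0, 0.0, 0.0])     # 第二传感器相对目标物的运动速度矢量估计值
Lam = np.array([1.0, 0.0, 0.0])       # 假设目标位于传感器正前方
V_thresh = 0.5                        # 速度门限值(示例假设)
static_ok = is_stationary(v_est, Lam, -3.0, V_thresh)   # 静止目标: v' ≈ -Λ·v
moving = not is_stationary(v_est, Lam, -1.0, V_thresh)  # 径向速度偏离 -Λ·v 的运动目标
```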
以上描述了本申请实施例提供的自运动估计的方法以及场景,下面结合附图介绍本申请实施例提供的自运动估计的装置。
如图7所示,本申请实施例提供的自运动估计的装置40的一实施例包括:
第一获取单元401,用于获取第一传感器的第一转动速度矢量估计值ω 1
第二获取单元402,用于获取第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集,数据集包括静止目标的位置数据。
处理单元403,用于根据第二获取单元402获取的v、静止目标的位置数据和第一获取单元401获取的ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
本申请实施例提供的方案，通过第一传感器的第一转动速度矢量估计值ω 1、第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集可以计算出第一装置的自运动的平动速度矢量估计值T和转动速度矢量估计值ω。相比于通过惯性测量单元（inertial measurement unit，IMU）测量出加速度，再进行加速度累加的方式，可以有效地提高第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω的准确度。
一种可能的实施例中,第一获取单元401,还用于获取第一传感器的平动速度矢量尺度伸缩的估计值T′。
处理单元403,用于根据v、静止目标的位置数据、T′和ω 1确定第一装置自运动的平动速度矢量的估计值T和转动速度矢量估计值ω。
一种可能的实施例中,ω 1、v、T′和静止目标的位置数据是相对于公共坐标系的数据。
一种可能的实施例中,T为根据v、静止目标的位置数据和ω 1确定的第一装置的第一平动速度矢量估计值T 1,ω为ω 1
一种可能的实施例中,ω为根据第一装置的第一平动速度矢量估计值T 1、v和静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、静止目标的位置数据和ω 1确定的;T为根据v、静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
一种可能的实施例中,数据集包括至少两个子集,ω为根据第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、第二子集中的静止目标的位置数据和ω 1确定的;T为根据v、第一子集中的静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
一种可能的实施例中,数据集包括至少三个子集,ω为根据第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的第一装置的第二转动速度矢量估计值ω 2,T 1是根据v、第二子集中的静止目标的位置数据和ω 1确定的;T为根据v、第三子集中的静止目标的位置数据和ω 2确定的第一装置的第二平动速度矢量估计值T 2
一种可能的实施例中，T、v和ω之间满足关系式v-ω×r=T，其中，r为数据集中静止目标的位置数据。
一种可能的实施例中,T、T′、v和ω之间满足关系式v-ω×r=|T|·T′=T,其中,r为数据集中静止目标的位置数据。
一种可能的实施例中，T 1、v和ω 2之间满足关系式v-ω 2×r=T 1，其中，r为数据集中静止目标的位置数据。
一种可能的实施例中，T 1、v和ω 2之间满足关系式v-ω 2×r=T 1，其中，r为第一子集中静止目标的位置数据。
一种可能的实施例中，第二获取单元402，用于根据静止目标相对第二传感器的方位角θ、俯仰角φ和径向速度v′，确定第二传感器的运动速度矢量估计值v；或者，根据静止目标相对第二传感器的三维位置矢量、径向距离和径向速度v′，确定第二传感器的运动速度矢量估计值v。
一种可能的实施例中,第二获取单元402,用于根据第二传感器相对地面的高度H和第二传感器到静止目标的径向距离s,以及径向速度v′,确定第二传感器的运动速度矢量估计值v。
需要说明的是,上述所描述的自运动估计的装置由于与本申请方法实施例基于同一构思,其带来的技术效果与本申请方法实施例相同,具体内容可参见本申请前述所示的方法实施例的叙述,此处不再赘述。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质存储有程序,该程序执行包括上述方法实施例中记载的部分或全部步骤。
如图8所示，为本申请实施例的又一种自运动估计的装置的结构示意图，该自运动估计的装置可以是芯片，也可以是其他可以实现本申请功能的终端设备，以及车辆、轮船、飞机、卫星、机器人等设备。该自运动估计的装置可以包括：至少一个处理器（以包括两个处理器为例，可以包括处理器501和处理器502）、通信线路503、收发器504以及存储器505。
处理器501和处理器502可以是一个通用中央处理器（central processing unit，CPU）、微处理器、专用集成电路（application-specific integrated circuit，ASIC），或一个或多个用于控制本申请方案程序执行的集成电路。这些处理器中的每一个可以是一个单核（single-CPU）处理器，也可以是一个多核（multi-CPU）处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据（例如计算机执行指令）的处理核。
通信线路503可包括一通路,在上述组件之间传送信息。
收发器504,使用任何收发器一类的装置,用于与其他设备或通信网络通信,如以太网,无线接入网(radio access network,RAN),无线局域网(wireless local area networks,WLAN)等。该收发器504也可以是收发电路或者收发信机,可以包括接收器和发送器。
存储器505可以是只读存储器（read-only memory，ROM）或可存储静态信息和指令的其他类型的静态存储设备，随机存取存储器（random access memory，RAM）或者可存储信息和指令的其他类型的动态存储设备，也可以是电可擦可编程只读存储器（electrically erasable programmable read-only memory，EEPROM）、只读光盘（compact disc read-only memory，CD-ROM）或其他光碟存储（包括压缩光碟、激光碟、数字通用光碟、蓝光光碟等）、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质，但不限于此。存储器可以独立存在，通过通信线路503与处理器501和处理器502相连接。存储器505也可以和处理器501和处理器502中的至少一个集成在一起。
该自运动估计的装置也可以包括通信接口506。图8中所描述的各器件可以是通过通信线路503连接,也可以是通过其他连接方式连接,对此,本申请实施例中不做限定。
其中，存储器505用于存储执行本申请方案的计算机执行指令，并由处理器501和处理器502中的至少一个来控制执行。处理器501和处理器502用于执行存储器505中存储的计算机执行指令，从而实现本申请上述方法实施例提供的自运动估计的方法。在一些实施例中，上述存储器505用于存储计算机可执行程序代码，程序代码包括指令；当处理器501和处理器502中的至少一个执行指令时，自运动估计的装置中的处理器501和处理器502中的至少一个可以执行图7中处理单元403执行的动作，自运动估计的装置中的收发器504或通信接口506可以执行图7中第一获取单元401和第二获取单元402执行的动作，其实现原理和技术效果类似，在此不再赘述。
在具体实现中,作为一种实施例,处理器501和处理器502可以包括一个或多个CPU,例如图8中的CPU0和CPU1。
另外,如图9所示,本申请实施例还提供一种传感器系统60,该传感器系统60包括第一传感器601、第二传感器602和用于执行前述方法实施例的自运动估计的装置603。
本申请还提供了一种芯片系统，该芯片系统包括处理器，用于支持上述自运动估计的装置实现其所涉及的功能，例如接收或处理上述方法实施例中所涉及的数据和/或信息。在一种可能的设计中，芯片系统还包括存储器，存储器用于保存计算机设备必要的程序指令和数据。该芯片系统，可以由芯片构成，也可以包含芯片和其他分立器件。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时，全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一计算机可读存储介质传输，例如，计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(Digital Subscriber Line，DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质(例如软盘、硬盘、磁带)、光介质(例如DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (26)

  1. 一种自运动估计的方法,其特征在于,包括:
    获取第一传感器的第一转动速度矢量估计值ω 1
    获取第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集,所述数据集包括所述静止目标的位置数据;
    根据所述v、所述静止目标的位置数据和ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取第一传感器的平动速度矢量尺度伸缩的估计值T′;
    所述根据所述v、所述静止目标的位置数据和ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω,包括:
    根据v、所述静止目标的位置数据、T′和ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
  3. 根据权利要求2所述的方法,其特征在于,所述ω 1、v、T′和所述静止目标的位置数据是相对于公共坐标系的数据。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,
    所述T为根据所述v、所述静止目标的位置数据和ω 1确定的所述第一装置的第一平动速度矢量估计值T 1,所述ω为所述ω 1
  5. 根据权利要求1-3任一项所述的方法,其特征在于,
    所述ω为根据所述第一装置的第一平动速度矢量估计值T 1、v和所述静止目标的位置数据确定的所述第一装置的第二转动速度矢量估计值ω 2,其中所述T 1是根据所述v、所述静止目标的位置数据和ω 1确定的;
    所述T为根据所述v、所述静止目标的位置数据和所述ω 2确定的所述第一装置的第二平动速度矢量估计值T 2
  6. 根据权利要求1-3任一项所述的方法,其特征在于,所述数据集包括至少两个子集,所述ω为根据所述第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的所述第一装置的第二转动速度矢量估计值ω 2,所述T 1是根据所述v、第二子集中的静止目标的位置数据和ω 1确定的;
    所述T为根据所述v、所述第一子集中的静止目标的位置数据和所述ω 2确定的所述第一装置的第二平动速度矢量估计值T 2
  7. 根据权利要求1-3任一项所述的方法,其特征在于,所述数据集包括至少三个子集,所述ω为根据所述第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的所述第一装置的第二转动速度矢量估计值ω 2,所述T 1是根据所述v、第二子集中的静止目标的位置数据和ω 1确定的;
    所述T为根据所述v、第三子集中的静止目标的位置数据和所述ω 2确定的所述第一装置的第二平动速度矢量估计值T 2
  8. 根据权利要求4-7任一项所述的方法，其特征在于，所述T、v和ω之间满足关系式v-ω×r=T，其中，所述r为所述数据集中静止目标的位置数据。
  9. 根据权利要求2所述的方法,其特征在于,所述T、T′、v和ω之间满足关系式v-ω×r=|T|·T′=T,其中,所述r为所述数据集中静止目标的位置数据。
  10. 根据权利要求5-7任一项所述的方法，其特征在于，所述T 1、v和ω 2之间满足关系式v-ω 2×r=T 1，其中，所述r为所述数据集中静止目标的位置数据。
  11. 根据权利要求6或7所述的方法，其特征在于，所述T 1、v和ω 2之间满足关系式v-ω 2×r=T 1，其中，所述r为所述第一子集中静止目标的位置数据。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述获取第二传感器的运动速度矢量估计值v,包括:
    根据静止目标相对所述第二传感器的方位角θ、俯仰角φ和径向速度v′，确定所述第二传感器的运动速度矢量估计值v；或者，根据静止目标相对所述第二传感器的三维位置矢量和径向速度v′，确定所述第二传感器的运动速度矢量估计值v。
  13. 一种自运动估计的装置,其特征在于,包括:
    第一获取单元,用于获取第一传感器的第一转动速度矢量估计值ω 1
    第二获取单元,用于获取第二传感器的运动速度矢量估计值v和相对于参考系的静止目标的数据集,所述数据集包括所述静止目标的位置数据;
    处理单元,用于根据所述第二获取单元获取的v、所述静止目标的位置数据和所述第一获取单元获取的ω 1确定第一装置自运动的平动速度矢量估计值T和转动速度矢量估计值ω。
  14. 根据权利要求13所述的装置,其特征在于,
    所述第一获取单元,还用于获取第一传感器的平动速度矢量尺度伸缩的估计值T′;
    所述处理单元,用于根据v、所述静止目标的位置数据、T′和ω 1确定第一装置自运动的平动速度矢量的估计值T和转动速度矢量估计值ω。
  15. 根据权利要求14所述的装置,其特征在于,所述ω 1、v、T′和所述静止目标的位置数据是相对于公共坐标系的数据。
  16. 根据权利要求13-15任一项所述的装置,其特征在于,
    所述T为根据所述v、所述静止目标的位置数据和ω 1确定的所述第一装置的第一平动速度矢量估计值T 1,所述ω为所述ω 1
  17. 根据权利要求13-15任一项所述的装置,其特征在于,
    所述ω为根据所述第一装置的第一平动速度矢量估计值T 1、v和所述静止目标的位置数据确定的所述第一装置的第二转动速度矢量估计值ω 2,所述T 1是根据所述v、所述静止目标的位置数据和ω 1确定的;
    所述T为根据所述v、所述静止目标的位置数据和所述ω 2确定的所述第一装置的第二平动速度矢量估计值T 2
  18. 根据权利要求13-15任一项所述的装置,其特征在于,所述数据集包括至少两个子集,
    所述ω为根据所述第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的所述第一装置的第二转动速度矢量估计值ω 2，所述T 1是根据所述v、第二子集中的静止目标的位置数据和ω 1确定的；
    所述T为根据所述v、所述第一子集中的静止目标的位置数据和所述ω 2确定的所述第一装置的第二平动速度矢量估计值T 2
  19. 根据权利要求13-15任一项所述的装置,其特征在于,所述数据集包括至少三个子集,
    所述ω为根据所述第一装置的第一平动速度矢量估计值T 1、v和第一子集中的静止目标的位置数据确定的所述第一装置的第二转动速度矢量估计值ω 2,所述T 1是根据所述v、第二子集中的静止目标的位置数据和ω 1确定的;
    所述T为根据所述v、第三子集中的静止目标的位置数据和所述ω 2确定的所述第一装置的第二平动速度矢量估计值T 2
  20. 根据权利要求13-19任一项所述的装置,其特征在于,所述T、v和ω之间满足关系式v-ω×r=T,其中,所述r为所述数据集中静止目标的位置数据。
  21. 根据权利要求14所述的装置,其特征在于,所述T、T′、v和ω之间满足关系式v-ω×r=|T|·T′=T,其中,所述r为所述数据集中静止目标的位置数据。
  22. 根据权利要求17-19任一项所述的装置，其特征在于，所述T 1、v和ω 2之间满足关系式v-ω 2×r=T 1，其中，所述r为所述数据集中静止目标的位置数据。
  23. 根据权利要求18或19所述的装置，其特征在于，所述T 1、v和ω 2之间满足关系式v-ω 2×r=T 1，其中，所述r为所述第一子集中静止目标的位置数据。
  24. 根据权利要求13-23任一项所述的装置,其特征在于,
    所述第二获取单元，用于根据静止目标相对所述第二传感器的方位角θ、俯仰角φ和径向速度v′，确定所述第二传感器的运动速度矢量估计值v；或者，根据静止目标相对所述第二传感器的三维位置矢量和径向速度v′，确定所述第二传感器的运动速度矢量估计值v。
  25. 一种自运动估计的装置,其特征在于,包括:至少一个处理器和至少一个存储器,所述至少一个存储器用于存储程序或者数据;
    所述至少一个处理器调用所述程序或者数据,以使得所述装置实现上述权利要求1-12中任一项所述的方法。
  26. 一种计算机可读存储介质,其特征在于,所述计算机存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被至少一个处理器执行时使所述处理器执行如权利要求1至12中任一项所述的方法。
PCT/CN2021/079509 2020-03-30 2021-03-08 一种自运动估计的方法及装置 WO2021196983A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010236957.1A CN113470342B (zh) 2020-03-30 2020-03-30 一种自运动估计的方法及装置
CN202010236957.1 2020-03-30

Publications (1)

Publication Number Publication Date
WO2021196983A1 true WO2021196983A1 (zh) 2021-10-07

Family

ID=77864891

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079509 WO2021196983A1 (zh) 2020-03-30 2021-03-08 一种自运动估计的方法及装置

Country Status (2)

Country Link
CN (1) CN113470342B (zh)
WO (1) WO2021196983A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030089542A1 (en) * 2001-11-15 2003-05-15 Honda Giken Kogyo Kabushiki Kaisha Method of estimating quantities that represent state of vehicle
CN101320089A (zh) * 2007-06-05 2008-12-10 通用汽车环球科技运作公司 用于车辆动力估计的雷达、激光雷达和摄像机增强的方法
CN106053879A (zh) * 2015-04-07 2016-10-26 通用汽车环球科技运作有限责任公司 通过数据融合的失效操作的车辆速度估计
CN108573500A (zh) * 2018-04-24 2018-09-25 西安交通大学 一种直接估计车载相机运动参数的方法
CN109668553A (zh) * 2017-10-12 2019-04-23 韩华迪纷斯株式会社 基于惯性的导航设备和基于相对预积分的惯性导航方法
US20190180451A1 (en) * 2016-08-19 2019-06-13 Dominik Kellner Enhanced object detection and motion estimation for a vehicle environment detection system
CN110095116A (zh) * 2019-04-29 2019-08-06 桂林电子科技大学 一种基于lift的视觉定位和惯性导航组合的定位方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4969036A (en) * 1989-03-31 1990-11-06 Bir Bhanu System for computing the self-motion of moving images devices
CN101419711B (zh) * 2008-12-15 2012-05-30 东软集团股份有限公司 一种估计车辆自运动参数的方法和装置
US8903127B2 (en) * 2011-09-16 2014-12-02 Harman International (China) Holdings Co., Ltd. Egomotion estimation system and method
EP2730888A1 (en) * 2012-11-07 2014-05-14 Ecole Polytechnique Federale de Lausanne EPFL-SRI Method to determine a direction and amplitude of a current velocity estimate of a moving device
KR101618501B1 (ko) * 2015-02-04 2016-05-09 한국기술교육대학교 산학협력단 자차량의 에고 모션 추정방법
KR20200010640A (ko) * 2018-06-27 2020-01-31 삼성전자주식회사 모션 인식 모델을 이용한 자체 운동 추정 장치 및 방법, 모션 인식 모델 트레이닝 장치 및 방법


Also Published As

Publication number Publication date
CN113470342A (zh) 2021-10-01
CN113470342B (zh) 2023-04-07

Similar Documents

Publication Publication Date Title
US11036237B2 (en) Radar-based system and method for real-time simultaneous localization and mapping
US10788830B2 (en) Systems and methods for determining a vehicle position
US11373418B2 (en) Information processing apparatus, information processing method, program, and mobile object
US11521329B2 (en) Updated point cloud registration pipeline based on ADMM algorithm for autonomous vehicles
EP3447729B1 (en) 2d vehicle localizing using geoarcs
US11754415B2 (en) Sensor localization from external source data
EP3690849A1 (en) Method and device for detecting emergency vehicles in real time and planning driving routes to cope with situations to be expected to be occurred by the emergency vehicles
EP3291178B1 (en) 3d vehicle localizing using geoarcs
US11798427B2 (en) Auto-labeling sensor data for machine learning
Jiménez et al. Improving the lane reference detection for autonomous road vehicle control
WO2022036332A1 (en) Method and system for radar-based odometry
US11204248B2 (en) Navigating using electromagnetic signals
Ivancsits et al. Visual navigation system for small unmanned aerial vehicles
KR101821992B1 (ko) 무인비행체를 이용한 목표물의 3차원 위치 산출 방법 및 장치
WO2021196983A1 (zh) 一种自运动估计的方法及装置
WO2022037370A1 (zh) 一种运动估计方法及装置
US20220091252A1 (en) Motion state determining method and apparatus
US20220089166A1 (en) Motion state estimation method and apparatus
Zahran et al. Augmented radar odometry by nested optimal filter aided navigation for UAVS in GNSS denied environment
CN115508836A (zh) 一种运动状态确定方法及装置
US20240053487A1 (en) Systems and methods for transforming autonomous aerial vehicle sensor data between platforms
Tyagi et al. Vehicle Localization and Navigation
US20240118410A1 (en) Curvelet-based low level fusion of camera and radar sensor information
WO2022160101A1 (zh) 朝向估计方法、装置、可移动平台及可读存储介质
US20230161026A1 (en) Circuitry and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21779875

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21779875

Country of ref document: EP

Kind code of ref document: A1