WO2024037295A1 - Positioning - Google Patents
- Publication number
- WO2024037295A1 (PCT/CN2023/109080)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pose
- robot
- positioning
- fusion
- global positioning
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/10—Navigation by using measurements of speed or acceleration
- G01C21/12—Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Inertial navigation combined with non-inertial navigation instruments
Definitions
- the present application relates to the field of positioning technology, and in particular to a positioning method.
- Positioning and navigation are core issues in the field of robotics research. Among them, positioning is mainly to determine the real-time position of the robot during its movement.
- robot positioning methods include local positioning methods based on robot sensors and global positioning methods based on Global Positioning System (GPS).
- the embodiment of this application proposes a robot positioning solution that integrates local positioning and global positioning.
- the solution includes the following aspects.
- embodiments of the present application provide a positioning method, including:
- estimating the local positioning pose of the robot based on a Visual-Inertial Odometry (VIO) algorithm, and estimating the global positioning pose of the robot based on a map feature point matching algorithm;
- determining whether the global positioning pose is consistent with the local positioning pose estimate; if the estimates are consistent, the pose fusion update of the robot is performed based on the fusion state vector, where the fusion state vector includes the VIO local positioning variables and the global positioning variables, the VIO local positioning variables include the speed and sensor bias of the robot, and the speed and the sensor bias maintain the Schmidt state;
- if the estimates are inconsistent, the observation error of the global positioning pose is determined, coordinate system transformation is performed on the fusion state vector according to the observation error, and the pose update of the robot is performed based on the fusion state vector after the coordinate system transformation.
- the conditions for the global positioning pose to be consistent with the local positioning pose estimate include one or a combination of the following:
- the error value between the global positioning pose and the local positioning pose is less than a first threshold
- the correlation coefficient between the multi-frame images captured by the robot's camera is greater than the second threshold, and the multi-frame images are used to determine the local positioning pose.
- the error value between the global positioning pose and the local positioning pose includes: a position error value and an attitude angle error value;
- the inlier indicator parameters include: the number of inliers, the inlier rate, and the average reprojection error of the inliers.
- the VIO local positioning variables also include: the rotation matrix and position of the robot;
- the global positioning variables include: the position and attitude angle of the robot on the global map.
- performing pose fusion update of the robot based on the fusion state vector includes:
- the method also includes:
- the Kalman gain is calculated based on the Kalman gain influence factor.
- the Kalman gain is used for the pose fusion update of the robot.
- the Kalman gain influence factor is used to constrain the roll and pitch angles contained in the fusion state vector to remain unchanged before and after the update during the pose fusion update process.
- determining the observation error of the global positioning pose includes:
- the observation error of the global positioning pose is calculated according to the Kalman gain.
- the coordinate system transformation of the fusion state vector according to the observation error includes:
- Coordinate system transformation is performed on the fusion state vector and the covariance matrix corresponding to the fusion state vector according to the world coordinate system change matrix.
- a positioning device including:
- the VIO module is used to estimate the local positioning pose of the robot based on the VIO algorithm
- a global positioning module used to estimate the global positioning pose of the robot based on the map feature point matching algorithm
- a positioning fusion module used to determine whether the global positioning pose is consistent with the local positioning pose estimate; if the estimates are consistent, perform a pose fusion update of the robot based on a fusion state vector, wherein the fusion state vector It includes VIO local positioning variables and global positioning variables.
- the VIO local positioning variables include the speed of the robot and the sensor bias, and the speed and the sensor bias maintain the Schmidt state; if the estimates are inconsistent, the observation error of the global positioning pose is determined, coordinate system transformation is performed on the fusion state vector based on the observation error, and the pose update of the robot is performed based on the fusion state vector after the coordinate system transformation.
- embodiments of the present application provide an electronic device, including: at least one processor; and at least one memory communicatively connected to the processor, wherein the memory stores program instructions executable by the processor;
- the program instructions are called by the processor to enable the electronic device to execute the method described in the above first aspect or any implementation of the first aspect.
- embodiments of the present application provide a non-transitory computer-readable storage medium.
- the non-transitory computer-readable storage medium includes a stored program, wherein, when the program runs, the device where the storage medium is located is controlled to perform the method described in the above first aspect or any implementation of the first aspect.
- a computer program is provided, in which at least one computer instruction is stored; the at least one computer instruction is loaded and executed by a processor, so that the device where the computer program is located executes the method described in the above first aspect or any implementation of the first aspect.
- a computer program product is provided, in which at least one computer instruction is stored; the at least one computer instruction is loaded and executed by a processor, so that the device where the computer program product is located executes the method described in the above first aspect or any implementation of the first aspect.
- the local positioning pose of the robot is obtained based on the VIO algorithm
- the global positioning pose of the robot is obtained based on the map feature point matching algorithm.
- different fusion strategies are proposed based on the consistency of local positioning pose and global positioning pose.
- the speed and sensor bias are set as Schmidt variables to ensure that the speed and sensor bias do not jump before and after the global pose update.
- the coordinate system transformation of the fusion state vector is performed based on the observation error, thereby avoiding deterioration of the robot pose during the update process. Therefore, through the embodiments of the present application, the accuracy and stability of the robot positioning results can be improved.
- Figure 1 is a schematic framework diagram of a positioning method provided by an embodiment of the present application.
- Figure 2 is a flow chart of a positioning method provided by an embodiment of the present application.
- Figure 3 is a schematic structural diagram of a positioning device provided by an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- robot positioning methods include local positioning methods based on robot sensors, global positioning methods based on GPS, and so on.
- local positioning methods based on robot sensors are prone to cumulative drift of errors, while global positioning methods based on GPS are more susceptible to signal interference. Therefore, how to make the robot positioning results more accurate and stable has become a problem that needs to be solved.
- FIG. 1 is a schematic framework diagram of a positioning method provided by an embodiment of the present application.
- the method shown in Figure 1 can be applied to robots, and can also be applied to terminal devices or servers that communicate with robots.
- the robot can be a device with walking function, such as a sweeping robot, a delivery robot, or an unmanned vehicle.
- the robot can also be an aircraft, etc.
- the robot is equipped with an inertial measurement unit (IMU) and a camera.
- the IMU includes an accelerometer and a gyroscope, which are used to collect the robot's acceleration and angular velocity respectively.
- the acceleration and angular velocity collected by the IMU are simply referred to as IMU data.
- a camera is used to capture image frames.
- the VIO algorithm can be executed on the image frame to obtain the local positioning pose of the robot.
- the map feature point matching algorithm can be executed to obtain the global positioning pose of the robot.
- a fusion update strategy of the global positioning pose and the local positioning pose can be executed to obtain the positioning output of the robot.
- the VIO algorithm is an odometry method that fuses vision and IMU data to realize robot positioning. The IMU returns IMU data at a higher frequency, while the camera returns image frames at a lower frequency. When executing the VIO algorithm, the IMU data can be used for higher-frequency pose estimation, and the image frames collected by the camera can be used for pose updates to obtain the local positioning pose of the robot.
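The high-frequency IMU propagation interleaved with lower-frequency camera updates can be sketched as follows. This is a minimal planar (x, y, yaw) toy with assumed rates and propagation equations, not the patent's actual filter.

```python
def propagate(pose, velocity, accel, gyro, dt):
    """One high-rate IMU propagation step (planar sketch: x, y, yaw).

    The state layout and integration scheme here are illustrative
    assumptions; the source does not specify propagation equations.
    """
    x, y, yaw = pose
    yaw_new = yaw + gyro * dt                      # integrate angular rate
    vx = velocity[0] + accel[0] * dt               # integrate acceleration
    vy = velocity[1] + accel[1] * dt
    return (x + vx * dt, y + vy * dt, yaw_new), (vx, vy)

# The IMU returns data at a higher rate than the camera, so many
# propagation steps happen between successive image-based updates.
pose, vel = (0.0, 0.0, 0.0), (1.0, 0.0)
for step in range(200):                            # assumed 200 Hz IMU
    pose, vel = propagate(pose, vel, (0.0, 0.0), 0.0, 0.005)
    if step % 10 == 9:                             # assumed ~20 Hz camera
        pass  # here the VIO update would correct `pose` using image features
print(round(pose[0], 3))  # → 1.0  (1 m/s for 1 s at constant velocity)
```

The point of the cadence is that the filter never waits for an image: pose estimates are available at IMU rate, and each camera frame only corrects the drift accumulated since the previous update.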
- the fusion positioning of the IMU and the camera may adopt a loose coupling scheme or a tight coupling scheme.
- the loose coupling scheme refers to directly fusing the pose estimated based on IMU data with the pose estimated based on the camera image frames (i.e., the image frames shown in Figure 1, also referred to as the image frames collected or returned by the camera); the fusion process has no impact on the IMU data and the image frame data.
- a Kalman filter may be used to perform fused positioning of the IMU and camera.
- Tight coupling refers to fusing the IMU data and the feature data of the image frame, and estimating the robot's pose based on the fused data.
- the fusion process affects the IMU data and the image frame data.
- tight coupling schemes include the Multi-State Constraint Kalman Filter (MSCKF), Robust Visual-Inertial Odometry (ROVIO), and the Extended Kalman Filter (EKF).
- the map feature point matching algorithm includes: extracting image feature points from the image frames captured by the camera, matching the image feature points with the feature points of the global map, and estimating the global positioning pose of the robot based on the matching results.
- the global positioning pose of the robot can include the six-degree-of-freedom pose of the robot on the global map; for example, it can include the three-axis position coordinates of the robot in the world coordinate system and the attitude angles around the three coordinate axes, namely the yaw angle (yaw), roll angle (roll), and pitch angle (pitch).
- the global positioning pose estimation is a low-frequency measurement, and the update frequency is lower than the local positioning pose. If the local positioning pose is forcibly updated based on the global positioning pose, the positioning result may jump. Therefore, in the embodiment of the present application, after calculating the global positioning pose of the robot, the consistency of the global positioning pose and the local positioning pose is further determined. Based on the consistency judgment results, different pose fusion update strategies can be adopted to ensure the accuracy and stability of the positioning results.
- FIG 2 is a flow chart of a positioning method provided by an embodiment of the present application. As shown in Figure 2, the processing steps of this method include:
- strategy one is used to fuse and update the global positioning pose and the local positioning pose.
- strategy one includes: performing pose fusion update of the robot based on the fusion state vector X.
- the global positioning pose and the local positioning pose can be fused and updated based on the Schmidt-Kalman filter algorithm.
- a fusion state vector X is determined.
- the fusion state vector X includes VIO local positioning variables and global positioning variables.
- VIO local positioning variables include: the robot's speed and sensor bias, and the speed and sensor bias maintain the Schmidt state.
- the global positioning pose estimation is a low-frequency measurement, and the update frequency is lower than the local positioning pose.
- the VIO local positioning variables may also include the rotation matrix and position of the robot.
- Global positioning variables include the robot's position and attitude angle on the global map.
- the local positioning posture of the robot includes posture angle parameters.
- Attitude angle parameters include yaw angle, roll angle and pitch angle.
- the roll angle and pitch angle are observable, and there is no cumulative error in the roll angle and pitch angle.
- the Kalman gain influence factor is determined in the embodiment of this application.
- the Kalman gain can be calculated according to the Kalman gain influence factor, and the Kalman gain is used for the pose fusion update of the robot.
- the Kalman gain influence factor is used to constrain the roll angle and pitch angle contained in the fusion state vector to remain unchanged during the robot's pose fusion update process.
- strategy two is used to fuse and update the global positioning pose and the local positioning pose.
- strategy two includes: determining the observation error of the global positioning pose.
- the coordinate system of the fusion state vector X is transformed according to the observation error.
- the pose update of the robot is performed based on the fused state vector after coordinate system transformation.
- the above conditions for determining that the global positioning pose is consistent with the local positioning pose estimate include one or a combination of the following:
- the Random Sample Consensus (RANSAC) algorithm can be used to solve the global positioning pose of the robot and to obtain the inlier indicator parameters.
- the inlier indicator parameters may include: the number of inliers, the inlier rate, and the average reprojection error of the inliers.
- the inlier indicator parameters that meet the set conditions include: the number of inliers is greater than a set number, the inlier rate is greater than a set inlier rate, and the average reprojection error of the inliers is less than a set error value. For example, the set number may be 10 and the set inlier rate may be 0.3.
- the embodiment of the present application does not limit the set number, the set inlier rate, or the set error value.
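The inlier-based acceptance test above can be sketched as follows. `min_count` and `min_rate` follow the example values in the text (10 and 0.3); `max_err` is an assumed pixel threshold that the source leaves unspecified.

```python
def inliers_acceptable(num_inliers, inlier_rate, mean_reproj_err,
                       min_count=10, min_rate=0.3, max_err=2.0):
    """Check the inlier indicator parameters from a RANSAC pose solve.

    min_count and min_rate follow the example values in the text;
    max_err (pixels) is an assumption, since the source does not fix it.
    """
    return (num_inliers > min_count
            and inlier_rate > min_rate
            and mean_reproj_err < max_err)

print(inliers_acceptable(25, 0.45, 1.2))   # → True
print(inliers_acceptable(8, 0.45, 1.2))    # → False (too few inliers)
```

All three conditions must hold before the global pose estimate is trusted enough to enter the consistency check; a failure on any one of them marks the RANSAC solve as unreliable.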
- both the global positioning pose and the local positioning pose include position parameters and attitude angle parameters.
- the error value between the global positioning pose and the local positioning pose may include a position error value and an attitude angle error value.
- the first sub-threshold and the second sub-threshold may be set respectively corresponding to the above-mentioned position error value and attitude angle error value.
- the position error value between the global positioning pose and the local positioning pose is less than the first sub-threshold
- the attitude angle error value is less than the second sub-threshold
- it is determined that the error value between the global positioning pose and the local positioning pose is less than the first threshold.
- the first sub-threshold is 50 meters and the second sub-threshold is 10 degrees.
- the embodiment of the present application does not limit the first sub-threshold and the second sub-threshold.
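The consistency check that selects between the two fusion strategies can be sketched as follows. The threshold values follow the examples above (50 meters, 10 degrees); the flat `(x, y, z, yaw_deg)` pose tuple is an assumed representation for illustration.

```python
import math

def choose_strategy(global_pose, local_pose, pos_thresh=50.0, ang_thresh=10.0):
    """Pick a fusion strategy from the pose consistency check.

    Poses are (x, y, z, yaw_deg) tuples here for illustration; the
    sub-thresholds follow the example values in the text (50 m, 10 deg).
    """
    pos_err = math.dist(global_pose[:3], local_pose[:3])
    # wrap the angular difference into (-180, 180] before comparing
    ang_err = abs((global_pose[3] - local_pose[3] + 180.0) % 360.0 - 180.0)
    if pos_err < pos_thresh and ang_err < ang_thresh:
        return "strategy_one"   # consistent: Schmidt-Kalman fusion update
    return "strategy_two"       # inconsistent: coordinate-system transformation

print(choose_strategy((1.0, 2.0, 0.0, 5.0), (1.2, 2.1, 0.0, 3.0)))   # → strategy_one
print(choose_strategy((100.0, 2.0, 0.0, 5.0), (1.2, 2.1, 0.0, 3.0))) # → strategy_two
```

The angle wrap matters in practice: a local yaw of -179 degrees and a global yaw of 179 degrees differ by only 2 degrees, not 358, and should still count as consistent.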
- the state vector of the IMU may be expressed as X_I.
- X_I can be predicted and updated based on the IMU data continuously collected by the IMU to obtain an estimated pose based on the IMU.
- the state vector determined from the camera's image frames may be expressed as X_C, which includes the image feature points of the image frames; alternatively, X_C can represent the attitude angle and position estimated based on the image feature points.
- the estimated pose of the IMU can be updated according to X C to achieve positioning fusion of the camera and IMU.
- the state vector of the local positioning pose based on the VIO output is expressed as X′ I .
- the state vector of the global positioning pose estimated based on the map feature point matching algorithm can be expressed as X S , where X S includes the position and attitude angle of the robot on the global map.
- fusing the global positioning pose and the local positioning pose may include: establishing a fusion state variable X, where the fusion state variable X includes the VIO local positioning variable and the global positioning variable.
- the fusion state variable X = [X′_I, X_S].
- the global positioning variable X S can be used as the interference parameter of the fusion state vector X. That is, during the time when the global positioning pose is not updated, the fusion state vector X is updated based on VIO. During the update process of X, the global positioning variable X S remains in the Schmidt state until the global positioning pose is updated.
- the global positioning pose update includes the process of executing the map feature point matching algorithm.
- when the global positioning pose is updated, the corresponding global positioning variable X_S is updated, and the fusion state variable X is updated based on X_S.
- the global positioning pose estimation is a low-frequency measurement; its update frequency is lower than that of the local positioning pose.
- the roll (roll angle) and pitch (pitch angle) of the VIO are observable, and there is no cumulative error in them. Therefore, roll and pitch do not need to be updated when the fusion state variable X is updated.
- the embodiment of the present application sets a Kalman gain influence factor. The Kalman gain is calculated based on the Kalman gain influence factor; the Kalman gain obtained in this way can constrain roll and pitch to remain unchanged before and after the fusion state variable X is updated.
- the estimated value of the fusion state variable X and the covariance of the estimated value can be updated according to the Kalman gain K.
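A generic Schmidt-Kalman measurement update, in which the Schmidt states receive a zero gain so that they do not jump at the update, can be sketched as follows. This is a standard textbook form (a full implementation would use the partitioned or Joseph-form covariance update), not the patent's exact gain-influence-factor construction.

```python
import numpy as np

def schmidt_kalman_update(x, P, H, R, z, schmidt_idx):
    """One Schmidt-Kalman measurement update.

    States listed in schmidt_idx (e.g. velocity and sensor bias) keep a
    zero Kalman gain, so their estimates do not jump at the global pose
    update, while the remaining states are corrected normally.
    """
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # unconstrained gain
    K[schmidt_idx, :] = 0.0              # freeze the Schmidt states
    x = x + K @ (z - H @ x)
    P = P - K @ H @ P                    # simplified covariance update
    return x, P

# 3-state toy example: [position, velocity, bias]; only position is observed.
x = np.array([0.0, 1.0, 0.1])
P = np.eye(3)
H = np.array([[1.0, 0.0, 0.0]])
x_new, P_new = schmidt_kalman_update(x, P, H, np.array([[0.5]]),
                                     np.array([1.0]), [1, 2])
print(x_new[1], x_new[2])   # → 1.0 0.1  (velocity and bias unchanged)
```

Zeroing the gain rows rather than removing the states from the vector keeps the cross-covariance between the frozen and active states in the filter, which is the point of the Schmidt formulation.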
- the second strategy of the embodiment of this application proposes a coordinate system update strategy; that is, the current pose update is converted into a coordinate system change.
- the local positioning pose can be updated according to the observation error δx to obtain the updated local positioning pose.
- the local positioning pose before the update is expressed as T_i^G.
- the world coordinate system change matrix can be determined according to the updated local positioning pose; according to the world coordinate system change matrix, coordinate system transformation can be performed on the fusion state vector X and the covariance matrix P corresponding to the fusion state vector X.
- the covariance matrix is transformed as P_new = J·P·J^T, where J represents the Jacobian matrix relative to X_G.
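The covariance transformation P_new = J·P·J^T can be illustrated with a small 2-D example. The form of J (a pure planar rotation) and the zero translation are assumptions for illustration; the patent's actual Jacobian depends on its full state layout.

```python
import numpy as np

def transform_state(X, P, J, delta):
    """Apply a world-coordinate-system change to the fused state.

    Instead of jumping the pose, the observation error is absorbed as a
    frame change: the state is mapped through J (plus an offset delta)
    and the covariance is transformed as P_new = J P J^T.
    """
    X_new = J @ X + delta
    P_new = J @ P @ J.T
    return X_new, P_new

# 2-D toy: rotate the world frame by 90 degrees, no translation.
theta = np.pi / 2
J = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X, P = np.array([1.0, 0.0]), np.diag([0.04, 0.01])
X_new, P_new = transform_state(X, P, J, np.zeros(2))
print(np.round(X_new, 6))           # → [0. 1.]
print(np.round(np.diag(P_new), 6))  # → [0.01 0.04]
```

Note that the uncertainty rotates with the state: the variance that was attached to the x axis now lies along y, so the filter's confidence stays consistent with the new frame instead of degrading at the update.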
- the embodiment of this application proposes a positioning method that fuses visual maps and VIO based on a filtering framework.
- this method embodiment fuses the measurement characteristics of visual map positioning and VIO pose estimation, thereby improving the robustness and efficiency of robot positioning. While improving positioning accuracy, positioning efficiency can be ensured and the negative impact of global measurement accuracy on VIO can be avoided.
- FIG. 3 is a schematic structural diagram of a positioning device provided by an embodiment of the present application. As shown in Figure 3, the positioning device includes:
- VIO module 201 is used to estimate the local positioning pose of the robot according to the VIO algorithm
- the global positioning module 202 is used to estimate the global positioning pose of the robot based on the map feature point matching algorithm
- the positioning fusion module 203 is used to determine whether the global positioning pose is consistent with the local positioning pose estimate; if the estimates are consistent, the pose fusion update of the robot is performed based on the fusion state vector, where the fusion state vector includes the VIO local positioning variables and the global positioning variables, the VIO local positioning variables include the robot's speed and sensor bias, and the speed and sensor bias maintain the Schmidt state; if the estimates are inconsistent, the observation error of the global positioning pose is determined, coordinate system transformation is performed on the fusion state vector based on the observation error, and the pose update of the robot is performed based on the fusion state vector after the coordinate system transformation.
- the conditions for the global positioning pose to be consistent with the local positioning pose estimate include one or a combination of the following:
- the error value between the global positioning pose and the local positioning pose is less than the first threshold
- the correlation coefficient between the multi-frame images captured by the robot's camera is greater than the second threshold, and the multi-frame images are used to determine the local positioning pose.
- the error value between the global positioning pose and the local positioning pose includes: a position error value and an attitude angle error value;
- the inlier indicator parameters include: the number of inliers, the inlier rate, and the average reprojection error of the inliers.
- the VIO local positioning variables also include: the rotation matrix and position of the robot; the global positioning variables include: the position and attitude angle of the robot on the global map.
- the positioning fusion module 203 is used to, when a new map feature point is recognized in the image frames captured by the robot's camera or a recognized map feature point disappears, keep the speed and sensor bias in the Schmidt state and perform the pose fusion update of the robot based on the fusion state vector.
- the positioning fusion module 203 is also used to determine the Kalman gain influence factor; calculate the Kalman gain based on the Kalman gain influence factor.
- the Kalman gain is used for the pose fusion update of the robot.
- the Kalman gain influence factor is used to constrain the roll angle and pitch angle contained in the fusion state vector to remain unchanged before and after the update during the pose fusion update process.
- the positioning fusion module 203 is used to calculate the observation error of the global positioning pose according to the Kalman gain.
- the positioning fusion module 203 is used to update the local positioning pose according to the observation error to obtain the updated local positioning pose; determine the world coordinate system change matrix according to the updated local positioning pose; and perform coordinate system transformation on the fusion state vector and the covariance matrix corresponding to the fusion state vector according to the world coordinate system change matrix.
- the positioning device can perform the positioning method related to the embodiment shown in FIG. 2 .
- parts that are not described in detail in this embodiment please refer to the relevant description of the embodiment shown in FIG. 2 .
- the division of the modules of the positioning device shown in Figure 3 is only a division of logical functions. In actual implementation, the modules can be fully or partially integrated into one physical entity, or they can be physically separated. These modules can all be implemented in the form of software called by processing elements, or all in the form of hardware; alternatively, some modules can be implemented in the form of software called by processing elements and others in the form of hardware.
- the VIO module 201 and the global positioning module 202 can be separately established processing elements, or can be integrated into a certain chip of the electronic device; the implementation of the other modules is similar. In addition, all or part of these modules can be integrated together or implemented independently. During implementation, each step of the above method or each of the above modules can be completed by integrated logic circuits in hardware or by instructions in the form of software in the processor element.
- the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application-Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field-Programmable Gate Arrays (FPGAs).
- these modules can be integrated together and implemented in the form of a system-on-a-chip (SoC).
- FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- the electronic device can be used to perform the above positioning method.
- the electronic device takes the form of a general computing device.
- the components of the electronic device may include, but are not limited to: one or more processors 410, communication interfaces 420, memory 430, and a communication bus 440 connecting different system components (including the processor 410, the communication interface 420, and the memory 430).
- the communication bus 440 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
- these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
- the memory 430 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or cache memory. The electronic device may further include other removable/non-removable, volatile/non-volatile computer system storage media.
- the memory 430 may include at least one program product having a set of (for example, at least one) program modules configured to execute the positioning method involved in the embodiment shown in FIG. 2 of the embodiment of the present application.
- a program/utility having a set of (at least one) program modules may be stored in memory 430.
- such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data. Each of these examples, or some combination thereof, may include the implementation of a network environment.
- the program module usually executes the positioning method involved in the embodiment shown in Figure 2 of the embodiment of this application.
- the processor 410 executes programs stored in the memory 430 to perform various functional applications and data processing, for example, implementing the positioning method involved in the embodiment shown in FIG. 2 of this specification.
- the present application also provides a non-transitory computer storage medium that can store a program which, when executed, performs some or all of the steps of the embodiments provided by the present application.
- the non-transitory computer-readable storage medium can be a magnetic disk, an optical disk, a read-only memory (ROM), a RAM, or the like.
- the embodiments of the present application also provide a computer program product.
- the computer program product contains executable instructions.
- when the executable instructions are executed on a computer, the computer is caused to perform some or all of the steps in the above method embodiments.
- the embodiments of the present application also provide a computer program storing at least one computer instruction, which is loaded and executed by a processor so that the device on which the computer program resides performs some or all of the steps of the above method embodiments.
- "At least one" refers to one or more, and "multiple" refers to two or more.
- "And/or" describes the relationship between associated objects and indicates that three relationships are possible. For example, "A and/or B" can mean: A alone, A and B together, or B alone, where A and B may be singular or plural.
- the character "/" generally indicates that the associated objects are in an "or" relationship.
- "At least one of the following" and similar expressions refer to any combination of these items, including any combination of a single item or plural items.
- at least one of a, b, and c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b, and c can be single or multiple.
- if any function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the part of the technical solution of the present application that is essential, or that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which can be a personal computer, a server, or a network device, etc.) to execute all or some of the steps of the methods described in the various embodiments of this application.
- the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Numerical Control (AREA)
Abstract
A positioning method includes the following steps: during positioning, estimating a local positioning pose of a robot according to a VIO algorithm (101); estimating a global positioning pose of the robot according to a map feature point matching algorithm (102); determining whether the global positioning pose is consistent with the local positioning pose estimate (103); if the estimates are consistent, performing a pose fusion update of the robot based on a fusion state vector (104), where the fusion state vector includes VIO local positioning variables and global positioning variables, the VIO local positioning variables include the velocity and sensor biases of the robot, and the velocity and sensor biases are kept in the Schmidt state; if the estimates are inconsistent, determining an observation error of the global positioning pose, performing a coordinate system transformation on the fusion state vector according to the observation error, and performing a pose update of the robot according to the transformed fusion state vector (105).
Description
This application claims priority to Chinese Patent Application No. 202210979139.X, entitled "Positioning Method and Device" and filed on August 16, 2022, the entire contents of which are incorporated herein by reference.
This application relates to the field of positioning technology, and in particular to a positioning method.
Positioning and navigation are core problems in robotics research. Positioning mainly determines the real-time position of a robot while it moves. Current robot positioning methods include local positioning methods based on the robot's sensors and global positioning methods based on, for example, the Global Positioning System (GPS).
Summary of the Invention
The embodiments of this application propose a robot positioning solution that fuses local positioning and global positioning. The solution includes the following aspects.
In a first aspect, an embodiment of this application provides a positioning method, including:
estimating a local positioning pose of a robot according to a Visual-Inertial Odometry (VIO) algorithm;
estimating a global positioning pose of the robot according to a map feature point matching algorithm;
determining whether the global positioning pose is consistent with the local positioning pose estimate;
if the estimates are consistent, performing a pose fusion update of the robot based on a fusion state vector, where the fusion state vector includes VIO local positioning variables and global positioning variables, the VIO local positioning variables include the velocity and sensor biases of the robot, and the velocity and sensor biases are kept in the Schmidt state;
if the estimates are inconsistent, determining an observation error of the global positioning pose, performing a coordinate system transformation on the fusion state vector according to the observation error, and performing a pose update of the robot according to the transformed fusion state vector.
Optionally, the condition that the global positioning pose is consistent with the local positioning pose estimate includes one or a combination of the following:
the inlier index parameters obtained when solving for the global positioning pose satisfy set conditions;
the error value between the global positioning pose and the local positioning pose is smaller than a first threshold;
the correlation coefficient between multiple image frames captured by the robot's camera is larger than a second threshold, where the multiple image frames are used to determine the local positioning pose.
Optionally, the error value between the global positioning pose and the local positioning pose includes a position error value and an attitude angle error value;
the inlier index parameters include the number of inliers, the inlier ratio, and the average reprojection error of the inliers.
Optionally, the VIO local positioning variables further include the rotation matrix and position of the robot;
the global positioning variables include the position and attitude angles of the robot in the global map.
Optionally, performing the pose fusion update of the robot based on the fusion state vector includes:
when a new map feature point is recognized in an image frame captured by the robot's camera, or a previously recognized map feature point disappears, keeping the velocity and sensor biases in the Schmidt state and performing the pose fusion update of the robot based on the fusion state vector.
Optionally, the method further includes:
determining a Kalman gain influence factor;
computing a Kalman gain based on the Kalman gain influence factor, where the Kalman gain is used for the pose fusion update of the robot, and the Kalman gain influence factor is used to constrain the roll angle and pitch angle contained in the fusion state vector to remain unchanged before and after the pose fusion update.
Optionally, determining the observation error of the global positioning pose includes:
computing the observation error of the global positioning pose according to the Kalman gain.
Optionally, performing the coordinate system transformation on the fusion state vector according to the observation error includes:
updating the local positioning pose according to the observation error to obtain an updated local positioning pose;
determining a world coordinate system change matrix according to the updated local positioning pose;
performing a coordinate system transformation on the fusion state vector and the covariance matrix corresponding to the fusion state vector according to the world coordinate system change matrix.
In a second aspect, an embodiment of this application provides a positioning apparatus, including:
a VIO module configured to estimate a local positioning pose of a robot according to a VIO algorithm;
a global positioning module configured to estimate a global positioning pose of the robot according to a map feature point matching algorithm;
a positioning fusion module configured to determine whether the global positioning pose is consistent with the local positioning pose estimate; if the estimates are consistent, perform a pose fusion update of the robot based on a fusion state vector, where the fusion state vector includes VIO local positioning variables and global positioning variables, the VIO local positioning variables include the velocity and sensor biases of the robot, and the velocity and sensor biases are kept in the Schmidt state; if the estimates are inconsistent, determine an observation error of the global positioning pose, perform a coordinate system transformation on the fusion state vector according to the observation error, and perform a pose update of the robot according to the transformed fusion state vector.
In a third aspect, an embodiment of this application provides an electronic device, including: at least one processor; and at least one memory communicatively connected to the processor, where the memory stores program instructions executable by the processor, and the processor invokes the program instructions to enable the electronic device to execute the method according to the first aspect or any implementation of the first aspect.
In a fourth aspect, an embodiment of this application provides a non-transitory computer-readable storage medium including a stored program, where, when the program runs, the device on which the computer-readable storage medium resides is controlled to execute the method according to the first aspect or any implementation of the first aspect.
In a fifth aspect, a computer program is provided, storing at least one computer instruction that is loaded and executed by a processor so that the device on which the computer program resides executes the method according to the first aspect or any implementation of the first aspect.
In a sixth aspect, a computer program product is provided, storing at least one computer instruction that is loaded and executed by a processor so that the device on which the computer program product resides executes the method according to the first aspect or any implementation of the first aspect.
In the solutions of the embodiments of this application, the local positioning pose of the robot is obtained based on the VIO algorithm, and the global positioning pose of the robot is obtained based on the map feature point matching algorithm. Different fusion strategies are proposed according to the consistency between the local positioning pose and the global positioning pose. When the global positioning pose is consistent with the local positioning pose, the velocity and sensor biases are set as Schmidt variables, ensuring that the velocity and sensor biases do not jump before and after the global pose update. When the global positioning pose is inconsistent with the local positioning pose, the fusion state vector is transformed to a new coordinate system based on the observation error, which avoids, as far as possible, the robot pose degrading during the update. Therefore, the solutions of the embodiments of this application can improve the accuracy and stability of robot positioning results.
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of this application, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic framework diagram of a positioning method provided by an embodiment of this application;
FIG. 2 is a flowchart of a positioning method provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of a positioning apparatus provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
While a robot moves, positioning determines its real-time position. Current robot positioning methods include local positioning methods based on the robot's sensors, global positioning methods based on GPS, and so on. However, local positioning methods based on the robot's sensors are prone to cumulative error drift, while global positioning methods based on GPS and the like are susceptible to signal interference. Therefore, making robot positioning results more accurate and stable is a problem that needs to be solved.
Referring to FIG. 1, a schematic framework diagram of a positioning method provided by an embodiment of this application. The method shown in FIG. 1 can be applied to a robot, or to a terminal device or server communicatively connected to the robot. Optionally, the robot can be a device with a walking function, such as a sweeping robot, a delivery robot, or an unmanned vehicle. Optionally, the robot can also be an aircraft or the like. As shown in FIG. 1, the robot is equipped with an Inertial Measurement Unit (IMU) and a camera. The IMU includes an accelerometer and a gyroscope, which collect the acceleration and the angular velocity of the robot, respectively. In the embodiments of this application, the acceleration and angular velocity collected by the IMU are referred to as IMU data for short. The camera captures image frames. A VIO algorithm can be executed on the IMU data and the image frames to obtain the local positioning pose of the robot. A map feature point matching algorithm can be executed on the image frames and a preconfigured global map to obtain the global positioning pose of the robot. In the embodiments of this application, according to the consistency between the global positioning pose and the local positioning pose, a fusion update strategy for the global positioning pose and the local positioning pose can be executed to obtain the positioning output of the robot.
The VIO algorithm is an odometry algorithm that fuses vision and IMU data to position the robot. The IMU returns IMU data at a high frequency, while the camera returns image frames at a lower frequency. When executing the VIO algorithm, the IMU data can be used for higher-frequency pose estimation, and the image frames captured by the camera can be used for pose updates, yielding the local positioning pose of the robot.
In some implementations, the fused positioning of the IMU and the camera can adopt a loosely coupled scheme or a tightly coupled scheme. In the loosely coupled scheme, the pose estimated from the IMU data and the pose estimated from the camera image frames (i.e., the image frames shown in FIG. 1, captured or returned by the camera) are fused directly, and the fusion process does not affect the IMU data or the image frame data. In some embodiments, a Kalman filter can be used to perform the fused positioning of the IMU and the camera. In the tightly coupled scheme, the IMU data and the feature data of the image frames are fused, and the pose of the robot is estimated based on the fused data; the fusion process affects the IMU data and the image frame data. In some embodiments, a Multi-State Constraint Kalman Filter (MSCKF), Robust Visual Inertial Odometry (ROVIO), or an Extended Kalman Filter (EKF) can be used to perform the fused positioning of the IMU and the camera.
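The loosely coupled scheme can be sketched with a minimal scalar Kalman filter. This is an illustrative toy example (a one-dimensional position state with made-up noise values, not the filter of this application): high-rate IMU predictions are corrected by one low-rate camera position measurement, and neither data stream is modified by the fusion.

```python
def kf_predict(x, P, v, dt, q):
    """Propagate a 1-D position state with an IMU-derived velocity."""
    x = x + v * dt          # constant-velocity motion model
    P = P + q               # process noise inflates uncertainty
    return x, P

def kf_update(x, P, z, r):
    """Fuse a camera-derived position measurement z with variance r."""
    K = P / (P + r)         # scalar Kalman gain
    x = x + K * (z - x)     # correct the prediction toward the measurement
    P = (1.0 - K) * P       # measurement reduces uncertainty
    return x, P

x, P = 0.0, 1.0
# ten high-rate IMU predictions, then one low-rate camera update
for _ in range(10):
    x, P = kf_predict(x, P, v=1.0, dt=0.1, q=0.01)
x, P = kf_update(x, P, z=1.05, r=0.05)
```

The prediction runs at the IMU rate and the update at the camera rate, mirroring the frequency asymmetry described above.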
The map feature point matching algorithm includes: extracting image feature points from the image frames captured by the camera, and matching the image feature points with the feature points of the global map; the global positioning pose of the robot can be estimated from the matching result. The global positioning pose of the robot can include its six-degree-of-freedom pose in the global map, for example, the three-axis position coordinates of the robot in the world coordinate system and the attitude angles around the three coordinate axes, i.e., the yaw, roll, and pitch angles.
In some embodiments, the global positioning pose estimate is a low-frequency measurement and is updated less frequently than the local positioning pose. If the local positioning pose is forcibly updated according to the global positioning pose, the positioning result may jump. Therefore, after computing the global positioning pose of the robot, the embodiments of this application further judge the consistency between the global positioning pose and the local positioning pose. Based on the consistency judgment, different pose fusion update strategies can be adopted, thereby ensuring the accuracy and stability of the positioning result.
Referring to FIG. 2, a flowchart of a positioning method provided by an embodiment of this application. As shown in FIG. 2, the processing steps of the method include:
101: Estimate the local positioning pose of the robot according to the VIO algorithm.
102: Estimate the global positioning pose of the robot according to the map feature point matching algorithm.
103: Determine whether the global positioning pose is consistent with the local positioning pose estimate.
104: If the estimates are consistent, use strategy one to perform the fusion update of the global positioning pose and the local positioning pose. In some implementations, strategy one includes: performing the pose fusion update of the robot based on a fusion state vector X. Optionally, the global positioning pose and the local positioning pose can be fused and updated based on the Schmidt-Kalman filter algorithm. In some implementations, the fusion state vector X is determined; it includes the VIO local positioning variables and the global positioning variables. Optionally, the VIO local positioning variables include the velocity and sensor biases of the robot, and the velocity and sensor biases are kept in the Schmidt state. The global positioning pose estimate is a low-frequency measurement and is updated less frequently than the local positioning pose. If the local positioning pose is updated based on the fusion state vector after the global positioning pose is obtained, the velocity and sensor biases in the local positioning pose may change abruptly (also called jumping). Therefore, when the global positioning pose is consistent with the local positioning pose, the velocity and sensor biases can be kept in the Schmidt state, so that when the pose is updated based on the fusion state vector, the velocity and sensor biases remain unchanged before and after the update. In some embodiments, the VIO local positioning variables can further include the rotation matrix and position of the robot. The global positioning variables include the position and attitude angles of the robot in the global map.
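The Schmidt treatment in step 104 can be sketched as a generic Schmidt-Kalman update. The function name, the 3-state layout, and all numeric values below are illustrative assumptions, not taken from this application: the gain rows belonging to the Schmidt states (here a velocity and a sensor bias) are zeroed, so those entries of the state are left unchanged by the update while the covariance is still propagated consistently.

```python
import numpy as np

def schmidt_kalman_update(x, P, z, H, R, schmidt_idx):
    """EKF-style update that freezes the states listed in schmidt_idx.

    x: (n,) state, P: (n, n) covariance, z: (m,) measurement,
    H: (m, n) measurement Jacobian, R: (m, m) measurement noise.
    """
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # standard Kalman gain
    K[schmidt_idx, :] = 0.0            # Schmidt states receive no correction
    x_new = x + K @ (z - H @ x)
    I = np.eye(len(x))
    # Joseph form stays valid for an arbitrary (here, truncated) gain
    P_new = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T
    return x_new, P_new

# state: [position, velocity, gyro bias]; only the position is observed
x = np.array([0.0, 1.0, 0.1])
P = np.eye(3) * 0.5
H = np.array([[1.0, 0.0, 0.0]])
z = np.array([0.2])
R = np.array([[0.1]])
x_new, P_new = schmidt_kalman_update(x, P, z, H, R, schmidt_idx=[1, 2])
# the velocity and the bias are identical before and after the update
```

Keeping the velocity and bias rows of the gain at zero is what "remaining in the Schmidt state" amounts to in filter terms: the low-frequency global measurement corrects the pose without introducing jumps in those variables.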
In some embodiments, the local positioning pose of the robot includes attitude angle parameters: the yaw, roll, and pitch angles. In the VIO algorithm, the roll and pitch angles are observable, so they have no cumulative error. For this reason, the embodiments of this application determine a Kalman gain influence factor. The Kalman gain can be computed from the Kalman gain influence factor and is used for the pose fusion update of the robot. The Kalman gain influence factor constrains the roll and pitch angles contained in the fusion state vector to remain unchanged before and after the pose fusion update.
105: If the estimates are inconsistent, use strategy two to perform the fusion update of the global positioning pose and the local positioning pose. In some implementations, strategy two includes: determining the observation error of the global positioning pose; performing a coordinate system transformation on the fusion state vector X according to the observation error; and performing the pose update of the robot according to the transformed fusion state vector.
In some embodiments, the condition for determining that the global positioning pose is consistent with the local positioning pose estimate includes one or a combination of the following:
(1) The inlier index parameters obtained when solving for the global positioning pose satisfy set conditions.
Optionally, the Random Sample Consensus (RANSAC) algorithm can be used to solve for the global positioning pose of the robot, and the inlier index parameters can be obtained from this solution. The inlier index parameters can include: the number of inliers, the inlier ratio, and the average reprojection error of the inliers. The inlier index parameters satisfy the set conditions when the number of inliers is larger than a set number, the inlier ratio is larger than a set ratio, and the average reprojection error of the inliers is smaller than a set error value. For example, the set number is 10 and the set inlier ratio is 0.3. The embodiments of this application do not limit the set number, the set inlier ratio, or the set error value.
(2) The error value between the global positioning pose and the local positioning pose is smaller than a first threshold.
Optionally, both the global positioning pose and the local positioning pose contain position parameters and attitude angle parameters. Accordingly, the error value between the two poses can include a position error value and an attitude angle error value, for which a first sub-threshold and a second sub-threshold can be set, respectively. When the position error value between the global positioning pose and the local positioning pose is smaller than the first sub-threshold and the attitude angle error value is smaller than the second sub-threshold, the error value between the two poses is determined to be smaller than the first threshold. For example, the first sub-threshold is 50 meters and the second sub-threshold is 10 degrees. The embodiments of this application do not limit the first or second sub-threshold.
(3) The correlation coefficient between the multiple image frames captured by the robot's camera and used to determine the local positioning pose is larger than a second threshold.
If the judgment result of any one of (1)-(3) above, or of a combination of them, is yes, the global positioning pose is determined to be consistent with the local positioning pose; otherwise, the two poses are determined to be inconsistent.
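Conditions (1)-(3) can be combined into a single predicate. All names in the sketch below are assumptions, and the default thresholds simply reuse the illustrative values from the text (10 inliers, 0.3 inlier ratio, 50 meters, 10 degrees); the reprojection and correlation thresholds are made up, since the application leaves them open.

```python
def poses_consistent(n_inliers, inlier_ratio, mean_reproj_err,
                     pos_err_m, att_err_deg, frame_corr,
                     min_inliers=10, min_ratio=0.3, max_reproj=2.0,
                     max_pos_err=50.0, max_att_err=10.0, min_corr=0.8):
    """Return True when the global pose agrees with the local (VIO) pose."""
    # (1) RANSAC inlier metrics of the global pose solution
    inliers_ok = (n_inliers > min_inliers and
                  inlier_ratio > min_ratio and
                  mean_reproj_err < max_reproj)
    # (2) position and attitude-angle error sub-thresholds
    error_ok = pos_err_m < max_pos_err and att_err_deg < max_att_err
    # (3) correlation between the frames used for the local pose
    corr_ok = frame_corr > min_corr
    # the strictest combination: demand all three conditions
    return inliers_ok and error_ok and corr_ok

ok = poses_consistent(42, 0.6, 0.8, pos_err_m=1.2,
                      att_err_deg=2.0, frame_corr=0.95)
```

Requiring all three conditions is only one choice; per the text, any single condition or any combination may serve as the consistency test.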
The positioning method involved in the embodiments of this application is described below. In some embodiments, the state vector of the IMU can be expressed as $X_I$ (the explicit form below is reconstructed from the component definitions that follow):
$X_I = \left[\, {}^{I_k}_{G}\bar{q}^{T},\; {}^{G}v_{I_k}^{T},\; {}^{G}p_{I_k}^{T},\; b_{g_k}^{T},\; b_{a_k}^{T} \,\right]^{T}$
where ${}^{I_k}_{G}\bar{q}$ is the quaternion of the rotation matrix from the world coordinate system to the IMU coordinate system at time k;
${}^{G}v_{I_k}$ is the velocity of the IMU in the world coordinate system at time k;
${}^{G}p_{I_k}$ is the position of the IMU in the world coordinate system at time k;
$b_{g_k}$ is the angular velocity (gyroscope) bias of the IMU at time k;
$b_{a_k}$ is the acceleration (accelerometer) bias of the IMU at time k;
$b_{g_k}$ and $b_{a_k}$ are collectively referred to as the sensor biases.
In some embodiments, $X_I$ can be predicted and updated according to the IMU data continuously collected by the IMU, yielding an IMU-based pose estimate.
In some embodiments, the state vector determined from the camera's image frames can be expressed as $X_C$. $X_C$ can contain the image feature points of the image frames, or $X_C$ can represent the attitude angles and position estimated from the image feature points. Optionally, the IMU pose estimate can be updated according to $X_C$, realizing the fused positioning of the camera and the IMU. The state vector of the local positioning pose output by the VIO is denoted $X'_I$.
In some embodiments, the state vector of the global positioning pose estimated based on the map feature point matching algorithm can be expressed as $X_S$, which includes the position and attitude angles of the robot in the global map.
In some embodiments, fusing the global positioning pose and the local positioning pose can include: establishing the fusion state vector X, which includes the VIO local positioning variables and the global positioning variables. In some implementations, the fusion state vector $X = [X'_I, X_S]$. When the Schmidt-Kalman filter algorithm is used to fuse and update the global positioning pose and the local positioning pose, the global positioning variables $X_S$ can be treated as nuisance parameters of the fusion state vector X. That is, during the time when the global positioning pose is not updated, the fusion state vector X is updated based on the VIO; while X is updated, the global positioning variables $X_S$ are kept in the Schmidt state until the global positioning pose is updated. The global positioning pose update includes: during the execution of the map feature point matching algorithm, when a new map feature point is recognized in an image frame captured by the robot's camera, or a previously recognized map feature point disappears, the global positioning pose is determined to be updated, and the global positioning variables $X_S$ are updated accordingly. When $X_S$ is updated, the fusion state vector X is updated based on $X_S$.
Further, regarding fusion update strategy one: the global positioning pose estimate is a low-frequency measurement and is updated less frequently than the local positioning pose. When the global positioning variables $X_S$ are updated, forcibly updating the VIO local positioning variables $X'_I$ according to $X_S$ may cause the velocity and sensor biases in $X'_I$ to jump. Therefore, when strategy one is used to update the fusion state vector X, the velocity and sensor biases contained in X are set as Schmidt variables. When a new map feature point is recognized in an image frame captured by the robot's camera, or a previously recognized image feature point disappears, that is, when the fusion state vector X needs to be updated through the global positioning variables $X_S$, the velocity and sensor biases contained in X are kept in the Schmidt state, i.e., the velocity and sensor biases remain unchanged before and after the update.
In some embodiments, the roll and pitch angles of the VIO are observable and have no cumulative error, so they do not need to be updated when the fusion state vector X is updated. Optionally, to constrain the roll and pitch angles contained in X from being updated during the update process, the embodiments of this application introduce a Kalman gain influence factor. The Kalman gain is computed based on this influence factor, and the resulting Kalman gain constrains the roll and pitch angles to remain unchanged before and after the update of the fusion state vector X.
In some embodiments, the Kalman gain influence factor can be expressed as $K_{yaw}$, defined in terms of $e_3 = [0, 0, 1]^T$ and the rotation matrix $R$. The Kalman gain $K$ is then computed based on $K_{yaw}$, and the estimate of the fusion state vector X and the covariance of that estimate are updated according to $K$.
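One plausible way to realize this constraint, consistent with the definition $e_3 = [0, 0, 1]^T$ above, is to project the attitude correction onto the world vertical axis before applying it, so that the roll and pitch components of the correction vanish. The sketch below is an illustrative reconstruction, not necessarily the exact $K_{yaw}$ of this application:

```python
import numpy as np

e3 = np.array([0.0, 0.0, 1.0])

def yaw_only_correction(dtheta_world):
    """Keep only the yaw (world z-axis) component of an attitude error."""
    # outer(e3, e3) projects a 3-vector onto the vertical axis,
    # zeroing the roll and pitch components of the correction
    return np.outer(e3, e3) @ dtheta_world

dtheta = np.array([0.02, -0.01, 0.15])   # roll, pitch, yaw error (rad)
corrected = yaw_only_correction(dtheta)
```

Folding such a projection into the gain rows that act on the attitude is one way the resulting gain can leave roll and pitch unchanged across the update, as the text requires.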
For fusion update strategy two: because the global positioning pose differs considerably from the local positioning pose, updating the pose with strategy one in this case may degrade the robot pose. Therefore, strategy two of the embodiments of this application proposes a coordinate system update strategy, that is, the current pose update is converted into a change of coordinate system. In some implementations, the observation error $\delta x$ of the global positioning pose is computed from the Kalman gain $K$ obtained as in strategy one, with $\delta x = K r$, where $r$ is the residual of the global positioning pose. The local positioning pose can then be updated according to the observation error $\delta x$, yielding the updated local positioning pose $\hat{T}^{G}_{I}$, where the local positioning pose before the update is denoted $T^{G}_{I}$. From $T^{G}_{I}$ and $\hat{T}^{G}_{I}$, the world coordinate system change matrix can be determined.
According to the world coordinate system change matrix, the fusion state vector X and the covariance matrix P corresponding to X are transformed to the new coordinate system, with the covariance matrix $P_{new} = J P J^{T}$, where $J$ denotes the Jacobian matrix with respect to $X_G$.
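The frame change of strategy two can be sketched as follows, using an illustrative yaw-plus-translation parameterization of the world coordinate system change (the 3-vector position state and all numbers are assumptions): the position state is re-expressed in the new frame, and its covariance is mapped as P_new = J P J^T.

```python
import numpy as np

def world_frame_change(yaw, t):
    """Rigid change of the world frame: rotation about z by yaw, then shift t."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R, np.asarray(t, dtype=float)

def transform_state_and_cov(p, P, yaw, t):
    """Re-express a position state p and its covariance P in the new frame."""
    R, t = world_frame_change(yaw, t)
    p_new = R @ p + t
    J = R                      # Jacobian of p_new with respect to p
    P_new = J @ P @ J.T        # covariance maps as P_new = J P J^T
    return p_new, P_new

p = np.array([1.0, 0.0, 0.0])
P = np.diag([0.04, 0.01, 0.01])
p_new, P_new = transform_state_and_cov(p, P, yaw=np.pi / 2, t=[0.0, 0.0, 0.0])
```

Because the state and its covariance are rotated together, the relative geometry of the filter is preserved: the large correction is absorbed by the frame rather than injected into the pose estimate.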
The embodiments of this application propose a positioning method that fuses a visual map and VIO within a filtering framework. The method embodiment fuses the measurement characteristics of visual map positioning with the VIO pose estimate, improving the robustness and efficiency of robot positioning. While improving positioning accuracy, it maintains positioning efficiency and avoids the negative influence of global measurement accuracy on the VIO.
Corresponding to the above positioning method, an embodiment of this application provides a positioning apparatus. Referring to FIG. 3, a schematic structural diagram of a positioning apparatus provided by an embodiment of this application. As shown in FIG. 3, the positioning apparatus includes:
a VIO module 201 configured to estimate the local positioning pose of the robot according to the VIO algorithm;
a global positioning module 202 configured to estimate the global positioning pose of the robot according to the map feature point matching algorithm;
a positioning fusion module 203 configured to determine whether the global positioning pose is consistent with the local positioning pose estimate; if the estimates are consistent, perform the pose fusion update of the robot based on the fusion state vector, where the fusion state vector includes the VIO local positioning variables and the global positioning variables, the VIO local positioning variables include the velocity and sensor biases of the robot, and the velocity and sensor biases are kept in the Schmidt state; if the estimates are inconsistent, determine the observation error of the global positioning pose, perform a coordinate system transformation on the fusion state vector according to the observation error, and perform the pose update of the robot according to the transformed fusion state vector.
In some implementations, the condition that the global positioning pose is consistent with the local positioning pose estimate includes one or a combination of the following:
the inlier index parameters obtained when solving for the global positioning pose satisfy set conditions;
the error value between the global positioning pose and the local positioning pose is smaller than a first threshold;
the correlation coefficient between multiple image frames captured by the robot's camera is larger than a second threshold, where the multiple image frames are used to determine the local positioning pose.
In some implementations, the error value between the global positioning pose and the local positioning pose includes a position error value and an attitude angle error value; the inlier index parameters include the number of inliers, the inlier ratio, and the average reprojection error of the inliers.
In some implementations, the VIO local positioning variables further include the rotation matrix and position of the robot; the global positioning variables include the position and attitude angles of the robot in the global map.
In some implementations, the positioning fusion module 203 is configured to, when a new map feature point is recognized in an image frame captured by the robot's camera or a previously recognized map feature point disappears, keep the velocity and sensor biases in the Schmidt state and perform the pose fusion update of the robot based on the fusion state vector.
In some implementations, the positioning fusion module 203 is further configured to determine a Kalman gain influence factor and to compute a Kalman gain based on the Kalman gain influence factor, where the Kalman gain is used for the pose fusion update of the robot, and the Kalman gain influence factor is used to constrain the roll and pitch angles contained in the fusion state vector to remain unchanged before and after the pose fusion update.
In some implementations, the positioning fusion module 203 is configured to compute the observation error of the global positioning pose according to the Kalman gain.
In some implementations, the positioning fusion module 203 is configured to update the local positioning pose according to the observation error to obtain an updated local positioning pose, determine a world coordinate system change matrix according to the updated local positioning pose, and perform a coordinate system transformation on the fusion state vector and the covariance matrix corresponding to the fusion state vector according to the world coordinate system change matrix.
The positioning apparatus of the embodiments of this application can execute the positioning method involved in the embodiment shown in FIG. 2. For parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 2. For the execution process and technical effect of this technical solution, see the description in the embodiment shown in FIG. 2, which is not repeated here.
It should be understood that the division of the modules of the positioning apparatus shown in FIG. 3 is merely a division of logical functions. In an actual implementation, the modules can be fully or partially integrated into one physical entity, or they can be physically separate. All of these modules can be implemented as software invoked by a processing element, or all as hardware, or some as software invoked by a processing element and some as hardware. For example, the VIO module 201 and the global positioning module 202 can be separate processing elements, or they can be integrated in a chip of the electronic device. The other modules are implemented similarly. In addition, all or some of these modules can be integrated together or implemented independently. In the implementation process, each step of the above method, or each of the above modules, can be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, these modules can be one or more integrated circuits configured to implement the above method, such as one or more Application Specific Integrated Circuits (ASICs), one or more microprocessors (Digital Signal Processors, DSPs), or one or more Field Programmable Gate Arrays (FPGAs). As another example, these modules can be integrated together and implemented in the form of a System-On-a-Chip (SOC).
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of this application. The electronic device can be used to execute the above positioning method. As shown in FIG. 4, the electronic device takes the form of a general-purpose computing device. The components of the electronic device can include, but are not limited to: one or more processors 410, a communication interface 420, a memory 430, and a communication bus 440 connecting the different system components (including the processor 410, the communication interface 420, and the memory 430).
The communication bus 440 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device typically includes a variety of computer-system-readable media. These media can be any available media accessible to the electronic device, including volatile and non-volatile (also called non-transitory) media and removable and non-removable media. The memory 430 can include computer-system-readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or cache memory. The electronic device can further include other removable/non-removable, volatile/non-volatile computer system storage media. The memory 430 can include at least one program product having a set of (for example, at least one) program modules configured to execute the positioning method involved in the embodiment shown in FIG. 2 of this application.
A program/utility having a set of (at least one) program modules can be stored in the memory 430. Such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules generally execute the positioning method involved in the embodiment shown in FIG. 2 of this application.
By running the programs stored in the memory 430, the processor 410 executes various functional applications and data processing, for example, implementing the positioning method involved in the embodiment shown in FIG. 2 of this specification.
In some implementations, this application also provides a non-transitory computer storage medium that can store a program which, when executed, performs some or all of the steps of the embodiments provided by this application. The non-transitory computer-readable storage medium can be a magnetic disk, an optical disk, a read-only memory (ROM), a RAM, or the like.
Exemplarily, an embodiment of this application also provides a computer program product containing executable instructions which, when executed on a computer, cause the computer to perform some or all of the steps in the above method embodiments.
In some implementations, an embodiment of this application also provides a computer program storing at least one computer instruction, which is loaded and executed by a processor so that the device on which the computer program resides performs some or all of the steps of the above method embodiments.
In the embodiments of this application, "at least one" refers to one or more, and "multiple" refers to two or more. "And/or" describes the relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" can mean: A alone, A and B together, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, and c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b, and c can be single or multiple.
A person of ordinary skill in the art will appreciate that the units and algorithm steps described in the embodiments disclosed herein can be implemented with electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled artisan may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the working processes of the systems, apparatuses, and units described above, reference can be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, if any function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of this application that is essential, or that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which can be a personal computer, a server, or a network device, etc.) to execute all or some of the steps of the methods described in the various embodiments of this application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above are merely exemplary embodiments of this application. Any changes or substitutions that a person skilled in the art can readily conceive within the technical scope disclosed in this application shall be covered by the protection scope of this application. The protection scope of this application shall be subject to the protection scope of the claims.
Claims (20)
- A positioning method, comprising: estimating a local positioning pose of a robot according to a visual-inertial odometry (VIO) algorithm; estimating a global positioning pose of the robot according to a map feature point matching algorithm; determining whether the global positioning pose is consistent with the local positioning pose estimate; if the estimates are consistent, performing a pose fusion update of the robot based on a fusion state vector, wherein the fusion state vector comprises VIO local positioning variables and global positioning variables, the VIO local positioning variables comprise a velocity and sensor biases of the robot, and the velocity and the sensor biases are kept in a Schmidt state; and if the estimates are inconsistent, determining an observation error of the global positioning pose, performing a coordinate system transformation on the fusion state vector according to the observation error, and performing a pose update of the robot according to the transformed fusion state vector.
- The method according to claim 1, wherein the condition that the global positioning pose is consistent with the local positioning pose estimate comprises one or a combination of the following: inlier index parameters obtained when solving for the global positioning pose satisfy set conditions; an error value between the global positioning pose and the local positioning pose is smaller than a first threshold; a correlation coefficient between multiple image frames captured by a camera of the robot is larger than a second threshold, the multiple image frames being used to determine the local positioning pose.
- The method according to claim 2, wherein the error value between the global positioning pose and the local positioning pose comprises a position error value and an attitude angle error value; and the inlier index parameters comprise a number of inliers, an inlier ratio, and an average reprojection error of the inliers.
- The method according to any one of claims 1-3, wherein the VIO local positioning variables further comprise a rotation matrix and a position of the robot; and the global positioning variables comprise a position and attitude angles of the robot in a global map.
- The method according to any one of claims 1-4, wherein performing the pose fusion update of the robot based on the fusion state vector comprises: when a new map feature point is recognized in an image frame captured by a camera of the robot, or a previously recognized map feature point disappears, keeping the velocity and the sensor biases in the Schmidt state and performing the pose fusion update of the robot based on the fusion state vector.
- The method according to any one of claims 1-5, further comprising: determining a Kalman gain influence factor; and computing a Kalman gain based on the Kalman gain influence factor, wherein the Kalman gain is used for the pose fusion update of the robot, and the Kalman gain influence factor is used to constrain a roll angle and a pitch angle contained in the fusion state vector to remain unchanged before and after the pose fusion update.
- The method according to claim 6, wherein determining the observation error of the global positioning pose comprises: computing the observation error of the global positioning pose according to the Kalman gain.
- The method according to claim 7, wherein performing the coordinate system transformation on the fusion state vector according to the observation error comprises: updating the local positioning pose according to the observation error to obtain an updated local positioning pose; determining a world coordinate system change matrix according to the updated local positioning pose; and performing a coordinate system transformation on the fusion state vector and a covariance matrix corresponding to the fusion state vector according to the world coordinate system change matrix.
- A positioning apparatus, comprising: a visual-inertial odometry (VIO) module configured to estimate a local positioning pose of a robot according to a VIO algorithm; a global positioning module configured to estimate a global positioning pose of the robot according to a map feature point matching algorithm; and a positioning fusion module configured to determine whether the global positioning pose is consistent with the local positioning pose estimate; if the estimates are consistent, perform a pose fusion update of the robot based on a fusion state vector, wherein the fusion state vector comprises VIO local positioning variables and global positioning variables, the VIO local positioning variables comprise a velocity and sensor biases of the robot, and the velocity and the sensor biases are kept in a Schmidt state; and if the estimates are inconsistent, determine an observation error of the global positioning pose, perform a coordinate system transformation on the fusion state vector according to the observation error, and perform a pose update of the robot according to the transformed fusion state vector.
- The apparatus according to claim 9, wherein the condition that the global positioning pose is consistent with the local positioning pose estimate comprises one or a combination of the following: inlier index parameters obtained when solving for the global positioning pose satisfy set conditions; an error value between the global positioning pose and the local positioning pose is smaller than a first threshold; a correlation coefficient between multiple image frames captured by a camera of the robot is larger than a second threshold, the multiple image frames being used to determine the local positioning pose.
- The apparatus according to claim 10, wherein the error value between the global positioning pose and the local positioning pose comprises a position error value and an attitude angle error value; and the inlier index parameters comprise a number of inliers, an inlier ratio, and an average reprojection error of the inliers.
- The apparatus according to any one of claims 9-11, wherein the VIO local positioning variables further comprise a rotation matrix and a position of the robot; and the global positioning variables comprise a position and attitude angles of the robot in a global map.
- The apparatus according to any one of claims 9-12, wherein the positioning fusion module is configured to, when a new map feature point is recognized in an image frame captured by a camera of the robot or a previously recognized map feature point disappears, keep the velocity and the sensor biases in the Schmidt state and perform the pose fusion update of the robot based on the fusion state vector.
- The apparatus according to any one of claims 9-13, wherein the positioning fusion module is further configured to determine a Kalman gain influence factor and to compute a Kalman gain based on the Kalman gain influence factor, wherein the Kalman gain is used for the pose fusion update of the robot, and the Kalman gain influence factor is used to constrain a roll angle and a pitch angle contained in the fusion state vector to remain unchanged before and after the pose fusion update.
- The apparatus according to claim 14, wherein the positioning fusion module is configured to compute the observation error of the global positioning pose according to the Kalman gain.
- The apparatus according to claim 15, wherein the positioning fusion module is configured to update the local positioning pose according to the observation error to obtain an updated local positioning pose, determine a world coordinate system change matrix according to the updated local positioning pose, and perform a coordinate system transformation on the fusion state vector and a covariance matrix corresponding to the fusion state vector according to the world coordinate system change matrix.
- An electronic device, comprising: at least one processor; and at least one memory communicatively connected to the processor, wherein the memory stores program instructions executable by the processor, and the processor invokes the program instructions to enable the electronic device to execute the positioning method according to any one of claims 1 to 8.
- A non-transitory computer-readable storage medium comprising a stored program, wherein, when the program runs, the device on which the non-transitory computer-readable storage medium resides is controlled to execute the positioning method according to any one of claims 1 to 8.
- A computer program storing at least one computer instruction, wherein the at least one computer instruction is loaded and executed by a processor so that the device on which the computer program resides executes the positioning method according to any one of claims 1 to 8.
- A computer program product storing at least one computer instruction, wherein the at least one computer instruction is loaded and executed by a processor so that the device on which the computer program product resides executes the positioning method according to any one of claims 1 to 8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210979139.XA CN117629204A (zh) | 2022-08-16 | 2022-08-16 | Positioning method and device |
CN202210979139.X | 2022-08-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024037295A1 true WO2024037295A1 (zh) | 2024-02-22 |
Family
ID=89940686
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/109080 WO2024037295A1 (zh) | 2023-07-25 | Positioning |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117629204A (zh) |
WO (1) | WO2024037295A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180188032A1 (en) * | 2017-01-04 | 2018-07-05 | Qualcomm Incorporated | Systems and methods for using a global positioning system velocity in visual-inertial odometry |
- CN110706279A (zh) * | 2019-09-27 | 2020-01-17 | Tsinghua University | Full-process pose estimation method based on a global map and multi-sensor information fusion |
- CN111136660A (zh) * | 2020-02-19 | 2020-05-12 | Tsinghua Shenzhen International Graduate School | Robot pose positioning method and system |
- KR20210026795A (ko) * | 2019-09-02 | 2021-03-10 | Kyungpook National University Industry-Academic Cooperation Foundation | Hybrid indoor positioning system using an IMU sensor and a camera |
- CN112734852A (zh) * | 2021-03-31 | 2021-04-30 | Zhejiang Sineva Intelligent Technology Co., Ltd. | Robot mapping method, apparatus, and computing device |
- CN114001733A (zh) * | 2021-10-28 | 2022-02-01 | Zhejiang University | Map-based consistent and efficient visual-inertial positioning algorithm |
- 2022-08-16: Application CN202210979139.XA filed in China (CN); published as CN117629204A, status active, pending
- 2023-07-25: Application PCT/CN2023/109080 filed with WIPO; published as WO2024037295A1
Also Published As
Publication number | Publication date |
---|---|
CN117629204A (zh) | 2024-03-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23854199; Country of ref document: EP; Kind code of ref document: A1 |