WO2022061495A1 - Parameter calibration method, apparatus, and movable platform - Google Patents

Parameter calibration method, apparatus, and movable platform

Info

Publication number
WO2022061495A1
WO2022061495A1 (PCT/CN2020/116757)
Authority
WO
WIPO (PCT)
Prior art keywords
state data
axial
electronic device
visual sensors
attitude
Prior art date
Application number
PCT/CN2020/116757
Other languages
English (en)
French (fr)
Inventor
周游
熊策
徐彬
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/116757
Publication of WO2022061495A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/20: Instruments for performing navigational calculations
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346: Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Definitions

  • The present application relates to the technical field of computer vision, and in particular to a parameter calibration method, device, and movable platform.
  • Electronic devices such as driverless cars, aircraft, and VR glasses are usually equipped with multiple vision sensors.
  • The images collected by the multiple vision sensors are used to measure the distance of objects in three-dimensional space, to locate the electronic device, or for other purposes.
  • Taking the use of vision sensors to locate an electronic device as an example, when the images collected by the vision sensors are used for localization, the pose parameters between different vision sensors and between each vision sensor and the body of the electronic device seriously affect the positioning accuracy. Therefore, it is necessary to accurately determine the pose parameters between different vision sensors and between each vision sensor and the body of the electronic device.
  • In view of this, the present application provides a parameter calibration method, device, and movable platform.
  • According to a first aspect, a parameter calibration method is provided. The method is applied to an electronic device, the electronic device includes at least two vision sensors, and the at least two vision sensors are mounted on the body of the electronic device through a support structure. The method includes:
  • acquiring state data of the at least two vision sensors, the state data including at least two of the following: temperature state data of the support structure of the at least two vision sensors, motion state data collected by attitude measurement units disposed on the at least two vision sensors, and image data collected by the at least two vision sensors;
  • determining pose parameters of the at least two vision sensors according to the state data, the pose parameters including relative pose parameters between any two of the at least two vision sensors, and/or relative pose parameters between any one of the at least two vision sensors and the body of the electronic device.
  • According to a second aspect, a parameter calibration device is provided. The device is applied to an electronic device, the electronic device includes at least two vision sensors, and the at least two vision sensors are mounted on the body of the electronic device through a support structure. The device includes a processor, a memory, and a computer program stored in the memory and executable by the processor; when the processor executes the computer program, the following steps are implemented:
  • acquiring state data of the at least two vision sensors, the state data including at least two of the following: temperature state data of the support structure of the at least two vision sensors, motion state data collected by attitude measurement units disposed on the at least two vision sensors, and image data collected by the at least two vision sensors;
  • determining pose parameters of the at least two vision sensors according to the state data, the pose parameters including relative pose parameters between any two of the at least two vision sensors, and/or relative pose parameters between any one of the at least two vision sensors and the body of the electronic device.
  • According to a third aspect, a movable platform is provided, including at least two vision sensors and the parameter calibration device according to the above second aspect; the vision sensors are mounted on the body of the movable platform through a support structure, and the vision sensors are provided with attitude measurement units.
  • With the solution of the present application, at least two of the following kinds of data can be used: the temperature state data of the support structure of any two vision sensors in the electronic device, the motion state data collected by the attitude measurement units disposed on the vision sensors, and the image data collected by the two vision sensors, to determine the relative pose parameters between the two vision sensors, or the relative pose parameters between any vision sensor and the body of the electronic device.
  • By determining the change in the vision sensors' pose parameters caused by deformation of the support structure due to temperature changes and body vibration, more accurate pose parameters can be obtained; at the same time, the pose parameters can be further corrected in combination with the image data collected by the binocular vision sensor, yielding still more accurate pose parameters.
  • FIG. 1 is a flowchart of a parameter calibration method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a temperature-angle change curve according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of attitude angles of a binocular vision sensor in three axial directions according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of performing epipolar correction on images collected by two vision sensors according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram showing deviations in the ordinates of pixel points of the same three-dimensional point on images collected by two vision sensors according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a parameter calibration method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a parameter calibration device according to an embodiment of the present application.
  • During operation, electronic devices usually use the images collected by multiple vision sensors to measure object distances, to locate the electronic device, or for other purposes.
  • When used to locate an electronic device, the pose of the vision sensors can be determined from a series of images collected by different vision sensors together with the pose parameters between the vision sensors; the position of the body of the electronic device can then be determined from the pose parameters between the vision sensors and the body, thereby locating the electronic device.
  • Most electronic devices calibrate the pose parameters between different vision sensors, and between the vision sensors and the body, when they leave the factory. However, during operation, ambient temperature changes, mechanical vibration, storage temperature differences, and similar factors cause these parameters to drift from the initially calibrated values, making the positioning results inaccurate.
  • For example, the vision sensor is usually fixed in the electronic device through a bracket. As the ambient temperature changes and the electronic device vibrates during operation, the bracket deforms to some extent, so that the pose parameters between the vision sensors, and between each vision sensor and the body, change; if the previously calibrated pose parameters are still used, the positioning results will be inaccurate.
  • To solve this problem, the present application provides a parameter calibration method suitable for an electronic device including at least two vision sensors, where the vision sensors can be mounted on the body of the electronic device through a support structure.
  • At least two of the following kinds of data are used to calibrate the pose parameters of the vision sensors: the temperature data of the support structure, the motion state data collected by the attitude measurement units disposed on the vision sensors, and the image data collected by the vision sensors, so as to determine the pose parameters between different vision sensors, as well as the pose parameters between any one vision sensor and the body of the electronic device. The method includes the following steps:
  • S102: Acquire state data of the at least two vision sensors, the state data including at least two of the following: temperature state data of the support structure of the at least two vision sensors, motion state data collected by attitude measurement units disposed on the at least two vision sensors, and image data collected by the at least two vision sensors;
  • S104: Determine pose parameters of the at least two vision sensors according to the state data, the pose parameters including relative pose parameters between any two of the at least two vision sensors, and/or relative pose parameters between any one of the at least two vision sensors and the body of the electronic device.
  • the parameter calibration method of the present application may be performed by an electronic device including at least two visual sensors, or may be performed by other devices communicatively connected to the electronic device, such as a control terminal communicatively connected to the electronic device, or a cloud server.
  • the parameter calibration method of the present application may be executed in real time during the working process of the electronic device, or may be executed once every preset time period, which is not limited herein.
  • The electronic device of the present application can be any electronic device including at least two vision sensors, where the vision sensors can be mounted on the body of the electronic device through a support structure, for example a bracket.
  • The electronic device may be a driverless car, a drone, an unmanned vehicle, an intelligent robot, VR glasses, or the like.
  • Since the vision sensors are mounted on the body of the electronic device through a support structure, changes in the ambient temperature around the support structure, or vibration generated while the electronic device is working, deform the support structure to some extent, so that the relative positions of the vision sensors change, and the relative position between the vision sensors and the body changes as well.
  • To determine the current pose parameters between different vision sensors, or between any vision sensor and the body, at least two of the following can be acquired: the temperature state data of the support structure of the vision sensors, the motion state data collected by the attitude measurement units disposed on the vision sensors, and the image data collected by the vision sensors.
  • The pose parameters of the vision sensors are then calibrated from these data, fully accounting for the influence of ambient temperature and vibration on the deformation of the support structure; the image data collected by the vision sensors can also be used to further refine the pose parameters, so that more accurate pose parameters can be obtained.
  • In some embodiments, the electronic device may be a movable platform,
  • such as a drone, a driverless car, an unmanned vehicle, or an intelligent robot.
  • The movable platform includes a power component for driving the movable platform to move; while driving the platform, the power component causes the body to vibrate, which deforms the support structure of the vision sensors.
  • Meanwhile, as the movable platform moves, the vision sensors can collect image data at different poses.
  • an attitude measurement unit can be set in the vision sensor to measure the deformation of the support structure in three axial directions.
  • the attitude measurement unit may be any sensor for measuring the motion attitude of an object.
  • For example, the attitude measurement unit may be an inertial measurement unit (IMU), a gyroscope, an accelerometer, or another such sensor. Because of the deformation of the support structure, the relative positional relationship between the vision sensors and the body of the electronic device changes, and the relative positions of the vision sensors at the two ends of the support structure also change. Therefore, the changes of the above relative pose parameters caused by the deformation of the support structure can be determined from the motion state parameters measured by the attitude measurement unit.
  • The attitude measurement unit can be disposed behind the vision sensor; one attitude measurement unit can be provided for each vision sensor on the same support structure, or only for one of the multiple vision sensors on the same support structure, which is not limited in this application.
  • In some embodiments, the temperature data of the support structure can be determined from the temperature of the current environment. For example, if the electronic device is in a non-working state and generates little internal heat, the ambient temperature can be taken as the temperature of the support structure. Alternatively, the correspondence between the temperature of the support structure and the ambient temperature can be predetermined, and the support structure temperature estimated from the current ambient temperature and this correspondence. In some embodiments, to obtain more accurate temperature state data, a temperature sensor may also be provided in the electronic device to measure the temperature state data of the support structure.
  • The validity of the above three types of data differs with the working state of the electronic device.
  • For example, when an electronic device has just been powered on, its movable parts have not yet started to move, so the body barely vibrates. At this stage, deformation of the support structure due to vibration need not be considered,
  • and there is no need to use the motion state data collected by the attitude measurement unit to calibrate the pose of the vision sensors.
  • Likewise, before the electronic device starts to move, the images collected by its vision sensors are all captured from the same pose, so the pose parameters of the vision sensors cannot be determined from these images.
  • Therefore, in some embodiments, the current working state of the electronic device may be determined first, the state data of the corresponding types acquired according to that working state, and the pose parameters of the vision sensors in the current working state determined from the acquired state data.
  • The correspondence between working states and state-data types may be preset; when acquiring state data, the types of state data corresponding to the current working state are determined according to this correspondence.
  • the working state of the electronic device may include a first working state, a second working state or a third working state.
  • In the first working state, the electronic device is powered on, that is, each component of the electronic device is energized.
  • In the second working state, the movable parts of the electronic device perform a self-check operation: after power-on, each component of the electronic device is checked to detect whether it is faulty.
  • In the third working state, the electronic device moves in response to a motion-triggering instruction; after the self-check is completed and each component is determined to be normal, the device can enter normal operation.
  • In some embodiments, the time from when the electronic device is powered on to when its movable components perform the self-check operation is longer than the time from power-on to when the temperature sensor enters normal operation. That is, by the time the movable components perform the self-check, the temperature sensor of the electronic device is already in a normal working state and can provide valid temperature state data.
  • In the first working state, the body does not vibrate, and the image data collected by the vision sensors correspond to a single pose and are therefore unsuitable for determining the pose parameters of the vision sensors;
  • the valid state data that can be acquired is thus only the temperature state data collected by the temperature sensor.
  • In the second working state, the movable parts have started to move and the body vibrates, but the device has not yet begun to move, so the image data still correspond to a single pose;
  • the valid state data acquired at this time may be the temperature state data measured by the temperature sensor and the motion state data collected by the attitude measurement unit.
  • In the third working state, the temperature sensor is already working, the movable parts are moving so the body vibrates, and because the electronic device itself moves, the vision sensors can collect image data at different poses. The valid state data acquirable at this stage may therefore be the temperature data collected by the temperature sensor, the motion state data collected by the attitude measurement unit, and the image data collected by the vision sensors.
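The mapping between working states and usable state data can be summarized directly in code. Below is a minimal sketch (names are illustrative, not from the patent) encoding the three working states and the data types the text treats as valid in each:

```python
from enum import Enum, auto

class WorkState(Enum):
    POWERED_ON = auto()  # first working state: device just energized
    SELF_CHECK = auto()  # second working state: movable parts self-checking
    MOVING     = auto()  # third working state: moving after a motion trigger

# State data considered valid in each working state, per the description above.
VALID_DATA = {
    WorkState.POWERED_ON: {"temperature"},
    WorkState.SELF_CHECK: {"temperature", "motion"},
    WorkState.MOVING:     {"temperature", "motion", "image"},
}

def acquire_state_data(state: WorkState) -> set:
    """Return the types of state data to acquire for the given working state."""
    return VALID_DATA[state]
```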
  • In some embodiments, the initial values of the pose parameters can be determined first according to one or both of the temperature state data and the motion state data, and the initial values then corrected according to the image data collected by the vision sensors to obtain the calibration values of the pose parameters.
  • For example, when the electronic device has just been powered on, the change of the pose parameters can be determined from the temperature data measured by the temperature sensor, and the initial value of the pose parameter obtained from the default value of the pose parameter and the determined change, where the default value can be the value obtained by pre-calibrating the pose parameters of the vision sensors when the electronic device left the factory, or the stored value obtained the last time the pose parameters were calibrated.
  • When the electronic device enters the self-check state, the body vibrates because the movable parts start to move.
  • In this case, the change of the pose parameters can be determined according to the temperature data measured by the temperature sensor
  • together with the motion state data measured by the attitude measurement unit, and the initial value of the pose parameter then determined from the default value and the change value.
  • A relatively accurate initial value of the pose parameters can thus be obtained from the temperature state data and motion state data, after which the initial value is calibrated using the image data collected by the vision sensors to obtain a more accurate pose parameter as the final calibration value.
  • In this way, the changes of the pose parameters caused by the influence of temperature and vibration on the deformation of the support structure are fully considered,
  • and the image data are used to further correct the initial values,
  • so that relatively accurate pose parameters can be obtained as the final calibration values.
  • Of course, when no usable image data is available, the initial value can be used as the final value of the pose parameter, and that pose parameter used to locate the electronic device.
  • The initial value determined from the temperature state data and motion state data can then still serve as the pose parameter and yield relatively accurate positioning results.
  • At a given temperature, the deformation of the support structure is essentially fixed. For example, if at a certain temperature the bending deformation of the support structure is 5°, then when the temperature is 40°C the bending deformation may be 10°; once the temperature is determined, the deformation value is basically determined. Therefore, in some embodiments, the correspondence between temperature and the change of the pose parameters can be predetermined, as shown in FIG. 2, which shows the angle-change curve at different temperatures (taking yaw as an example); the initial value of the pose parameter is then determined according to the temperature state data and the correspondence between temperature and the pose-parameter change.
  • Specifically, the relative pose parameters between two vision sensors include the relative position in three-dimensional space and the relative angles. For the angles of any two vision sensors about the three axes, i.e., the pitch angle, yaw angle, and roll angle, the correspondence between temperature and the angle change of each axis can be predetermined; the angle change of each axis is then determined from the current temperature measured by the temperature sensor and this correspondence, and the temperature-corrected initial value of each attitude angle is determined from the default attitude angles of the two vision sensors about each axis and the determined angle changes.
  • Since the length deformation of the support structure is usually small, it can generally be ignored, i.e., the translation matrix between the two vision sensors is considered unchanged.
  • Of course, the correspondence between temperature and the absolute angle of each axis can also be predetermined,
  • i.e., the pitch, yaw, and roll angles at different temperatures can be determined, and the initial value then determined directly from the temperature-angle correspondence. A minimal lookup sketch follows.
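As an illustration of the temperature lookup just described, the sketch below interpolates a hypothetical factory-measured temperature vs. angle-change curve (cf. FIG. 2); the curve values are invented for the example, and a real device would store one such curve per attitude angle:

```python
import numpy as np

# Hypothetical temperature -> yaw-change calibration curve (degrees).
curve_temp_c   = np.array([-10.0, 0.0, 10.0, 20.0, 30.0, 40.0])
curve_dyaw_deg = np.array([-0.20, -0.08, 0.00, 0.06, 0.15, 0.27])

def temperature_corrected_yaw(default_yaw_deg: float, temp_c: float) -> float:
    """Initial yaw = default value (factory or last calibration) plus the
    temperature-induced change read off the calibration curve."""
    dyaw = float(np.interp(temp_c, curve_temp_c, curve_dyaw_deg))
    return default_yaw_deg + dyaw

# Example: bracket currently at 35 degC, default inter-camera yaw of 0.5 deg.
yaw_initial = temperature_corrected_yaw(0.5, 35.0)
```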
  • Since the attitude measurement unit can detect the attitude angle, angular velocity, or angular acceleration of the vision sensor about each axis,
  • the angle change of any two vision sensors about each axis can be determined according to the motion state data measured by the attitude measurement unit,
  • and the initial values of the pose parameters determined according to the determined angle changes.
  • In some embodiments, an attitude measurement unit may be disposed behind each vision sensor, and the angle change of each of the two vision sensors about each axis determined from the data measured by its attitude measurement unit, so that the attitude angle between the two vision sensors can be determined.
  • In some embodiments, an attitude measurement unit is also provided at the center of the body of the electronic device, so that the angle change between a vision sensor and the body about each axis can be determined from the data measured by the body-center attitude measurement unit together with the data measured by the vision sensor's attitude measurement unit,
  • and the attitude angle between the vision sensor and the body of the electronic device thus determined.
  • Since the vision sensors collect multiple images at different poses while the electronic device moves, the pose of the vision sensor can also be determined by tracking and matching feature points across these images.
  • In some embodiments, the initial values of the relative pose parameters of the two vision sensors can first be determined according to the temperature state data collected by the temperature sensor and the motion state data collected by the attitude measurement unit,
  • and the obtained initial values then corrected by the image data collected by the two vision sensors to obtain more accurate calibration values.
  • Specifically, the initial value of the first axial attitude may be corrected according to the image data collected by the two vision sensors to obtain the calibration value of the first axial attitude,
  • where the first axial attitude is the angle corresponding to the axes perpendicular to the baseline of the two vision sensors; the calibration value of the second axial attitude, i.e., the angle corresponding to the axial direction along the baseline of the two vision sensors,
  • is then determined according to the calibration value of the first axial attitude and the images collected by either of the two vision sensors.
  • As shown in FIG. 3, C1 and C2 are two vision sensors located at the two ends of the support structure, where the first axial attitude is the angle corresponding to the axes perpendicular to the baseline of the two vision sensors, i.e., the pitch angle and the roll angle, and the second axial attitude is the angle corresponding to the axial direction along the baseline of the two vision sensors, i.e., the yaw angle.
  • Suppose the images collected by the two vision sensors at the same time are image 1 and image 2, respectively; a certain point in three-dimensional space projects to pixel P1 in image 1 and pixel P2 in image 2.
  • The deviation of the abscissas (i.e., the parallax) of the pixel coordinates of P1 and P2 is related to the yaw angle.
  • The deviation of the ordinates of the pixel coordinates of P1 and P2 is related to the pitch angle and the roll angle.
  • Therefore, epipolar rectification can be performed on the two images according to the initial values of the relative pose parameters of the two vision sensors, as shown in FIG. 4. In theory, after image 1 and image 2 are rectified, the ordinates of the pixel coordinates of P1 and P2 should coincide, i.e., the two points should lie in the same row; however, a deviation between the initial values of the pitch and roll angles and their actual values causes a deviation between the ordinates of the pixel coordinates of P1 and P2, as shown in FIG. 5, and the larger this deviation, the less accurate the pitch and roll angles are.
  • In some embodiments, when the initial value of the first axial attitude is corrected according to the images collected by the two vision sensors to obtain its calibration value, the first feature points in the image collected by one of the two vision sensors,
  • and their first matching points in the image collected by the other vision sensor, may be determined first; the deviation of the ordinates of
  • the pixel coordinates of the first feature points and the first matching points is then determined according to the initial value of the first axial attitude, and the calibration value of the first axial attitude determined according to this deviation.
  • For example, a plurality of first feature points may be extracted from image 1 collected by vision sensor C1; an existing feature point extraction algorithm can be used, which is not repeated here. The first matching points of the first feature points in image 2 collected by vision sensor C2 are then determined, epipolar rectification is performed on the images collected by the two vision sensors according to the initial values of the pose parameters of the binocular vision sensor, the deviation of the ordinates of the first feature points and the first matching points is determined, and the calibration value of the first axial attitude (i.e., the pitch angle and the roll angle) is determined according to this deviation.
  • In some embodiments, the calibration value of the first axial attitude may be determined according to the magnitude of the deviation.
  • If the deviation is smaller than a preset threshold, the initial value of the first axial attitude is already relatively accurate, and the initial value can be used as the calibration value.
  • If the deviation is greater than the preset threshold, the initial value of the first axial attitude is not accurate enough, and the calibration value of the first axial attitude is determined according to the first feature points and the first matching points.
  • The preset threshold can be set flexibly according to actual needs; a sketch of this deviation check follows.
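The row-deviation test can be sketched with standard OpenCV rectification primitives. This is a minimal illustration under assumed inputs (calibrated intrinsics K1/K2 with distortion D1/D2, and the initial extrinsics R, t derived from the temperature/IMU-corrected initial values), not the patent's literal implementation:

```python
import cv2
import numpy as np

def mean_row_deviation(pts1, pts2, K1, D1, K2, D2, R, t, image_size):
    """Rectify matched pixel coordinates with the *initial* extrinsics and
    return the mean |v1 - v2| row deviation; a large value indicates that
    the initial pitch/roll values are inaccurate."""
    R1, R2, P1, P2, _, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, t)
    p1 = np.asarray(pts1, np.float64).reshape(-1, 1, 2)
    p2 = np.asarray(pts2, np.float64).reshape(-1, 1, 2)
    r1 = cv2.undistortPoints(p1, K1, D1, R=R1, P=P1).reshape(-1, 2)
    r2 = cv2.undistortPoints(p2, K2, D2, R=R2, P=P2).reshape(-1, 2)
    return float(np.mean(np.abs(r1[:, 1] - r2[:, 1])))

# If the deviation is below the preset threshold, keep the initial pitch/roll
# as the calibration values; otherwise re-estimate them (see the next step).
```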
  • In some embodiments, an essential matrix may be determined according to the first feature points and the first matching points, and the calibration value of the first axial attitude then determined according to the essential matrix.
  • Since the essential matrix can be decomposed to obtain the matrix corresponding to the pose parameters between the binocular vision sensors, multiple pairs of feature points and matching points can be extracted from the images collected by the left and right sensors of the binocular vision sensor, the essential matrix obtained from these pairs, and the pitch angle, roll angle, and yaw angle obtained by decomposing the essential matrix.
  • However, the yaw angle obtained in this way is not accurate, so the obtained pitch and roll angles are taken as the calibration values, the yaw angle is discarded, and the precise yaw angle is then solved further on the basis of the accurate pitch and roll angles.
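A hedged sketch of this step using OpenCV's RANSAC essential-matrix estimation and pose recovery (the Euler-axis convention shown is an assumption; which rotation axis corresponds to pitch/roll/yaw depends on the camera frame used):

```python
import cv2
from scipy.spatial.transform import Rotation

def pitch_roll_from_essential(pts1, pts2, K):
    """Estimate the inter-camera rotation from matched points via the essential
    matrix; keep pitch/roll and discard the less reliable yaw, as in the text."""
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # decompose E into R and t
    yaw, pitch, roll = Rotation.from_matrix(R).as_euler('zyx', degrees=True)
    return pitch, roll   # calibration values for the first axial attitude
```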
  • In some embodiments, when the calibration value of the second axial attitude is determined according to the calibration value of the first axial attitude and the images collected by either of the two vision sensors, multiple angles may first be determined according to the initial value of the second axial attitude and a preset deviation range; a target angle is then determined from these multiple angles, according to the calibration value of the first axial attitude and multiple frames of images collected by one of the two vision sensors, as the calibration value of the second axial attitude.
  • For example, assume the yaw angle varies within a range of ±3°, and multiple angles are then sampled with a preset accuracy. Taking a gradient of 0.5° as an example, angles such as 0°, ±0.5°, ±1°, ±1.5°, ..., ±3° can be obtained, where the specific accuracy and angle range can be set according to actual needs. After obtaining these multiple angles, the yaw angle is set equal to each candidate angle in turn, the feature points of the images collected by the left and right sensors of the two vision sensors are matched, and the depth value of each feature point determined, so as to obtain the three-dimensional information of each feature point.
  • The camera pose information is then determined based on the PnP (Perspective-n-Point) algorithm and the obtained second feature points and second matching points, where the PnP algorithm estimates the camera pose from a series of 3D points in the world coordinate system
  • and their corresponding 2D pixel coordinates in the image.
  • In some embodiments, the PnP algorithm combined with the RANSAC (Random Sample Consensus) algorithm can be used to solve the pose,
  • which also identifies the second feature points that are suitable for solving the camera pose through the PnP algorithm. A percentage is then determined from the number of second feature points suitable for solving the camera pose and the total number of second feature points, and the yaw value for which this percentage is largest is used as the yaw calibration value, as sketched below.
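The yaw grid search can be sketched as follows. The triangulate callable is a hypothetical placeholder for the step that recomputes feature depths under each yaw hypothesis; the inlier ratio comes from cv2.solvePnPRansac:

```python
import numpy as np
import cv2

def calibrate_yaw(candidates_deg, triangulate, pts2d_next, K):
    """Grid-search the yaw angle: re-triangulate stereo features for each
    candidate (depths depend on the assumed yaw), solve PnP + RANSAC against
    the next frame, and keep the yaw with the highest inlier ratio."""
    best_yaw, best_ratio = None, -1.0
    for yaw in candidates_deg:
        pts3d = triangulate(yaw)  # Nx3 feature points under this yaw hypothesis
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d.astype(np.float64), pts2d_next.astype(np.float64), K, None)
        if not ok or inliers is None:
            continue
        ratio = len(inliers) / len(pts3d)   # fraction of pose-consistent points
        if ratio > best_ratio:
            best_yaw, best_ratio = yaw, ratio
    return best_yaw

# Example candidate set: initial yaw +/- 3 deg in 0.1 deg steps.
# candidates = yaw_initial + np.arange(-3.0, 3.0 + 1e-9, 0.1)
```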
  • Through the above embodiments, the calibration values of the attitude angles between any two vision sensors of the electronic device can be determined. Since the translation matrix between any two vision sensors is not greatly affected by temperature and vibration, its default value can be used, so that the complete relative pose parameters between the two vision sensors are obtained.
  • In some embodiments, after the relative pose parameters between any two vision sensors of the electronic device have been determined, the relative pose parameters between any vision sensor and the body of the electronic device can also be corrected according to the relative pose parameters between any two vision sensors and the image data collected by the vision sensors, so as to obtain relatively accurate relative pose parameters between any vision sensor and the body of the electronic device.
  • Taking a drone as an example, a drone is usually provided with multiple sets of binocular vision sensors, e.g., front, rear, top, bottom, left, and right.
  • The two sensors of each set of binocular vision sensors are fixed by a bracket.
  • Usually, an inertial measurement unit is also provided at the body of the drone, and the drone can be positioned through the binocular vision sensors and the inertial measurement unit.
  • The accuracy of positioning is affected by the accuracy of the relative pose parameters between the binocular vision sensors and of the relative pose parameters between the binocular vision sensors and the drone body,
  • where the relative pose parameters between the binocular vision sensors and the drone body can be determined via the relative pose parameters between the binocular vision sensors and the inertial measurement unit provided on the body.
  • During the operation of the drone, the structure of the bracket fixing the binocular vision sensors deforms under the influence of temperature and vibration; therefore, if the pre-calibrated pose parameters are used for drone positioning, the positioning will be inaccurate,
  • because the relative pose parameters between the binocular vision sensors, and between the binocular vision sensors and the drone body, have changed.
  • To solve this, a temperature sensor may be provided in the drone to detect the temperature of the bracket,
  • and an inertial measurement unit may be disposed behind each vision sensor to detect the deformation of the bracket caused by vibration during the operation of the drone.
  • The temperature data measured by the temperature sensor, the motion state data measured by the inertial measurement units, and the image data collected by the binocular vision sensors are then used to calibrate, in real time, the relative pose parameters between the binocular vision sensors
  • and the relative pose parameters between any vision sensor and the drone body.
  • The specific calibration process is shown in FIG. 6. Since the validity of the state data collected by the temperature sensor, the inertial measurement unit, and the binocular vision sensors for calibrating the above pose parameters differs across the drone's working stages, different state data can be used to calibrate the pose parameters at different stages.
  • The specific process is as follows:
  • After the drone is powered on, the temperature sensor enters the working state
  • and can measure the temperature of the bracket of the binocular vision sensor.
  • Each attitude angle corresponds to a temperature vs. angle-change curve, so the changes of the pitch angle, yaw angle, and roll angle between the binocular vision sensors can be determined according to the temperature currently measured by the temperature sensor.
  • When the drone enters the self-check stage, the inertial measurement units disposed on the binocular vision sensors start to work
  • and can detect the vibration of the bracket.
  • The inertial measurement unit includes a gyroscope and an accelerometer.
  • The angular velocity is measured mainly by the gyroscope and then integrated to obtain the angle change of the vision sensor about the three axes; the angle change of the drone body is determined in the same way according to the inertial measurement unit provided on the drone body,
  • so that the change of the relative attitude angle between the binocular vision sensors and the drone body can be determined.
  • The following is an example of using the angular velocity measured by the inertial measurement unit to estimate the angle change between the binocular vision sensors and the drone body.
  • The specific calculation is the standard first-order quaternion propagation of the gyroscope output, written here in a form consistent with the symbol definitions that follow:

  $$q_{k+1} = q_k \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\,\big(\omega_m - (b_\omega)_k\big)\,\Delta t \end{bmatrix}, \qquad (b_\omega)_{k+1} = (b_\omega)_k,$$

  where $q_{k+1}$ is the attitude quaternion at the current image moment, $(b_\omega)_{k+1}$ is the gyroscope zero bias at the current image moment, $q_k$ is the attitude quaternion at the previous image moment, $(b_\omega)_k$ is the gyroscope zero bias at the previous image moment, $\omega_m$ is the measured angular velocity (a symbol introduced here for clarity), and $\Delta t$ is the time difference between the two image frames: for example, at 20 Hz it is roughly 50 ms, although for an accurate calculation the exposure time difference of the two frames should be used. The attitude quaternions $q_{k+1}$ and $q_k$ can be converted into rotation matrices $R$, from which the angle change between the two moments is obtained.
  • In some embodiments, the angle change between the two binocular vision sensors can be further refined according to the angle change measured by the inertial measurement unit.
  • In some embodiments, the ambient temperature measured by the temperature sensor can also be used to perform temperature compensation on the inertial measurement unit, so that the influence of ambient temperature on the measurements of the inertial measurement unit is accurately accounted for, and the relative attitude angles between the vision sensors, and between the vision sensors and the body, are determined more accurately. A minimal compensation sketch follows.
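The patent does not spell out the compensation model; a common minimal choice, shown purely as an assumption, is a linear temperature model for the gyroscope bias:

```python
def compensated_rate(omega_meas, temp_c, b0, k_temp, t_ref=25.0):
    """Subtract a temperature-dependent gyro bias (linear model, assumed):
    bias(T) = b0 + k_temp * (T - t_ref), where b0 and k_temp come from bench
    calibration of the IMU at several temperatures."""
    return omega_meas - (b0 + k_temp * (temp_c - t_ref))
```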
  • Through the above two stages, the change of the pose parameters of the binocular vision sensor caused by temperature and vibration can be determined, and from the default values of the pose parameters and the determined changes,
  • the initial values of the pose parameters can be obtained.
  • The default value can be the pose parameter value calibrated when the drone left the factory, or the pose parameter value obtained in the drone's last self-calibration.
  • After the drone takes off, the binocular vision sensor can collect image sequences at different poses,
  • and the image sequence collected by the binocular vision sensor is used to further calibrate the determined initial values of the pose parameters to obtain more accurate calibration values.
  • The specific process of calibrating the initial values of the pose parameters using the image sequence collected by the binocular vision sensor is as follows:
  • First, the pitch angle between the binocular vision sensors in the non-baseline direction (the up-down nodding direction)
  • and the roll angle (the left-right head-tilt direction) can be calculated through the essential matrix, as follows:
  • epipolar rectification is performed on the images collected by the left and right sensors of the binocular vision sensor according to the initial values of the pose parameters.
  • After rectification, corresponding points in the two images should lie on parallel epipolar lines, i.e., in the same image row,
  • with a certain deviation allowed.
  • The larger the Δv deviation, the more inaccurate the determined initial values of the pitch and roll angles of the pose parameters.
  • Specifically, a feature point y can be extracted from the image collected by the left vision sensor and tracked and matched to the point y' in the image collected by the right vision sensor, giving the matching feature point pair (y, y'). If the Δv deviation between the two is less than a certain threshold, the current pitch angle and roll angle are considered relatively accurate, and the initial values can be used as the final calibration values.
  • Otherwise, the essential matrix can be determined according to the multiple pairs of feature points and matching points in the images collected by the left and right vision sensors.
  • (a) Feature extraction is performed on the images; general algorithms such as Harris, SIFT, SURF, or ORB can be used for feature point extraction.
  • A sparse method can be used to first extract the feature points of the image;
  • corner detection is used here.
  • Optional corner detection algorithms include FAST (Features from Accelerated Segment Test), SUSAN, and the Harris operator; for example, the Harris corner detection algorithm can be used.
  • With the structure tensor $A$ constructed as

  $$A = \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix},$$

  where $I_x$ and $I_y$ are the gradient information of a point on the image in the x and y directions, respectively, the corner response function $M_c$ can be defined as

  $$M_c = \det(A) - \kappa \, \operatorname{trace}^2(A),$$

  where $\det(A)$ is the determinant of matrix $A$, $\operatorname{trace}(A)$ is the trace of matrix $A$, and $\kappa$ is the parameter that adjusts the sensitivity. A threshold $M_{th}$ is set; when $M_c > M_{th}$, the point is considered a feature point.
  • (b) Feature point tracking: the displacement h of a feature point between the previous and current image frames can be obtained by iterative solution of the tracking equations (e.g., a Lucas-Kanade-style iteration).
  • Using the above steps (a) and (b), the feature points of the images collected by each sensor of the binocular vision sensor can be tracked and matched, and at the same time the feature points of consecutive frames from the same vision sensor can be matched.
  • From the matched left-right feature points, the parallax of each feature point can be calculated, and from the parallax the depth information of the feature point, so as to obtain the three-dimensional information of the feature point. A combined sketch of steps (a) and (b) follows.
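The sketch below strings together steps (a) and (b) with standard OpenCV calls: Harris-flavored corner extraction, pyramidal Lucas-Kanade matching from the left image to the right image, and depth from the horizontal parallax. The focal length fx (pixels) and baseline_m (meters) are assumed calibration inputs, and the images are assumed grayscale:

```python
import cv2
import numpy as np

def track_and_depth(img_left, img_right, fx, baseline_m):
    # (a) corner extraction (Harris response via goodFeaturesToTrack)
    pts_l = cv2.goodFeaturesToTrack(img_left, maxCorners=500, qualityLevel=0.01,
                                    minDistance=10, useHarrisDetector=True)
    # (b) iterative pyramidal LK matching into the right image
    pts_r, status, _ = cv2.calcOpticalFlowPyrLK(img_left, img_right, pts_l, None)
    ok = status.ravel() == 1
    pts_l = pts_l.reshape(-1, 2)[ok]
    pts_r = pts_r.reshape(-1, 2)[ok]
    disparity = pts_l[:, 0] - pts_r[:, 0]        # horizontal parallax per feature
    valid = disparity > 1e-3
    depth = fx * baseline_m / disparity[valid]   # stereo depth from parallax
    return pts_l[valid], pts_r[valid], depth
```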
  • The PnP algorithm with the RANSAC (Random Sample Consensus) algorithm can then be used to solve the camera pose information between the previous and current frame images.
  • Assuming the yaw angle change should be within ±3° of the default parameters or the last calibration result (of course, other values can be set according to requirements),
  • the yaw values [-3.0°, -2.9°, -2.8°, ..., 2.8°, 2.9°, 3.0°] can be substituted one by one into the calculation of the above two steps to obtain different depth values, and the PnP algorithm with RANSAC then used, for each value, to calculate
  • the percentage of feature points suitable for solving the camera pose through the PnP algorithm out of the total number of feature points. Then, within the ±3° range of yaw changes, the yaw angle for which this percentage is largest is used as the yaw calibration value.
  • In some embodiments, these poses may also be added to the state quantities of a Kalman filter, which is used to iteratively compute a more accurate value.
  • After the relative pose parameters between the binocular vision sensors are determined, the pose parameters between any vision sensor and the body can be further corrected by combining the above pose parameters with the images collected by the binocular vision sensor, so as to obtain more accurate
  • calibration values of the pose parameters.
  • Through the above method, the relative pose parameters between the binocular vision sensors, and the relative pose parameters between any vision sensor and the body, can be calibrated in real time, yielding more accurate pose parameters and thus more accurate positioning.
  • Further, the present application also provides a parameter calibration apparatus. The apparatus is applied to an electronic device, the electronic device includes at least two vision sensors, and the at least two vision sensors are mounted on the body of the electronic device through a support structure.
  • As shown in FIG. 7, the apparatus includes a processor 71, a memory 72, and a computer program stored in the memory 72 and executable by the processor 71.
  • When the processor 71 executes the computer program, the following steps are implemented:
  • acquiring state data of the at least two vision sensors, the state data including at least two of the following: temperature state data of the support structure of the at least two vision sensors, motion state data collected by attitude measurement units disposed on the at least two vision sensors, and image data collected by the at least two vision sensors;
  • determining pose parameters of the at least two vision sensors according to the state data, the pose parameters including relative pose parameters between any two of the at least two vision sensors, and/or relative pose parameters between any one of the at least two vision sensors and the body of the electronic device.
  • In some embodiments, the temperature state data is acquired by a temperature sensor provided in the electronic device.
  • In some embodiments, when the processor is configured to acquire the state data of the at least two vision sensors, it is specifically configured to:
  • determine the current working state of the electronic device, and acquire the state data of the corresponding types according to the working state.
  • the working state includes: a first working state, a second working state or a third working state;
  • In the first working state, the electronic device is powered on;
  • in the second working state, the movable part of the electronic device performs a self-check operation;
  • in the third working state, the electronic device moves in response to a motion-triggering instruction.
  • In some embodiments, the time from when the electronic device is powered on to when the movable part of the electronic device enters the self-check operation is greater than the time from when the electronic device is powered on to when the temperature sensor enters normal operation.
  • In some embodiments, when the processor is configured to acquire the state data of the corresponding type according to the working state, it is specifically configured such that:
  • in the first working state, the acquired state data includes the temperature state data;
  • in the second working state,
  • the acquired state data includes the temperature state data and the motion state data;
  • and in the third working state,
  • the acquired state data includes the temperature state data, the motion state data, and the image data.
  • In some embodiments, when the processor is configured to determine the pose parameters of the at least two vision sensors according to the state data, it is specifically configured to: determine the initial value of the pose parameter according to the temperature state data and/or the motion state data;
  • and correct the initial value according to the image data to obtain the calibration value of the pose parameter.
  • In some embodiments, when the processor is configured to determine the initial value of the pose parameter according to the temperature state data, it is specifically configured to:
  • determine the initial value of the pose parameter according to the temperature state data and the correspondence between temperature and the change amount of the pose parameter.
  • In some embodiments, when the processor is configured to determine the initial value of the pose parameter according to the motion state data, it is specifically configured to: determine the angle change of the vision sensors about each axis according to the motion state data;
  • and determine the initial value of the pose parameter according to the angle change.
  • In some embodiments, the pose parameter includes a relative pose parameter between any two of the at least two vision sensors, and when the processor is configured to correct the initial value according to the image data
  • to obtain the calibration value of the pose parameter, it is specifically configured to: correct the initial value of the first axial attitude according to the images collected by the any two vision sensors to obtain the calibration value of the first axial attitude;
  • and determine the calibration value of the second axial attitude according to the calibration value of the first axial attitude and the image collected by either of the any two vision sensors, where the first axial attitude is the angle corresponding to the axes perpendicular to the baseline of the any two vision sensors, and the second axial attitude is the angle corresponding to the axial direction along the baseline of the any two vision sensors.
  • In some embodiments, when the processor is configured to correct the initial value of the first axial attitude according to the images collected by any two of the at least two vision sensors to obtain the calibration value of the first axial attitude,
  • it is specifically configured to:
  • determine the first feature points in the image collected by one of the two vision sensors and their first matching points in the image collected by the other vision sensor, determine the deviation of the ordinates of their pixel coordinates according to the initial value of the first axial attitude, and determine the calibration value of the first axial attitude according to the deviation.
  • In some embodiments, when the processor is configured to determine the calibration value of the first axial attitude according to the deviation, it is specifically configured to:
  • use the initial value of the first axial attitude as the calibration value if the deviation is smaller than a preset threshold.
  • In some embodiments, when the processor is configured to determine the calibration value of the first axial attitude according to the deviation, it is specifically configured to:
  • determine the calibration value of the first axial attitude according to the first feature points and the first matching points if the deviation is greater than the preset threshold.
  • In some embodiments, when the processor is configured to determine the calibration value of the first axial attitude according to the first feature points and the first matching points, it is specifically configured to: determine an essential matrix according to the first feature points and the first matching points;
  • and determine the calibration value of the first axial attitude according to the essential matrix.
  • In some embodiments, when the processor is configured to determine the calibration value of the second axial attitude according to the calibration value of the first axial attitude and an image collected by either of the any two vision sensors, it is specifically configured to: determine multiple angles according to the initial value of the second axial attitude and a preset deviation range;
  • and determine a target angle from the multiple angles, according to the calibration value of the first axial attitude and multiple frames of images collected by one of the any two vision sensors, as the calibration value of the second axial attitude.
  • In some embodiments, when the processor is configured to determine a target angle from the multiple angles according to the calibration value of the first axial attitude and multiple frames of images collected by one of the any two vision sensors, it is specifically configured to:
  • extract second feature points from one frame of the multiple frames of images collected by one of the two vision sensors, and determine the second matching points
  • of the second feature points in the other images of the multiple frames; determine the depth value of each second feature point for each of the multiple angles;
  • and determine the target angle from the multiple angles based on the second feature points, the second matching points, and the depth values.
  • the processor is further configured to:
  • the electronic device includes a movable platform
  • the movable platform includes a power component for driving the movable platform to move.
  • In addition, the present application also provides a movable platform. The movable platform includes at least two vision sensors and the parameter calibration apparatus according to any one of the embodiments of this specification; the vision sensors are mounted on
  • the body of the movable platform through a support structure, and the vision sensors are provided with attitude measurement units.
  • The movable platform may be a drone, an unmanned vehicle, an intelligent robot, or the like.
  • an embodiment of the present specification further provides a computer storage medium, where a program is stored in the storage medium, and when the program is executed by a processor, the parameter calibration method in any of the foregoing embodiments is implemented.
  • Embodiments of the present specification may take the form of a computer program product embodied on one or more storage media having program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
  • Computer-usable storage media includes permanent and non-permanent, removable and non-removable media, and storage of information can be accomplished by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.


Abstract

A parameter calibration method, apparatus, and movable platform. The relative pose parameters between any two vision sensors in an electronic device, or the relative pose parameters between any vision sensor and the body of the electronic device, can be determined from at least two of the following: the temperature state data of the support structure of the two vision sensors, the motion state data collected by attitude measurement units disposed on the vision sensors, and the image data collected by the two vision sensors. By determining the change in the vision sensors' pose parameters caused by deformation of the support structure due to temperature changes and body vibration, more accurate pose parameters can be obtained; at the same time, the pose parameters can be further corrected in combination with the image data collected by the binocular vision sensor, yielding still more accurate pose parameters.

Description

Parameter calibration method, apparatus, and movable platform

Technical Field

The present application relates to the technical field of computer vision, and in particular to a parameter calibration method, apparatus, and movable platform.

Background

Electronic devices such as driverless cars, aircraft, and VR glasses are usually provided with multiple vision sensors, and the images collected by the multiple vision sensors are used to measure the distance of objects in three-dimensional space, to locate the electronic device, or for other purposes. Taking the use of vision sensors to locate an electronic device as an example, when the images collected by the vision sensors are used for localization, the pose parameters between different vision sensors and between each vision sensor and the body of the electronic device severely affect the positioning accuracy. It is therefore necessary to accurately determine the pose parameters between different vision sensors and between each vision sensor and the body of the electronic device.
Summary

In view of this, the present application provides a parameter calibration method, apparatus, and movable platform.

According to a first aspect of the present application, a parameter calibration method is provided. The method is applied to an electronic device, the electronic device includes at least two vision sensors, and the at least two vision sensors are mounted on the body of the electronic device through a support structure. The method includes:

acquiring state data of the at least two vision sensors, the state data including at least two of the following: temperature state data of the support structure of the at least two vision sensors, motion state data collected by attitude measurement units disposed on the at least two vision sensors, and image data collected by the at least two vision sensors;

determining pose parameters of the at least two vision sensors according to the state data, the pose parameters including relative pose parameters between any two of the at least two vision sensors, and/or relative pose parameters between any one of the at least two vision sensors and the body of the electronic device.

According to a second aspect of the present application, a parameter calibration apparatus is provided. The apparatus is applied to an electronic device, the electronic device includes at least two vision sensors, and the at least two vision sensors are mounted on the body of the electronic device through a support structure. The apparatus includes a processor, a memory, and a computer program stored in the memory and executable by the processor; when the processor executes the computer program, the following steps are implemented:

acquiring state data of the at least two vision sensors, the state data including at least two of the following: temperature state data of the support structure of the at least two vision sensors, motion state data collected by attitude measurement units disposed on the at least two vision sensors, and image data collected by the at least two vision sensors;

determining pose parameters of the at least two vision sensors according to the state data, the pose parameters including relative pose parameters between any two of the at least two vision sensors, and/or relative pose parameters between any one of the at least two vision sensors and the body of the electronic device.

According to a third aspect of the present application, a movable platform is provided. The movable platform includes at least two vision sensors and the parameter calibration apparatus of the above second aspect; the vision sensors are mounted on the body of the movable platform through a support structure, and the vision sensors are provided with attitude measurement units.

With the solution provided by the present application, the relative pose parameters between two vision sensors, or the relative pose parameters between any vision sensor and the body of the electronic device, can be determined from at least two of the following: the temperature state data of the support structure of any two vision sensors in the electronic device, the motion state data collected by the attitude measurement units disposed on the vision sensors, and the image data collected by the two vision sensors. By determining the change in the vision sensors' pose parameters caused by deformation of the support structure due to temperature changes and body vibration, more accurate pose parameters can be obtained; at the same time, the pose parameters can be further corrected in combination with the image data collected by the binocular vision sensor, so that still more accurate pose parameters are obtained.
附图说明
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of a parameter calibration method according to an embodiment of the present application.

Fig. 2 is a schematic diagram of a temperature versus angle-change curve according to an embodiment of the present application.

Fig. 3 is a schematic diagram of the attitude angles of a binocular visual sensor in three axial directions according to an embodiment of the present application.

Fig. 4 is a schematic diagram of epipolar rectification of the images collected by two visual sensors according to an embodiment of the present application.

Fig. 5 is a schematic diagram of the deviation between the vertical pixel coordinates of the same three-dimensional point in the images collected by two visual sensors according to an embodiment of the present application.

Fig. 6 is a schematic diagram of a parameter calibration method according to an embodiment of the present application.

Fig. 7 is a schematic diagram of a parameter calibration device according to an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.

During operation, electronic devices such as driverless cars, aircraft and VR glasses usually use the images collected by multiple visual sensors to measure object distances, localize the electronic device, or serve other purposes. When used for localization, the pose of a visual sensor can be determined from a series of images collected by different visual sensors together with the pose parameters between the visual sensors, and the position of the body of the electronic device can then be determined from the pose parameters between the visual sensor and the body, thereby localizing the electronic device. For most electronic devices, the pose parameters between different visual sensors and between the visual sensors and the body are calibrated at the factory. During operation, however, environmental temperature changes, mechanical vibration, storage temperature differences and similar factors cause these parameters to change and no longer match the initially calibrated values, leading to inaccurate localization results. For example, the visual sensors are usually fixed in the electronic device by a bracket; as the ambient temperature changes and the device vibrates during operation, the bracket deforms to some extent, so the pose parameters between the visual sensors and between the visual sensors and the body change. If the previously calibrated pose parameters are still used, the localization result will be inaccurate.

To solve the above problem, the present application provides a parameter calibration method. The method is applicable to an electronic device including at least two visual sensors, where the visual sensors may be mounted on the body of the electronic device through a support structure. The pose parameters of the visual sensors can be calibrated from at least two of the following: the temperature data of the support structure, the motion state data collected by attitude measurement units provided on the visual sensors, and the image data collected by the visual sensors, so as to determine the pose parameters between different visual sensors and between any visual sensor and the body of the electronic device. Specifically, as shown in Fig. 1, the method includes the following steps:

S102: acquiring state data of the at least two visual sensors, the state data including at least two of the following: temperature state data of the support structure of the at least two visual sensors, motion state data collected by attitude measurement units provided on the at least two visual sensors, and image data collected by the at least two visual sensors;

S104: determining pose parameters of the at least two visual sensors according to the state data, the pose parameters including relative pose parameters between any two of the at least two visual sensors, and/or relative pose parameters between any one of the at least two visual sensors and the body of the electronic device.

The parameter calibration method of the present application may be executed by the electronic device including the at least two visual sensors, or by another device communicatively connected to the electronic device, such as a control terminal or a cloud server communicatively connected to the electronic device. The method may be executed in real time while the electronic device is working, or once every preset period, which is not limited here.

The electronic device of the present application may be any electronic device including at least two visual sensors, and the two visual sensors may be mounted on the body of the electronic device through a support structure, for example a bracket. The electronic device may be a driverless car, a drone, an unmanned vehicle, an intelligent robot, VR glasses, and the like.

Since the visual sensors are mounted on the body of the electronic device through the support structure, the support structure deforms to some extent as the ambient temperature around it changes or as the vibration generated during operation varies, so the relative position between the visual sensors changes, as does the relative position between the visual sensors and the body. To determine the current pose parameters between different visual sensors of the electronic device, or between any visual sensor and the body, at least two of the following kinds of data can be acquired: temperature state data of the support structure of the visual sensors, motion state data collected by the attitude measurement units provided on the visual sensors, and image data collected by the visual sensors. Calibrating the pose parameters of the visual sensors from these data fully accounts for the influence of ambient temperature and vibration on the deformation of the support structure, and the image data collected by the visual sensors can additionally be used to refine the pose parameters, so more accurate pose parameters are obtained.

In some embodiments, the electronic device may be a movable platform, such as a drone, a driverless car, an unmanned vehicle or an intelligent robot. The movable platform includes power components for driving the movable platform to move; while driving the movable platform, the power components make the body vibrate, which in turn deforms the structure carrying the visual sensors. Meanwhile, as the movable platform moves, the visual sensors can collect image data at different poses.

To measure the deformation of the support structure caused by vibration during operation of the electronic device, an attitude measurement unit can be provided on the visual sensor to measure the deformation of the support structure in the three axial directions. The attitude measurement unit may be any sensor for measuring the motion attitude of an object, such as an inertial measurement unit (IMU), a gyroscope or an accelerometer. Because of the deformation of the support structure, the relative position between the visual sensors and the body of the electronic device changes, and so does the relative position between the visual sensors at the two ends of the support structure. The change in the above relative pose parameters caused by the deformation of the support structure can therefore be determined from the motion state parameters measured by the attitude measurement unit. The attitude measurement unit may be placed behind the visual sensor; an attitude measurement unit may be provided for every visual sensor on the same support structure, or for only one of the visual sensors on the same support structure, which is not limited in the present application.

In some embodiments, the temperature data of the support structure can be determined from the temperature of the current environment. For example, when the electronic device is not working, little heat is generated internally, so the current ambient temperature can be taken as the temperature of the support structure. Alternatively, the correspondence between the temperature of the support structure and the ambient temperature can be determined in advance, and the temperature of the support structure estimated from the current ambient temperature and that correspondence. In some embodiments, to obtain more accurate temperature state data of the support structure, a temperature sensor may also be provided in the electronic device to measure the temperature state data of the support structure.

For the above three types of state data, their validity differs with the working state of the electronic device. For example, just after power-on, the movable parts of the device have not yet started to move, so the whole body produces essentially no vibration; at this stage there is no need to consider vibration-induced deformation of the support structure, and hence no need to calibrate the pose of the visual sensors from the motion state data collected by the attitude measurement unit. Before the electronic device starts to move, the images collected by a visual sensor are all taken from the same pose, so the pose parameters of the visual sensors cannot be determined from these images either. Therefore, in some embodiments, before acquiring the above three types of state data, the current working state of the electronic device can first be determined; the corresponding kinds of state data are then acquired according to the determined working state, and the pose parameters of the visual sensors in the current working state are determined from the acquired state data. The correspondence between working states and kinds of state data can be preset, and when acquiring state data, the kinds of state data corresponding to the current working state can be determined from this correspondence.

In some embodiments, the working state of the electronic device may include a first working state, a second working state or a third working state. In the first working state, the electronic device is powered on, that is, the components of the electronic device can be energized. In the second working state, the movable parts of the electronic device perform a self-check operation: after power-on, each component of the electronic device is first checked for faults. In the third working state, the electronic device moves in response to a motion trigger instruction. After the self-check is completed and the components of the electronic device are determined to be in a normal state, normal operation can begin.

In some embodiments, the time from power-on of the electronic device to the start of the self-check of its movable parts is longer than the time from power-on of the electronic device to the temperature sensor entering normal operation. That is, when the movable parts of the electronic device perform the self-check, the temperature sensor of the electronic device has already entered its normal working state and can acquire valid temperature state data.

In some embodiments, when the electronic device is in the first working state, the movable parts have not yet moved and the image data collected by the visual sensors all come from a single pose, so neither is suitable for determining the pose parameters of the visual sensors; in the first working state, the acquired state data is therefore only the temperature state data collected by the temperature sensor.

In some embodiments, when the electronic device is in the second working state, the temperature sensor is already working normally but the image data collected by the visual sensors still come from a single pose; the valid state data acquired at this time may therefore be the temperature state data measured by the temperature sensor and the motion state data collected by the attitude measurement unit.

In some embodiments, when the electronic device is in the third working state, the temperature sensor is working normally, the movable parts have started to move and the body has started to vibrate; and since the electronic device has started to move, the visual sensors also move and can collect image data at different poses. The valid state data obtainable at this stage may therefore be the temperature data collected by the temperature sensor, the motion state data collected by the attitude measurement unit, and the image data collected by the visual sensors.

After the electronic device enters the third state, when determining the pose parameters of the visual sensors on the electronic device, initial values of the pose parameters can first be determined from one or more of the temperature state data and the motion state data, and the initial values are then corrected with the image data collected by the visual sensors to obtain the calibration values of the pose parameters. For example, after the electronic device is powered on, the change of the pose parameters can be determined from the temperature data measured by the temperature sensor, and the initial values of the pose parameters obtained from their default values plus the determined change, where the default values may be the values calibrated for the visual sensors before the electronic device left the factory, or the values obtained and stored at the previous calibration. When the electronic device enters the self-check state, the movable parts start to move and the body vibrates; the change of the pose parameters can then be determined from the motion state data measured by the attitude measurement unit, and the initial values obtained from the default values plus the determined change. Of course, the change of the pose parameters can also be determined jointly from the temperature data measured by the temperature sensor and the motion state data measured by the attitude measurement unit, and the initial values then obtained from the default values and the change. The temperature state data and motion state data yield fairly accurate initial values of the pose parameters, and only then are the initial values calibrated with the image data collected by the visual sensors to obtain more precise pose parameters as the final calibration values. Determining the initial values from the temperature state data and the motion state data fully accounts for the changes in the pose parameters caused by temperature- and vibration-induced deformation of the support structure, while further correcting the initial values with the image data yields fairly precise pose parameters that can serve as the final calibration values.

Of course, if during motion of the electronic device the image data collected by the visual sensors cannot be feature-matched, the initial values can be used as the final values of the pose parameters and used to localize the electronic device. For example, if the images collected by the electronic device have little texture and feature points cannot be extracted, the initial values cannot be corrected with the image data; the initial values determined from the temperature state data and the motion state data can then be used as the pose parameters to obtain a more accurate localization result.
When the temperature is fixed, the deformation of the support structure can be considered fixed. For example, at 20℃ the bending deformation of the support structure is 5°, and at 40℃ it is 10°; once the temperature is determined, the deformation is essentially determined. Therefore, in some embodiments, the correspondence between temperature and the change of the pose parameters can be determined in advance; Fig. 2 shows the curve of the angle change at different temperatures (taking yaw as an example). The initial values of the pose parameters are then determined from the temperature state data and the correspondence between temperature and pose parameter change. For example, the relative pose parameters between two visual sensors usually include the relative position in three-dimensional space and the relative angles. For the angles between any two visual sensors in the three axial directions, namely the pitch, yaw and roll angles, the correspondence between temperature and the angle change in each axial direction can be determined in advance; the angle change in each axial direction is then determined from the current temperature measured by the temperature sensor and this correspondence, and the temperature-corrected initial values of the attitude angles are determined from the default attitude angles of the two visual sensors in each axial direction plus the determined angle changes. Since the length deformation of the support structure is usually small, it can generally be neglected, i.e. the translation matrix between the two visual sensors is unchanged. Of course, in some embodiments, the correspondence between temperature and the angles themselves can also be determined, for example the pitch, yaw and roll angles at different temperatures, and the initial values determined from this temperature-angle correspondence.
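As an illustration of the temperature-indexed initialization described above, the following Python sketch interpolates a pre-calibrated temperature versus angle-change curve and adds the result to the default value. The curve points, units and function names are illustrative assumptions, not values from this disclosure.

```python
# A minimal sketch of temperature compensation of one attitude angle.
import numpy as np

# Hypothetical offline-calibrated curve (cf. Fig. 2): support-structure
# temperature in deg C versus yaw change in degrees.
curve_temp = np.array([-10.0, 0.0, 20.0, 40.0, 60.0])
curve_dyaw = np.array([-0.20, -0.08, 0.00, 0.12, 0.30])

def initial_yaw(default_yaw_deg: float, temp_deg: float) -> float:
    """Initial yaw = default (factory or last-stored) value + the
    temperature-indexed angle change read off the curve."""
    dyaw = np.interp(temp_deg, curve_temp, curve_dyaw)  # linear interpolation
    return default_yaw_deg + dyaw

print(initial_yaw(1.5, 35.0))
```

In the same way, one curve per attitude angle (pitch, yaw, roll) would be stored and queried, while the translation is left at its default value.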
Since the attitude measurement unit can detect the attitude angle, angular velocity or angular acceleration of a visual sensor in each axial direction, the angle changes of any two visual sensors in each axial direction can be determined from the motion state data measured by the attitude measurement unit, and the initial values of the pose parameters determined from the determined angle changes. For example, in some embodiments, an attitude measurement unit can be placed behind each visual sensor; from the data measured by the unit behind each visual sensor, the angle changes of the two visual sensors in each axial direction can be determined, and hence the attitude angles between the two visual sensors. In some embodiments, an attitude measurement unit is also provided at the center of the body of the electronic device, so the angle changes between a visual sensor and the body in each axial direction can be determined from the data measured by the body-center attitude measurement unit and the attitude measurement unit of the visual sensor, and hence the attitude angles between the visual sensor and the body of the electronic device.

Since a visual sensor can collect multiple images at different poses, the position of the visual sensor can be determined by tracking and matching feature points across these images. When determining the relative pose parameters between any two visual sensors, the initial values of the relative pose parameters of the two visual sensors can first be determined from the temperature state data collected by the temperature sensor and the motion state data collected by the attitude measurement unit, and the obtained initial values are then corrected with the image data collected by the two visual sensors to obtain more precise calibration values. In some embodiments, when correcting the initial values with the image data to obtain the calibration values, the initial value of a first axial attitude can be corrected with the image data collected by the two visual sensors to obtain the calibration value of the first axial attitude, where the first axial attitude is the angle corresponding to the axial directions perpendicular to the baseline of the two visual sensors; the calibration value of a second axial attitude is then determined from the calibration value of the first axial attitude and the images collected by either of the two visual sensors, where the second axial attitude is the angle corresponding to the axial direction along the baseline of the two visual sensors. For example, as shown in Fig. 3, C1 and C2 are two visual sensors located at the two ends of the support structure; the first axial attitude consists of the angles about the axes perpendicular to the baseline, i.e. the pitch and roll angles, and the second axial attitude is the angle about the axis along the baseline, i.e. the yaw angle. Suppose the images collected by the two visual sensors at the same moment are image 1 and image 2, and a point in three-dimensional space projects to pixel P1 in image 1 and pixel P2 in image 2. The deviation of the horizontal pixel coordinates of P1 and P2 (i.e. the disparity) is related to yaw, while the deviation of their vertical pixel coordinates is related to the pitch and roll angles. Therefore, after image 1 and image 2 are collected, the two images can be epipolar-rectified with the initial values of the relative pose parameters of the two visual sensors, as shown in Fig. 4. In theory, after epipolar rectification of image 1 and image 2, the vertical pixel coordinates of P1 and P2 should coincide, i.e. lie on the same row; but because the initial pitch and roll values deviate from the true values, the vertical pixel coordinates of P1 and P2 deviate, as shown in Fig. 5, and the larger the deviation, the less accurate the pitch and roll angles.

Since the deviation of the vertical coordinates of the pixels corresponding to the same three-dimensional point in the images collected by the two visual sensors reflects the accuracy of the initial pitch and roll values, in some embodiments, when correcting the initial value of the first axial attitude with the images collected by the two visual sensors to obtain its calibration value, the first matching point, in the image collected by the other of the two visual sensors, of a first feature point in the image collected by one of them can first be determined; the deviation of the vertical pixel coordinates of the first feature point and the first matching point is then determined according to the initial value of the first axial attitude, and the calibration value of the first axial attitude is determined from this deviation. For example, multiple first feature points can be extracted from image 1 collected by visual sensor C1, using an existing feature point extraction algorithm, which is not repeated here. The first matching points of the first feature points in image 2 collected by visual sensor C2 are then determined; the images collected by the two visual sensors can be epipolar-rectified with the initial values of the pose parameters of the binocular visual sensor to determine the deviation of the vertical coordinates of the first feature points and first matching points, and the calibration values of the first axial attitude (the pitch and roll angles) are then determined from the deviation. When determining the calibration value of the first axial attitude from this deviation, the calibration value can be determined according to the magnitude of the deviation. In some embodiments, if the deviation is smaller than a preset threshold, the initial value of the first axial attitude is fairly accurate, so the initial value of the first axial attitude can be taken as the calibration value. In some embodiments, if the deviation is larger than the preset threshold, the initial value of the first axial attitude is not accurate enough, and the calibration value of the first axial attitude can be determined from the first feature point and the first matching point, where the preset threshold can be set flexibly according to actual needs.
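The vertical-deviation check above can be sketched in Python with OpenCV, assuming matched pixel pairs and calibrated intrinsics are already available; all function and variable names are illustrative.

```python
# A minimal sketch: rectify matched pixel pairs with the current
# extrinsics (R, T) and measure the mean vertical deviation |dv|.
import cv2
import numpy as np

def mean_vertical_deviation(pts_left, pts_right, K1, D1, K2, D2, R, T, size):
    """A large mean |dv| flags drifted pitch/roll initial values."""
    R1, R2, P1, P2, *_ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    l = cv2.undistortPoints(pts_left.reshape(-1, 1, 2).astype(np.float32),
                            K1, D1, R=R1, P=P1)
    r = cv2.undistortPoints(pts_right.reshape(-1, 1, 2).astype(np.float32),
                            K2, D2, R=R2, P=P2)
    dv = l[:, 0, 1] - r[:, 0, 1]   # rows should coincide after rectification
    return float(np.abs(dv).mean())
```

Comparing the returned value against the preset threshold then decides whether the initial values can be kept or must be re-estimated.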
In some embodiments, when determining the calibration value of the first axial attitude from the first feature point and the first matching point, an essential matrix can first be determined from the first feature point and the first matching point, and the calibration value of the first axial attitude then determined from the essential matrix. Since the essential matrix can be decomposed to obtain the matrices corresponding to the pose parameters between the binocular visual sensors, multiple pairs of feature points and matching points can be extracted from the images collected by the left and right sensors of the binocular visual sensor; the essential matrix is solved from these pairs and then decomposed to obtain the pitch, roll and yaw angles. Since the pitch and roll angles obtained in this way are fairly accurate while the yaw angle is not, the obtained pitch and roll angles can be taken as calibration values and the yaw angle discarded; a precise yaw angle is then solved further using the fairly precise pitch and roll angles.

In some embodiments, when determining the calibration value of the second axial attitude from the calibration value of the first axial attitude and the images collected by either of the two visual sensors, multiple angles can first be determined from the initial value of the second axial attitude and a preset deviation range, and a target angle is then selected from these angles, as the calibration value of the second axial attitude, according to the calibration value of the first axial attitude and multiple image frames collected by one of the two visual sensors. In some embodiments, when selecting the target angle from the multiple angles according to the calibration value of the first axial attitude and the multiple frames collected by one of the two visual sensors, a second feature point can be extracted from one frame among the multiple frames, and the second matching points of the second feature point in the other frames determined; the multiple angles are taken one by one as the calibration value of the second axial attitude, and the depth value corresponding to the first feature point is determined from the currently assumed second axial attitude calibration value and the first axial calibration value; the target angle is then selected from the multiple angles based on the second feature point, the second matching point and the depth value. For example, since the initial yaw value has already been determined, yaw can be assumed to vary within a range of ±3°, and multiple angles are set with a preset precision. For instance, if the current initial yaw value is 3°, a gradient of 0.5° gives the angles 0°, 0.5°, 1°, 1.5°, ..., 6°, where the specific precision and angle range can be set according to actual needs. After these angles are obtained, yaw is set to each of them in turn; feature point matching is performed between the images collected by the left and right sensors of the two visual sensors to determine the depth value of each feature point, and hence the three-dimensional information of each feature point. Feature point matching is then performed across the multiple frames collected by one of the two visual sensors: a second feature point is extracted from one frame and its second matching points in the other frames determined, and the pose information of the camera is estimated with the PnP (Perspective-n-Point) algorithm from the obtained second feature points and second matching points, where the PnP algorithm estimates the camera pose from a set of 3D points in the world coordinate system and the corresponding 2D points in the pixel coordinate system of the image. When estimating the camera pose, a PnP algorithm with RANSAC (Random Sample Consensus) can be used: RANSAC determines which second feature points are suitable for solving the camera pose by PnP, a percentage is computed from the number of suitable second feature points and the total number of second feature points, and the yaw value at which this percentage is largest is selected as the yaw calibration value.

By the above method, the calibration values of the attitude angles between any two visual sensors of the electronic device can be determined; since the translation matrix between any two visual sensors is little affected by temperature and vibration, the default values can be used, so the relative pose parameters between the two visual sensors are obtained. In some embodiments, after the relative pose parameters between any two visual sensors of the electronic device are determined, the relative pose parameters between any visual sensor and the body of the electronic device can further be corrected from the relative pose parameters between the two visual sensors and the image data collected by the visual sensors, so as to obtain fairly accurate relative pose parameters between any visual sensor and the body of the electronic device.

To further explain the parameter calibration method of the present application, a specific embodiment is described below.

A drone is usually equipped with several pairs of binocular visual sensors (for example one pair each at front-rear, up-down and left-right), and the two sensors of each pair are fixed by a bracket. An inertial measurement unit is also provided on the body of the drone, and the drone can be localized using the binocular visual sensors and the inertial measurement unit. In this localization process, the accuracy is affected by the accuracy of the relative pose parameters between the binocular visual sensors and between the binocular visual sensors and the drone body, where the latter can be determined through the relative pose parameters between the binocular visual sensors and the inertial measurement unit provided on the body. During use of the drone, the bracket fixing the binocular visual sensors deforms under the influence of temperature, vibration and so on, so using pre-calibrated pose parameters for drone localization causes inaccurate localization.

To determine the deformation of the binocular visual sensor bracket caused by temperature and vibration, and hence the resulting changes in the relative pose parameters between the binocular visual sensors and between the binocular visual sensors and the drone body, in this embodiment a temperature sensor can be provided in the drone to detect the temperature of the bracket, and an inertial measurement unit can be placed behind the visual sensors to detect the bracket deformation caused by vibration during operation of the drone. The relative pose parameters between the binocular visual sensors, and between either visual sensor and the drone body, are calibrated in real time from the temperature data measured by the temperature sensor, the motion state data measured by the inertial measurement unit and the image data collected by the binocular visual sensors.

The specific calibration process can be seen in Fig. 6. Since the validity of the state data collected by the temperature sensor, the inertial measurement unit and the binocular visual sensors for calibrating the above pose parameters differs across the working stages of the drone, different state data can be used to calibrate the pose parameters at different stages. The specific process is as follows:
1. After the drone is powered on, the temperature sensor enters the working state and can measure the temperature of the binocular visual sensor bracket. From the pre-calibrated temperature versus angle-change curves, the changes of the attitude angles between the two sensors in the three axial directions (the pitch, yaw and roll angles) at the current temperature can be computed. Each attitude angle corresponds to one temperature versus angle-change curve, so the changes of the pitch, yaw and roll angles between the binocular visual sensors can be determined from the temperature currently measured by the temperature sensor.

2. After power-on, the drone must first perform a self-check. After the self-check is completed, the inertial measurement units provided on the binocular visual sensors start to work and can detect the vibration of the bracket. An inertial measurement unit includes a gyroscope and an accelerometer; here the gyroscope is mainly used to measure the angular rate, which is integrated to obtain the angle change of the visual sensor in the three axial directions. The angle change of the drone itself is then determined from the inertial measurement unit provided on the drone body; based on the angle changes measured by these two kinds of units, the change of the relative attitude angles between the binocular visual sensors and the drone body can be determined.

For example, the following shows how the angular rate measured by the inertial measurement unit can be used to estimate the angle change between the binocular visual sensors and the drone body; the calculation formulas are as follows:
q_{k+1} = q_k ⊗ Δq

q{θ} = [cos(‖θ‖/2), sin(‖θ‖/2) · θ/‖θ‖]^T

Δq = q{(ω − b_ω)Δt}

(b_ω)_{k+1} = (b_ω)_k
where q_{k+1} is the attitude quaternion at the current image time, and (b_ω)_{k+1} is the gyroscope zero bias at the current image time; q_k is the attitude quaternion at the time of the previous image frame, and (b_ω)_k is the gyroscope zero bias at that time; Δt is the inter-frame time difference between the two images. For example, at 20 Hz a rough calculation gives 50 ms; for a precise calculation, the exposure-time difference between the two frames should also be taken into account. The attitude quaternions q_{k+1} and q_k can be converted into rotation matrices R, from which the angle change between the two instants can then be obtained.
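A minimal Python sketch of the integration above, assuming the bias is constant over the interval; the quaternion convention ([w, x, y, z], Hamilton product) is an illustrative choice.

```python
# Integrate the bias-corrected gyro rate over dt and compose it onto
# the previous attitude quaternion: q_{k+1} = q_k * dq.
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q_k, omega, bias, dt):
    """One propagation step; omega and bias are 3-vectors in rad/s."""
    theta = (omega - bias) * dt            # small rotation vector
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        axis = theta / angle
        dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q = quat_mul(q_k, dq)
    return q / np.linalg.norm(q)           # renormalize against drift
```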
Of course, if each visual sensor is equipped with its own inertial measurement unit, the angle change between the two binocular visual sensors can further be determined precisely from the angle changes measured by the units. In some embodiments, to make the measured pose parameters more accurate, the ambient temperature measured by the temperature sensor can also be used to temperature-compensate the inertial measurement unit, so that the influence of ambient temperature on the pose measured by the unit is determined precisely, and the relative attitude angles between the visual sensors, and between the visual sensors and the body, are determined more accurately.

3. From the temperature data measured by the temperature sensor and the angle changes measured by the inertial measurement unit, the changes of the pose parameters of the binocular visual sensors caused by temperature and vibration can be determined; the initial values of the pose parameters are obtained from the default values of the pose parameters plus the determined changes. The default values may be the pose parameter values calibrated when the drone left the factory, or the values obtained at the drone's previous self-calibration.

4. After the self-check ends, the user controls the drone to take off; the propellers start to rotate and the drone takes off and starts to move. The binocular visual sensors can now collect image sequences at different poses, so the image sequences collected by the binocular visual sensors can be used to further calibrate the determined initial values of the pose parameters and obtain more accurate calibration values. The specific process of calibrating the initial values of the pose parameters with the image sequences collected by the binocular visual sensors is as follows:

(1) First, by matching the images collected by the two sensors of the binocular visual sensor and using the essential matrix, the pitch angle (nodding direction) and roll angle (head-tilting direction) between the binocular visual sensors in the non-baseline directions are computed, as follows:

Using the currently determined pose parameters between the binocular visual sensors, the images collected by the left and right sensors of the binocular visual sensor are epipolar-rectified. In theory, in the rectified images, corresponding points should lie on parallel epipolar lines, as shown in Fig. 4. A certain deviation is allowed during matching; for a pair of corresponding points in the images collected by the left and right visual sensors, the deviation is written (Δu, Δv)^T = (u_0, v_0)^T − (u_1, v_1)^T, where Δu is the disparity and Δv should in theory approach 0. A change in the pose parameters between the binocular visual sensors (the pitch and roll angles) makes Δv nonzero, and the larger the Δv deviation, the less accurate the determined initial pitch and roll values.

Feature points y can be extracted from the image collected by the left visual sensor, matched and tracked to points y' in the image collected by the right visual sensor, giving matched feature point pairs (y, y'). If their Δv deviation is smaller than a certain threshold, the current pitch and roll angles are considered fairly accurate, and their initial values can be taken as the final calibration values.

If their Δv deviation is larger than the threshold, the current pitch and roll angles are considered not accurate enough. The essential matrix can then be determined from multiple pairs of feature points and matching points in the images collected by the left and right visual sensors; by solving the essential matrix, the rotation matrix R between the binocular visual sensors is computed and decomposed into rotation angles about the three directions, R = (roll, pitch, yaw); the resulting roll and pitch angles are the calibration values of the roll and pitch angles between the two visual sensors.
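A minimal Python/OpenCV sketch of this step, under the assumption that the matched point pairs are given in pixel coordinates and that R decomposes in a Z(yaw)-Y(pitch)-X(roll) convention; the convention and all names are illustrative.

```python
# Essential matrix from left/right matches -> R -> keep roll and pitch.
import cv2
import numpy as np

def roll_pitch_from_stereo_matches(pts_l, pts_r, K):
    E, inliers = cv2.findEssentialMat(pts_l, pts_r, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, _, _ = cv2.recoverPose(E, pts_l, pts_r, K, mask=inliers)
    pitch = np.degrees(np.arcsin(-R[2, 0]))           # rotation about Y
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))   # rotation about X
    return roll, pitch   # yaw from this decomposition is discarded
```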
(2) After the precise pitch and roll angles are obtained, the images successively collected by the same visual sensor are matched, and the perspective-n-point PnP algorithm with random sample consensus RANSAC is applied to compute a more precise yaw angle between the binocular visual sensors. The specific process is as follows:

(a) Feature point extraction

Feature points are extracted on the image according to the region of interest (ROI) of each object in the image collected by the visual sensor; a general-purpose algorithm such as Harris, SIFT, SURF or ORB can be used for feature point extraction.

To reduce the amount of computation, a sparse method can be adopted: feature points, generally corners, are extracted from the image first (corner detection). Optional corner detection algorithms include FAST (features from accelerated segment test), SUSAN, and the Harris operator; Harris corner detection algorithms can be used.
The matrix A is defined as the structure tensor:
A = Σ_{(u,v)∈W} [ I_x·I_x   I_x·I_y
                  I_x·I_y   I_y·I_y ]
where I_x and I_y are the gradients of a point on the image in the x and y directions, respectively. The function M_c can be defined as:
M_c = λ_1·λ_2 − κ·(λ_1 + λ_2)² = det(A) − κ·trace²(A)
where det(A) is the determinant of matrix A, trace(A) is the trace of matrix A, and κ is a parameter tuning the sensitivity. A threshold M_th is set, and a point is regarded as a feature point when M_c > M_th.
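A short Python sketch of this response, written with the structure tensor spelled out rather than calling cv2.cornerHarris so that the formula above stays visible; the box-filter window and constants are illustrative assumptions.

```python
# Harris response M_c = det(A) - kappa * trace(A)^2 over a local window.
import cv2
import numpy as np

def harris_response(gray, ksize=3, kappa=0.04):
    gray = np.float32(gray)
    Ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=ksize)   # x gradient
    Iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=ksize)   # y gradient
    # Structure-tensor entries, accumulated over a 5x5 window.
    Sxx = cv2.boxFilter(Ix * Ix, -1, (5, 5))
    Syy = cv2.boxFilter(Iy * Iy, -1, (5, 5))
    Sxy = cv2.boxFilter(Ix * Iy, -1, (5, 5))
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - kappa * trace * trace   # keep pixels with M_c > M_th
```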
(b) KLT (Kanade-Lucas-Tomasi feature tracker) feature point tracking and matching

Feature points can be tracked across multiple image frames to compute their motion (optical flow). Take h as the offset between two consecutive frames, and let F(x) and G(x) denote the previous and current frames, respectively, with G(x) = F(x + h).

For each feature point, its displacement h between consecutive image frames can be obtained by iterating the formula:
h_{k+1} = h_k + [ Σ_x w(x)·F′(x + h_k)·(G(x) − F(x + h_k)) ] / [ Σ_x w(x)·F′(x + h_k)² ],  with h_0 = 0
A double check can be performed: first let the latter image be F(x) and the former image be G(x), and compute the offset h of a feature point in the latter image relative to the former; then, conversely, compute the offset h' of the feature point in the former image relative to the latter. In theory h = −h', and only points satisfying this condition are considered correctly tracked.

Here h is the optical flow vector, h = (Δu, Δv).
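The forward-backward consistency test above can be sketched with OpenCV's pyramidal Lucas-Kanade tracker as follows; the window size, pyramid depth and the tolerance on h + h' are illustrative choices.

```python
# KLT tracking with the forward-backward double check (h should equal -h').
import cv2
import numpy as np

def track_fb(prev_img, next_img, prev_pts, fb_thresh=1.0):
    """Return the (previous, current) point pairs that pass the check.

    prev_pts: float32 array of shape (N, 1, 2)."""
    lk = dict(winSize=(21, 21), maxLevel=3)
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, prev_pts, None, **lk)
    back, st2, _ = cv2.calcOpticalFlowPyrLK(next_img, prev_img, nxt, None, **lk)
    fb_err = np.linalg.norm(prev_pts - back, axis=2).ravel()  # |h + h'|
    ok = (st.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)
    return prev_pts[ok], nxt[ok]
```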
(c) Computing the yaw calibration value with the RANSAC-based PnP algorithm

Steps (a) and (b) above can be used to track and match feature points between the images collected by the two sensors of the binocular visual sensor, and at the same time to match feature points between consecutive frames of the same visual sensor. From the feature point matches between the images collected by the two sensors of the binocular visual sensor, the disparity of each feature point can be computed, and from it the depth information of the feature point, giving the three-dimensional information of the feature points. Combined with the feature point matches between consecutive frames, the PnP algorithm with the RANSAC (random sample consensus) algorithm is then used to solve the camera pose between the consecutive frames.

It can be assumed that the yaw change lies within ±3° of the default parameter or of the previous calibration result (another value can of course be set as required). The values yaw = [−3.0°, −2.9°, −2.8°, ..., 2.8°, 2.9°, 3.0°] can be used one by one in the two computations above, yielding different depth values, and the different depth values are each fed into the RANSAC-based PnP computation.

RANSAC counts the percentage of feature points suitable for solving the camera pose by the PnP algorithm among all feature points; as yaw varies within ±3°, the yaw angle corresponding to the largest percentage is then determined and taken as the yaw calibration value.
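A condensed Python/OpenCV sketch of this grid search, assuming index-aligned matches across the left image, the right image and the next frame; euler_to_R, the candidate step and all names are illustrative assumptions.

```python
# For each candidate yaw: rebuild R, triangulate left/right matches, then
# score the 3D points by their PnP-RANSAC inlier ratio against the next frame.
import cv2
import numpy as np

def euler_to_R(roll, pitch, yaw):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll); angles in radians (assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def best_yaw(yaw_candidates, roll, pitch, t, K, pts_l, pts_r, pts_next):
    best, best_ratio = None, -1.0
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    for yaw in yaw_candidates:
        R = euler_to_R(roll, pitch, yaw)
        P2 = K @ np.hstack([R, t.reshape(3, 1)])
        X = cv2.triangulatePoints(P1, P2, pts_l.T, pts_r.T)  # 4xN homogeneous
        X = (X[:3] / X[3]).T.astype(np.float32)              # Nx3 points
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(X, pts_next, K, None)
        ratio = 0.0 if inliers is None else len(inliers) / len(X)
        if ok and ratio > best_ratio:
            best, best_ratio = yaw, ratio
    return best
```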
At this point, the calibration values of the yaw, roll and pitch angles are all determined.

5. Since the translation matrix of the binocular visual sensor changes very little under the influence of temperature and vibration, the change can be neglected, i.e. the factory default translation matrix can be used. The calibration values of the pose parameters between the binocular visual sensors are thus computed.

In some embodiments, in the computation of a visual-inertial odometry (VIO) algorithm, these poses can also be added to the state vector, and a Kalman filter used to iteratively compute more accurate values.

After the pose parameters of the binocular visual sensor are determined with the above method, the pose parameters between either visual sensor and the body can further be corrected using these pose parameters together with the images collected by the binocular visual sensor, to obtain more accurate calibration values of the pose parameters.

With the above method, the relative pose parameters between the binocular visual sensors, and between either visual sensor and the body, can be calibrated in real time, yielding more precise pose parameters and hence more accurate localization.
Further, the present application also provides a parameter calibration device. The device is applied to an electronic device, the electronic device includes at least two visual sensors, and the at least two visual sensors are mounted on the body of the electronic device through a support structure. As shown in Fig. 7, the device includes a processor 71, a memory 72, and a computer program stored in the memory 72 and executable by the processor 71; when executing the computer program, the processor 71 implements the following steps:

acquiring state data of the at least two visual sensors, the state data including at least two of the following: temperature state data of the support structure of the at least two visual sensors, motion state data collected by attitude measurement units provided on the at least two visual sensors, and image data collected by the at least two visual sensors;

determining pose parameters of the at least two visual sensors according to the state data, the pose parameters including relative pose parameters between any two of the at least two visual sensors, and/or relative pose parameters between any one of the at least two visual sensors and the body of the electronic device.
In some embodiments, the temperature state data is acquired by a temperature sensor provided on the electronic device.

In some embodiments, when acquiring the state data of the at least two visual sensors, the processor is specifically configured to:

determine the current working state of the electronic device; and

acquire the corresponding kinds of state data according to the working state.

In some embodiments, the working state includes a first working state, a second working state or a third working state;

in the first working state, the electronic device is powered on;

in the second working state, the movable parts of the electronic device perform a self-check operation;

in the third working state, the electronic device moves in response to a motion trigger instruction.

In some embodiments, the time from power-on of the electronic device to its movable parts entering the self-check operation is longer than the time from power-on of the electronic device to the temperature sensor entering normal operation.

In some embodiments, when acquiring the corresponding kinds of state data according to the working state, the processor is specifically configured to:

if the working state is the first state, the acquired state data includes the temperature state data.

In some embodiments, when acquiring the corresponding kinds of state data according to the working state, the processor is specifically configured to:

if the working state is the second state, the acquired state data includes the temperature state data and the motion state data.

In some embodiments, when acquiring the corresponding kinds of state data according to the working state, the processor is specifically configured to:

if the working state is the third state, the acquired state data includes the temperature state data, the motion state data and the image data.

In some embodiments, when determining the pose parameters of the at least two visual sensors according to the state data, the processor is specifically configured to:

determine initial values of the pose parameters according to the temperature state data and/or the motion state data; and

correct the initial values according to the image data to obtain calibration values of the pose parameters.

In some embodiments, when determining the initial values of the pose parameters according to the temperature state data, the processor is specifically configured to:

determine the initial values of the pose parameters according to the temperature state data and the correspondence between temperature and pose parameter change.

In some embodiments, when determining the initial values of the pose parameters according to the motion state data, the processor is specifically configured to:

determine the angle changes of the at least two visual sensors in each axial direction according to the motion state data; and

determine the initial values of the pose parameters according to the angle changes.
In some embodiments, the pose parameters include relative pose parameters between any two of the at least two visual sensors, and when correcting the initial values according to the image data to obtain the calibration values of the pose parameters, the processor is specifically configured to:

correct the initial value of a first axial attitude according to the images collected by any two of the at least two visual sensors to obtain the calibration value of the first axial attitude; and

determine the calibration value of a second axial attitude according to the calibration value of the first axial attitude and the images collected by either of the any two visual sensors; the first axial attitude is the angle corresponding to the axial directions perpendicular to the baseline of the any two visual sensors, and the second axial attitude is the angle corresponding to the axial direction along the baseline of the any two visual sensors.

In some embodiments, when correcting the initial value of the first axial attitude according to the images collected by any two of the at least two visual sensors to obtain the calibration value of the first axial attitude, the processor is specifically configured to:

determine the first matching point, in the image collected by the other of the any two visual sensors, of a first feature point of the image collected by one of them;

determine the deviation of the vertical pixel coordinates of the first feature point and the first matching point according to the initial value of the first axial attitude; and

determine the calibration value of the first axial attitude according to the deviation.

In some embodiments, when determining the calibration value of the first axial attitude according to the deviation, the processor is specifically configured to:

if the deviation is smaller than a preset threshold, take the initial value of the first axial attitude as the calibration value.

In some embodiments, when determining the calibration value of the first axial attitude according to the deviation, the processor is specifically configured to:

if the deviation is larger than a preset threshold, determine the calibration value of the first axial attitude according to the first feature point and the first matching point.

In some embodiments, when determining the calibration value of the first axial attitude according to the first feature point and the first matching point, the processor is specifically configured to:

determine an essential matrix according to the first feature point and the first matching point; and

determine the calibration value of the first axial attitude according to the essential matrix.

In some embodiments, when determining the calibration value of the second axial attitude according to the calibration value of the first axial attitude and the images collected by either of the any two visual sensors, the processor is specifically configured to:

determine multiple angles according to the initial value of the second axial attitude and a preset deviation range; and

determine a target angle from the multiple angles, as the calibration value of the second axial attitude, according to the calibration value of the first axial attitude and multiple image frames collected by one of the any two visual sensors.

In some embodiments, when determining a target angle from the multiple angles according to the calibration value of the first axial attitude and the multiple frames collected by one of the any two visual sensors, the processor is specifically configured to:

extract a second feature point from one frame among the multiple frames collected by one of the any two visual sensors, and determine the second matching points of the second feature point in the other frames among the multiple frames;

take the multiple angles one by one as the calibration value of the second axial attitude, and determine the depth value corresponding to the first feature point based on the determined second axial attitude calibration value and the first axial calibration value; and

determine the target angle from the multiple angles based on the second feature point, the second matching point and the depth value.

In some embodiments, after determining the relative pose parameters between any two of the at least two visual sensors, the processor is further configured to:

correct the relative pose parameters between the any two visual sensors and the body of the electronic device according to the relative pose parameters between the any two visual sensors and the image data.

In some embodiments, the electronic device includes a movable platform, the movable platform includes a power component, and the power component is configured to drive the movable platform to move.
For the specific details of how the parameter calibration device calibrates the pose parameters of the visual sensors, reference can be made to the description in the above method, which is not repeated here.

In addition, the present application also provides a movable platform. The movable platform includes at least two visual sensors and the parameter calibration device according to any one of the embodiments of this specification; the visual sensors are mounted on the body of the movable platform through a support structure, and the visual sensors are provided with attitude measurement units. The movable platform may be a drone, a driverless car, an intelligent robot, and the like.

Correspondingly, an embodiment of this specification also provides a computer storage medium, where a program is stored in the storage medium; when the program is executed by a processor, the parameter calibration method of any of the above embodiments is implemented.

Embodiments of this specification may take the form of a computer program product implemented on one or more storage media containing program code (including but not limited to disk storage, CD-ROM and optical storage). Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.

As for the device embodiments, since they basically correspond to the method embodiments, reference can be made to the relevant parts of the description of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.

It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device including that element.

The method and device provided by the embodiments of the present invention have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (41)

  1. A parameter calibration method, characterized in that the method is applied to an electronic device, the electronic device comprises at least two visual sensors, and the at least two visual sensors are mounted on a body of the electronic device through a support structure, the method comprising:
    acquiring state data of the at least two visual sensors, the state data comprising at least two of the following: temperature state data of the support structure of the at least two visual sensors, motion state data collected by attitude measurement units provided on the at least two visual sensors, and image data collected by the at least two visual sensors;
    determining pose parameters of the at least two visual sensors according to the state data, the pose parameters comprising relative pose parameters between any two of the at least two visual sensors, and/or relative pose parameters between any one of the at least two visual sensors and the body of the electronic device.
  2. The method according to claim 1, characterized in that the temperature state data is acquired by a temperature sensor provided on the electronic device.
  3. The method according to claim 1 or 2, characterized in that the acquiring state data of the at least two visual sensors comprises:
    determining a current working state of the electronic device;
    acquiring the corresponding kinds of the state data according to the working state.
  4. The method according to claim 3, characterized in that the working state comprises: a first working state, a second working state or a third working state;
    in the first working state, the electronic device is powered on;
    in the second working state, movable parts of the electronic device perform a self-check operation;
    in the third working state, the electronic device moves in response to a motion trigger instruction.
  5. The method according to claim 4, characterized in that the time from power-on of the electronic device to the movable parts of the electronic device entering the self-check operation is longer than the time from power-on of the electronic device to the temperature sensor entering normal operation.
  6. The method according to claim 5, characterized in that the acquiring the corresponding kinds of the state data according to the working state comprises:
    if the working state is the first state, the acquired state data comprises the temperature state data.
  7. The method according to claim 5, characterized in that the acquiring the corresponding kinds of the state data according to the working state comprises:
    if the working state is the second state, the acquired state data comprises the temperature state data and the motion state data.
  8. The method according to claim 5, characterized in that the acquiring the corresponding kinds of the state data according to the working state comprises:
    if the working state is the third state, the acquired state data comprises the temperature state data, the motion state data and the image data.
  9. The method according to claim 8, characterized in that determining the pose parameters of the at least two visual sensors according to the state data comprises:
    determining initial values of the pose parameters according to the temperature state data and/or the motion state data;
    correcting the initial values according to the image data to obtain calibration values of the pose parameters.
  10. The method according to claim 9, characterized in that determining the initial values of the pose parameters according to the temperature state data comprises:
    determining the initial values of the pose parameters according to the temperature state data and a correspondence between temperature and pose parameter change.
  11. The method according to claim 9 or 10, characterized in that determining the initial values of the pose parameters according to the motion state data comprises:
    determining angle changes of the at least two visual sensors in each axial direction according to the motion state data;
    determining the initial values of the pose parameters according to the angle changes.
  12. The method according to any one of claims 9-11, characterized in that the pose parameters comprise relative pose parameters between any two of the at least two visual sensors, and correcting the initial values according to the image data to obtain the calibration values of the pose parameters comprises:
    correcting an initial value of a first axial attitude according to images collected by any two of the at least two visual sensors to obtain a calibration value of the first axial attitude;
    determining a calibration value of a second axial attitude according to the calibration value of the first axial attitude and images collected by either of the any two visual sensors; the first axial attitude being the angle corresponding to the axial directions perpendicular to the baseline of the any two visual sensors, and the second axial attitude being the angle corresponding to the axial direction along the baseline of the any two visual sensors.
  13. The method according to claim 12, characterized in that correcting the initial value of the first axial attitude according to the images collected by any two of the at least two visual sensors to obtain the calibration value of the first axial attitude comprises:
    determining a first matching point, in the image collected by the other of the any two visual sensors, of a first feature point of the image collected by one of them;
    determining a deviation of the vertical pixel coordinates of the first feature point and the first matching point according to the initial value of the first axial attitude;
    determining the calibration value of the first axial attitude according to the deviation.
  14. The method according to claim 13, characterized in that determining the calibration value of the first axial attitude according to the deviation comprises:
    if the deviation is smaller than a preset threshold, taking the initial value of the first axial attitude as the calibration value.
  15. The method according to claim 13 or 14, characterized in that determining the calibration value of the first axial attitude according to the deviation comprises:
    if the deviation is larger than a preset threshold, determining the calibration value of the first axial attitude according to the first feature point and the first matching point.
  16. The method according to claim 15, characterized in that determining the calibration value of the first axial attitude according to the first feature point and the first matching point comprises:
    determining an essential matrix according to the first feature point and the first matching point;
    determining the calibration value of the first axial attitude according to the essential matrix.
  17. The method according to any one of claims 12-16, characterized in that determining the calibration value of the second axial attitude according to the calibration value of the first axial attitude and the images collected by either of the any two visual sensors comprises:
    determining multiple angles according to an initial value of the second axial attitude and a preset deviation range;
    determining a target angle from the multiple angles, as the calibration value of the second axial attitude, according to the calibration value of the first axial attitude and multiple image frames collected by one of the any two visual sensors.
  18. The method according to claim 17, characterized in that determining a target angle from the multiple angles according to the calibration value of the first axial attitude and the multiple image frames collected by one of the any two visual sensors comprises:
    extracting a second feature point from one frame among the multiple frames collected by one of the any two visual sensors, and determining second matching points of the second feature point in the other frames among the multiple frames;
    taking the multiple angles one by one as the calibration value of the second axial attitude, and determining a depth value corresponding to the first feature point based on the determined second axial attitude calibration value and the first axial calibration value;
    determining the target angle from the multiple angles based on the second feature point, the second matching point and the depth value.
  19. The method according to any one of claims 12-18, characterized by, after determining the relative pose parameters between any two of the at least two visual sensors, further comprising:
    correcting relative pose parameters between the any two visual sensors and the body of the electronic device according to the relative pose parameters between the any two visual sensors and the image data.
  20. The method according to any one of claims 1-19, characterized in that the electronic device comprises a movable platform, the movable platform comprises a power component, and the power component is configured to drive the movable platform to move.
  21. A parameter calibration device, characterized in that the device is applied to an electronic device, the electronic device comprises at least two visual sensors, and the at least two visual sensors are mounted on a body of the electronic device through a support structure; the device comprises a processor, a memory, and a computer program stored in the memory and executable by the processor, and the processor, when executing the computer program, implements the following steps:
    acquiring state data of the at least two visual sensors, the state data comprising at least two of the following: temperature state data of the support structure of the at least two visual sensors, motion state data collected by attitude measurement units provided on the at least two visual sensors, and image data collected by the at least two visual sensors;
    determining pose parameters of the at least two visual sensors according to the state data, the pose parameters comprising relative pose parameters between any two of the at least two visual sensors, and/or relative pose parameters between any one of the at least two visual sensors and the body of the electronic device.
  22. The device according to claim 21, characterized in that the temperature state data is acquired by a temperature sensor provided on the electronic device.
  23. The device according to claim 21 or 22, characterized in that the processor, when acquiring the state data of the at least two visual sensors, is specifically configured to:
    determine a current working state of the electronic device;
    acquire the corresponding kinds of the state data according to the working state.
  24. The device according to claim 23, characterized in that the working state comprises: a first working state, a second working state or a third working state;
    in the first working state, the electronic device is powered on;
    in the second working state, movable parts of the electronic device perform a self-check operation;
    in the third working state, the electronic device moves in response to a motion trigger instruction.
  25. The device according to claim 24, characterized in that the time from power-on of the electronic device to the movable parts of the electronic device entering the self-check operation is longer than the time from power-on of the electronic device to the temperature sensor entering normal operation.
  26. The device according to claim 25, characterized in that the processor, when acquiring the corresponding kinds of the state data according to the working state, is specifically configured to:
    if the working state is the first state, the acquired state data comprises the temperature state data.
  27. The device according to claim 25, characterized in that the processor, when acquiring the corresponding kinds of the state data according to the working state, is specifically configured to:
    if the working state is the second state, the acquired state data comprises the temperature state data and the motion state data.
  28. The device according to claim 25, characterized in that the processor, when acquiring the corresponding kinds of the state data according to the working state, is specifically configured to:
    if the working state is the third state, the acquired state data comprises the temperature state data, the motion state data and the image data.
  29. The device according to claim 28, characterized in that the processor, when determining the pose parameters of the at least two visual sensors according to the state data, is specifically configured to:
    determine initial values of the pose parameters according to the temperature state data and/or the motion state data;
    correct the initial values according to the image data to obtain calibration values of the pose parameters.
  30. The device according to claim 29, characterized in that the processor, when determining the initial values of the pose parameters according to the temperature state data, is specifically configured to:
    determine the initial values of the pose parameters according to the temperature state data and a correspondence between temperature and pose parameter change.
  31. The device according to claim 29 or 30, characterized in that the processor, when determining the initial values of the pose parameters according to the motion state data, is specifically configured to:
    determine angle changes of the at least two visual sensors in each axial direction according to the motion state data;
    determine the initial values of the pose parameters according to the angle changes.
  32. The device according to any one of claims 29-31, characterized in that the pose parameters comprise relative pose parameters between any two of the at least two visual sensors, and the processor, when correcting the initial values according to the image data to obtain the calibration values of the pose parameters, is specifically configured to:
    correct an initial value of a first axial attitude according to images collected by any two of the at least two visual sensors to obtain a calibration value of the first axial attitude;
    determine a calibration value of a second axial attitude according to the calibration value of the first axial attitude and images collected by either of the any two visual sensors; the first axial attitude being the angle corresponding to the axial directions perpendicular to the baseline of the any two visual sensors, and the second axial attitude being the angle corresponding to the axial direction along the baseline of the any two visual sensors.
  33. The device according to claim 32, characterized in that the processor, when correcting the initial value of the first axial attitude according to the images collected by any two of the at least two visual sensors to obtain the calibration value of the first axial attitude, is specifically configured to:
    determine a first matching point, in the image collected by the other of the any two visual sensors, of a first feature point of the image collected by one of them;
    determine a deviation of the vertical pixel coordinates of the first feature point and the first matching point according to the initial value of the first axial attitude;
    determine the calibration value of the first axial attitude according to the deviation.
  34. The device according to claim 33, characterized in that the processor, when determining the calibration value of the first axial attitude according to the deviation, is specifically configured to:
    if the deviation is smaller than a preset threshold, take the initial value of the first axial attitude as the calibration value.
  35. The device according to claim 33 or 34, characterized in that the processor, when determining the calibration value of the first axial attitude according to the deviation, is specifically configured to:
    if the deviation is larger than a preset threshold, determine the calibration value of the first axial attitude according to the first feature point and the first matching point.
  36. The device according to claim 35, characterized in that the processor, when determining the calibration value of the first axial attitude according to the first feature point and the first matching point, is specifically configured to:
    determine an essential matrix according to the first feature point and the first matching point;
    determine the calibration value of the first axial attitude according to the essential matrix.
  37. The device according to any one of claims 32-36, characterized in that the processor, when determining the calibration value of the second axial attitude according to the calibration value of the first axial attitude and the images collected by either of the any two visual sensors, is specifically configured to:
    determine multiple angles according to an initial value of the second axial attitude and a preset deviation range;
    determine a target angle from the multiple angles, as the calibration value of the second axial attitude, according to the calibration value of the first axial attitude and multiple image frames collected by one of the any two visual sensors.
  38. The device according to claim 37, characterized in that the processor, when determining a target angle from the multiple angles according to the calibration value of the first axial attitude and the multiple image frames collected by one of the any two visual sensors, is specifically configured to:
    extract a second feature point from one frame among the multiple frames collected by one of the any two visual sensors, and determine second matching points of the second feature point in the other frames among the multiple frames;
    take the multiple angles one by one as the calibration value of the second axial attitude, and determine a depth value corresponding to the first feature point based on the determined second axial attitude calibration value and the first axial calibration value;
    determine the target angle from the multiple angles based on the second feature point, the second matching point and the depth value.
  39. The device according to any one of claims 32-38, characterized in that the processor, after determining the relative pose parameters between any two of the at least two visual sensors, is further configured to:
    correct relative pose parameters between the any two visual sensors and the body of the electronic device according to the relative pose parameters between the any two visual sensors and the image data.
  40. The device according to any one of claims 21-39, characterized in that the electronic device comprises a movable platform, the movable platform comprises a power component, and the power component is configured to drive the movable platform to move.
  41. A movable platform, characterized in that the movable platform comprises at least two visual sensors and the parameter calibration device according to any one of claims 21-40; the visual sensors are mounted on a body of the movable platform through a support structure, and the visual sensors are provided with attitude measurement units.
PCT/CN2020/116757 2020-09-22 2020-09-22 Parameter calibration method and device, and movable platform WO2022061495A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/116757 WO2022061495A1 (zh) 2020-09-22 2020-09-22 Parameter calibration method and device, and movable platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/116757 WO2022061495A1 (zh) 2020-09-22 2020-09-22 Parameter calibration method and device, and movable platform

Publications (1)

Publication Number Publication Date
WO2022061495A1 true WO2022061495A1 (zh) 2022-03-31

Family

ID=80844763

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116757 WO2022061495A1 (zh) 2020-09-22 2020-09-22 Parameter calibration method and device, and movable platform

Country Status (1)

Country Link
WO (1) WO2022061495A1 (zh)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014145734A (ja) * 2013-01-30 2014-08-14 Nikon Corp Information input/output device and information input/output method
CN103245335A (zh) * 2013-05-21 2013-08-14 北京理工大学 Ultra-close-range visual pose measurement method for autonomous on-orbit servicing spacecraft
CN106406540A (zh) * 2016-10-14 2017-02-15 北京小鸟看看科技有限公司 Attitude sensing device and virtual reality system
CN107270900A (zh) * 2017-07-25 2017-10-20 广州阿路比电子科技有限公司 Detection system and method for 6-DOF spatial position and attitude
CN207439417U (zh) * 2017-07-26 2018-06-01 潍坊歌尔电子有限公司 Attitude sensing device
CN109166181A (zh) * 2018-08-12 2019-01-08 苏州炫感信息科技有限公司 Hybrid motion capture system based on deep learning
CN110296702A (zh) * 2019-07-30 2019-10-01 清华大学 Pose estimation method and device with tightly coupled visual sensor and inertial navigation
CN210616536U (zh) * 2019-10-14 2020-05-26 西安交通工程学院 Robot control device based on VR technology
CN110879662A (zh) * 2019-11-27 2020-03-13 云南电网有限责任公司电力科学研究院 Motion recognition device and method based on the AHRS algorithm
CN111462229A (zh) * 2020-03-31 2020-07-28 普宙飞行器科技(深圳)有限公司 UAV-based target shooting method, shooting device and UAV

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116539068A (zh) * 2023-07-03 2023-08-04 国网山西省电力公司电力科学研究院 Flexible self-check adjustment device and method for a vision measurement system
CN116539068B (zh) * 2023-07-03 2023-09-08 国网山西省电力公司电力科学研究院 Flexible self-check adjustment device and method for a vision measurement system

Similar Documents

Publication Publication Date Title
CN109540126B (zh) Inertial-visual integrated navigation method based on the optical flow method
CN111795686B (zh) Method for mobile robot localization and mapping
CN109544630B (zh) Pose information determination method and device, and visual point cloud construction method and device
WO2020253260A1 (zh) Time synchronization processing method, electronic device, and storage medium
TW201832185A (zh) Automatic camera calibration using a gyroscope
CN112230242A (zh) Pose estimation system and method
CN112116651B (zh) Ground target localization method and system based on UAV monocular vision
CN110954134B (zh) Gyroscope bias correction method, correction system, electronic device, and storage medium
CN109238277B (zh) Localization method and device based on deep fusion of visual and inertial data
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
Hamel et al. Homography estimation on the special linear group based on direct point correspondence
CN110068326B (zh) Attitude computation method and device, electronic device, and storage medium
CN105324792A (zh) Method for estimating the angular deviation of a moving element relative to a reference direction
CN111524194A (zh) Localization method and terminal based on mutual fusion of lidar and binocular vision
CN112712565A (zh) UAV fly-around inspection and localization method for aircraft skin damage based on vision-IMU fusion
WO2022061495A1 (zh) Parameter calibration method and device, and movable platform
CN113436267B (zh) Visual-inertial calibration method and device, computer equipment, and storage medium
Ge et al. Binocular vision calibration and 3D re-construction with an orthogonal learning neural network
CN112284381B (zh) Real-time visual-inertial initialization and alignment method and system
CN110136168B (zh) Multi-rotor velocity measurement method based on feature point matching and optical flow
CN113405532B (zh) Forward intersection measurement method and system based on structural parameters of a vision system
CN116721166A (zh) Online calibration method and device for binocular camera and IMU rotational extrinsic parameters, and storage medium
CN108322698B (zh) System and method based on fusion of multiple cameras and an inertial measurement unit
CN115880328A (zh) Real-time localization method for aerial moving targets based on dual-satellite cooperative observation
WO2019058487A1 (ja) 3D reconstruction image processing device, 3D reconstruction image processing method, and computer-readable storage medium storing a 3D reconstruction image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20954349

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20954349

Country of ref document: EP

Kind code of ref document: A1