WO2021081707A1 - Data processing method, apparatus, movable platform, and computer-readable storage medium - Google Patents

Data processing method, apparatus, movable platform, and computer-readable storage medium

Info

Publication number
WO2021081707A1
WO2021081707A1 · PCT/CN2019/113687 · CN2019113687W
Authority
WO
WIPO (PCT)
Prior art keywords
camera
measurement unit
inertial
inertial measurement
movable platform
Prior art date
Application number
PCT/CN2019/113687
Other languages
English (en)
French (fr)
Inventor
高文良
周游
叶长春
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201980034224.5A priority Critical patent/CN112204946A/zh
Priority to PCT/CN2019/113687 priority patent/WO2021081707A1/zh
Publication of WO2021081707A1 publication Critical patent/WO2021081707A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/685Vibration or motion blur correction performed by mechanical compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time

Definitions

  • This application relates to the field of automatic control, and in particular to a data processing method, a data processing device, a movable platform, and a computer-readable storage medium.
  • Movable platforms such as unmanned aerial vehicles (UAVs) can be used to perform surveillance, reconnaissance, and exploration missions for a variety of civilian, commercial, and military applications.
  • the movable platform can be manually controlled by a remote user, or can be operated in a semi-autonomous or fully autonomous manner.
  • the movable platform may include sensors configured to collect data that can be used during the operation of the unmanned aerial vehicle, such as image data, inertial data.
  • the movable platform can use the visual inertial odometer for positioning, attitude measurement, speed measurement, etc.
  • Visual inertial odometry (VIO, visual-inertial odometry) is an algorithm that combines the image frames collected by the camera and the inertial data measured by the IMU inertial measurement unit to perform positioning, attitude measurement, and speed measurement.
  • To implement visual inertial odometry, a camera with global shutter exposure and better synchronization performance is usually used.
  • the downward vision positioning camera is a camera with global shutter exposure, used for positioning, attitude measurement, speed measurement, etc., to stabilize the movable platform; however,
  • the main high-quality camera is a rolling shutter camera for image capture.
  • the downward vision positioning camera is close to the ground and there is no effective observation. Therefore, the visual odometer cannot be used to stabilize the movable platform during the take-off phase.
  • each embodiment of the present application provides a data processing method, a data processing device, a movable platform, and a computer-readable storage medium.
  • a data processing method is provided, which is applied to a movable platform on which a first camera and a first inertial measurement unit are provided; the first camera is used to collect image frames by means of a rolling shutter; the first inertial measurement unit is used to measure the inertial data of the movable platform, and the method includes the steps:
  • the estimated posture is optimized through the optimization equation, and the optimized posture of the first camera is generated.
  • a data processing method is provided, which is applied to a movable platform, and the movable platform is provided with a first camera, a second camera, and a first inertial measurement unit;
  • a camera is used to collect image frames in a rolling shutter mode;
  • the second camera is used to collect image frames in a global shutter mode;
  • the first inertial measurement unit is used to measure inertial data of the movable platform;
  • the method includes the steps:
  • the estimated pose is optimized and calculated to obtain the optimized pose.
  • a data processing device which is applied to a movable platform, and the movable platform is provided with a first camera and a first inertial measurement unit, and the first camera is used for Collecting image frames by means of a rolling shutter; the first inertial measurement unit is used to measure the inertial data of the movable platform;
  • the data processing device includes a processor and a memory; wherein,
  • the memory is used to store programs
  • the processor is coupled with the memory, and is configured to execute the program stored in the memory for:
  • the estimated posture is optimized through the optimization equation, and the optimized posture of the first camera is generated.
  • a data processing device which is applied to a movable platform, and the movable platform is provided with a first camera, a second camera, and a first inertial measurement unit;
  • a camera is used to collect image frames in a rolling shutter mode;
  • the second camera is used to collect image frames in a global shutter mode;
  • the first inertial measurement unit is used to measure inertial data of the movable platform;
  • the data processing device includes a processor and a memory; wherein,
  • the memory is used to store programs
  • the processor is coupled with the memory, and is configured to execute the program stored in the memory for:
  • the estimated pose is optimized and calculated to obtain the optimized pose.
  • a movable platform in another embodiment of the present application, includes:
  • the first camera is used to collect image frames in a rolling shutter mode
  • the first inertial measurement unit is used to measure the inertial data of the movable platform
  • One or more processors individually or collectively configured to:
  • the estimated posture is optimized through the optimization equation, and the optimized posture of the first camera is generated.
  • one or more non-transitory computer-readable storage media which has executable instructions stored thereon, and the executable instructions are executed by one or more processors.
  • the movable platform is provided with the first camera and the first inertial measurement unit; the first camera is used to collect image frames by means of a rolling shutter; the first inertial measurement unit is used to measure the inertial data of the movable platform.
  • a movable platform in another embodiment of the present application, includes:
  • the first camera is used to collect image frames in a rolling shutter mode
  • the second camera is used to collect image frames in a global shutter mode
  • the first inertial measurement unit is used to measure the inertial data of the movable platform
  • One or more processors individually or collectively configured to:
  • the estimated pose is optimized and calculated to obtain the optimized pose.
  • one or more non-transitory computer-readable storage media which has executable instructions stored thereon, and the executable instructions are executed by one or more processors.
  • the movable platform is provided with the first camera, the second camera, and the first inertial measurement unit; the first camera is used to collect image frames in a rolling shutter mode; the second camera is used to collect image frames in a global shutter mode; the first inertial measurement unit is used to measure the inertial data of the movable platform.
  • in the technical solutions provided by the embodiments of this application, the movable platform is provided with a first inertial measurement unit and a first camera; the first camera is used to collect image frames in a rolling shutter mode; an optimization equation is constructed from the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera; and the estimated posture of the first camera is optimized through the optimization equation to obtain the optimized posture of the first camera. This solves the problem in the prior art that, when the downward vision positioning camera has no effective observation, the visual odometer cannot be used to stabilize the movable platform.
  • on the other hand, the embodiments of this application provide a solution to accurately obtain the first camera posture, which is suitable for a movable platform with a stabilization gimbal: the stabilization gimbal directly uses the posture information of the first camera for motion compensation, which will not cause the picture to be skewed and helps to improve the shooting quality of the first camera.
  • on the other hand, for the mechanical structure of the stabilization gimbal, since it is no longer necessary to determine the first camera posture by combining the posture of the movable platform with each mechanical joint angle in the mechanical structure, but instead the first camera posture is obtained accurately and directly, the requirements for the installation accuracy and deformation of the mechanical structure of the gimbal can also be lowered.
  • Figure 1 is an example of skew in a captured image
  • FIG. 2 is a schematic flowchart of a data processing method provided by an embodiment of this application.
  • FIG. 3 is a schematic diagram of down-sampling some pixels in a color image into one pixel in an embodiment of the application
  • FIG. 4 is a schematic diagram of the exposure delay of the first camera in an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of a movable platform provided by an embodiment of this application.
  • FIG. 6 is a schematic diagram of a data processing method provided by an embodiment of this application.
  • FIG. 7 is a schematic flowchart of a data processing method provided by another embodiment of this application.
  • FIG. 8 is a schematic structural diagram of a data processing device provided by an embodiment of the application.
  • FIG. 9 is a schematic structural diagram of a data processing device provided by another embodiment of the application.
  • Visual-inertial odometry is an algorithm that combines camera and IMU (Inertial Measurement Unit) data to perform positioning, attitude measurement, speed measurement, etc.
  • IMU Inertial Measurement Unit
  • visual inertial odometers using cameras and IMU inertial measurement units are mainly divided into two types: loosely coupled and tightly coupled.
  • the loosely coupled visual inertial odometer uses two relatively independent modules, a visual motion estimation module and an inertial navigation motion estimation module, to estimate the motion state separately, and then fuses the output results of the two modules to obtain the final pose information.
  • the tightly coupled visual inertial odometer directly fuses the original information of the two sensors, estimates together, restricts each other and complements each other, makes full use of the characteristics of the sensor, and has high accuracy, but the algorithm has a large amount of calculation and the module coupling is obvious.
  • Monocular vision itself has great limitations and cannot directly output information for control.
  • Monocular vision systems usually have no scale. Due to the lack of stereo information and depth information in the images, the camera position and posture calculated through fundamental matrix, essential matrix, or homography constraints have no real physical scale; that is, the relative posture is relatively accurate, but the relative position is only known up to a scale factor. Likewise, the positions of three-dimensional points in space calculated by epipolar constraints and triangulation have no real physical scale; that is, the reconstructed structure has the right shape, but its size is only known up to a scale factor.
  • the IMU inertial measurement unit is a sensor that measures linear acceleration and angular velocity, and it has good instantaneous characteristics. Moreover, the acceleration output by the accelerometer of the IMU inertial measurement unit directly carries the scale of gravity, which helps the monocular vision system recover scale. However, due to bias, the output of the IMU inertial measurement unit cannot simply be integrated over time to obtain accurate speed, position, and attitude.
  • an image sensor with global shutter exposure and better synchronization performance is usually used as input. Since all pixels of a global-shutter camera are exposed at the same time, the image information of all pixels is collected at the same moment, which is closer to the assumptions that traditional vision systems make about the photographed scene (in particular optical flow, which requires all pixels to describe the scene at the same instant, with similar illumination and roughly the same motion).
  • High-quality cameras used for shooting usually have a larger photosensitive area, a wider sensitivity range, and greater latitude; they can adapt to a larger dynamic range and have less noise. Compared with an additional dedicated positioning camera, these properties are significant advantages for a visual positioning system.
  • high-quality main cameras usually have the following problems:
  • a high-quality camera typically uses rolling shutter exposure, that is, all pixels are exposed one by one in sequence and read out serially.
  • therefore, the image information of these pixels is not captured at the same instant, but in sequence.
  • the deviation caused by the sequential exposure of a rolling shutter camera will affect the information collected by the visual inertial system; for example, for a camera on a moving train that moves horizontally relative to the ground, the poles on the roadside appear tilted, see the image shown in Figure 1.
  • the exposure of a high-quality camera is controlled by the user.
  • the user can artificially modify the camera's exposure shutter time, gain, and exposure compensation, which is disadvantageous for a vision positioning system with high real-time requirements.
  • if the user has artificially adjusted the exposure parameters of the main camera's image sensor, the brightness values between adjacent image frames become inconsistent, and the illumination invariance required by the traditional vision system is destroyed;
  • the high-quality main camera is a color sensor.
  • the output of the three RGB (red, green, blue) channels of each pixel of the color sensor is usually realized by using Bayer filters.
  • among the pixel values output by the color image sensor, only the value of the corresponding channel is the true value; the values of the other channels are obtained by interpolating the corresponding values of neighboring pixels.
  • to the human eye, the difference between the interpolated and true values is indistinguishable, but for a visual positioning system that requires sub-pixel accuracy, accuracy is reduced when color pictures are used directly for visual positioning.
  • When a small camera is handheld or used for aerial photography, the captured image will shake and blur because the attitude of the carrier is unstable. Through a suitable stabilization method, the camera can be stabilized so that the captured image is steady and the motion is smooth.
  • the commonly used methods of stabilization include mechanical gimbal stabilization, electronic image stabilization and optical stabilization.
  • Optical image stabilization is realized by stabilizing the image sensor, and its stabilization range is small; electronic stabilization is realized by rotating and cropping the image, which crops the frame and reduces image quality.
  • Mechanical pan/tilt stabilization detects the movement of the camera in real time and controls motors to drive the mechanical frames to compensate for camera movements that would cause image jitter, achieving lossless anti-shake and producing a smooth, stable image.
  • Three-axis stabilization gimbal refers to a mechanical stabilization gimbal that has three rotating frames: outer-middle-inner, and can perform motion compensation and stabilization in three rotation degrees of freedom (yaw-pitch-roll).
  • the three-axis stabilization gimbal uses an IMU inertial measurement unit composed of a three-axis gyroscope and a three-axis acceleration sensor to measure and feedback the attitude of the camera.
  • a filtering algorithm fuses the IMU measurements with an external posture.
  • the IMU inertial measurement unit is fixedly connected to the high-definition camera to measure the inertial data of the high-definition camera.
  • the gimbal on an aerial photography UAV usually uses a filtering algorithm that fuses the IMU inertial measurement unit with an external attitude for attitude feedback. Since the posture obtained by directly using the IMU inertial measurement unit will drift during long-term use, it is necessary to filter it together with a posture that does not drift easily, such as the external posture, to obtain a more accurate posture.
  • the external attitude usually comes from the measurement of gravity by the three-axis accelerometer, or from the attitude of the aerial drone body combined with each mechanical joint angle, but it is not the actual attitude of the main camera image sensor. In use, errors in the fed-back posture therefore often lead to skewed images and other phenomena. At the same time, this places high requirements on the installation accuracy of the IMU inertial measurement unit and the image sensor, and on the measurement accuracy of the joint angles.
  • since the gimbal on an aerial photography drone usually uses attitude feedback that is not the actual attitude of the main camera, the image will be skewed during aerial photography, which greatly affects the aerial photography process and usually requires post-processing corrections.
  • Fig. 2 shows a schematic flowchart of a data processing method provided by an embodiment of the present application. As shown in the figure, the method provided in this embodiment is applied to a movable platform on which a first camera and a first inertial measurement unit are provided; the first camera is used to collect image frames in a rolling shutter mode; the first inertial measurement unit is used to measure the inertial data of the movable platform.
  • the method includes the steps:
  • the movable platform is further provided with a pan/tilt (gimbal), and the first camera is arranged on the pan/tilt; correspondingly, in the above step 101, "obtain the estimated posture of the first camera" can specifically be:
  • the estimated posture is determined through the motion parameters of the pan/tilt.
  • the pan/tilt is a three-axis pan/tilt; the three-axis pan/tilt includes an outer frame, a middle frame, and an inner frame; and the motion parameters of the pan/tilt include: a first rotation parameter of the movable platform body relative to the outer frame; a second rotation parameter of the outer frame relative to the middle frame; and a third rotation parameter of the middle frame relative to the inner frame.
  • the zero-point pose can be understood as a design parameter, that is, determined by design.
  • the first camera is installed on a three-axis stabilization gimbal. Because the gimbal performs stabilization and compensation rotations, and because other external control operations (such as control operations by the operator of the movable platform) may change the attitude and viewing angle of the gimbal (such as the pitch angle), the relative rotation between the first camera and the body of the movable platform (that is, the first inertial measurement unit) is not fixed. Therefore, it is necessary to roughly estimate the estimated posture of the first camera, R_ex and t_ex, through the joint angles of the gimbal mechanical frame.
  • the R_ex in the estimated pose of the first camera is equal to the relative rotation R_iC of the first camera with respect to the first inertial measurement unit; that is, the above step 1012 can use the following formula (1) to calculate R_ex in the estimated pose: R_ex = R_O · R_M · R_I · R_iC0 (1), where:
  • R_iC0 is the relative rotation of the first camera with respect to the first inertial measurement unit when the joint angles are 0; it is determined by design and is a known quantity;
  • R_O is the rotation of the movable platform body relative to the outer frame of the gimbal;
  • R_M is the rotation of the outer frame of the gimbal relative to the middle frame;
  • R_I is the rotation of the middle frame of the gimbal relative to the inner frame.
  • the pan/tilt also includes a first motor for driving the outer frame to rotate relative to the movable platform body, a second motor for driving the middle frame to rotate relative to the outer frame, and a third motor for driving the inner frame to rotate relative to the middle frame.
  • the above R_O, R_M, and R_I may be determined by reading the encoder counts of the motors corresponding to the first motor, the second motor, and the third motor, respectively.
  • the t_ex in the estimated posture of the first camera (that is, the relative displacement of the first camera with respect to the first inertial measurement unit) is usually determined during mechanical design; that is, when the movable platform is designed and installed, the position of the first camera on the movable platform is determined and will not change. Therefore, t_ex in the estimated pose of the first camera is a known quantity.
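  • As a minimal, non-authoritative illustration of the above (not part of the patent text), the estimated rotation can be composed from the encoder-derived frame rotations and the design-time camera-to-IMU rotation. In the Python sketch below, the axis assignment of the three gimbal motors, the numeric joint angles, and the identity R_iC0 are assumptions made purely for demonstration.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def estimate_camera_rotation(r_o, r_m, r_i, r_ic0):
    """Compose the estimated camera rotation R_ex = R_O * R_M * R_I * R_iC0.

    r_o:   rotation of the movable-platform body relative to the gimbal outer frame
    r_m:   rotation of the outer frame relative to the middle frame
    r_i:   rotation of the middle frame relative to the inner frame
    r_ic0: design-time rotation of the camera w.r.t. the IMU at zero joint angles
    All inputs are 3x3 rotation matrices (numpy arrays).
    """
    return r_o @ r_m @ r_i @ r_ic0

# Joint angles as they might be read from the three motor encoders (the axis
# mapping below is an assumption; the real gimbal defines which axis each motor drives).
yaw, pitch, roll = np.deg2rad([10.0, -5.0, 2.0])
R_O = R.from_euler('z', yaw).as_matrix()
R_M = R.from_euler('y', pitch).as_matrix()
R_I = R.from_euler('x', roll).as_matrix()
R_iC0 = np.eye(3)  # known design parameter; identity here only for illustration

R_ex = estimate_camera_rotation(R_O, R_M, R_I, R_iC0)
```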
  • an optimization algorithm can be used to construct an optimization equation.
  • the optimization equation can be understood as containing a constructed objective function and determining a set of solution parameters, where the solution parameters minimize the value of the objective function.
  • the solution parameters include the optimized pose of the first camera:
  • the objective function may be a linear function or a nonlinear function, and an iterative solution technique may be used to minimize the objective function, so as to obtain the optimized pose of the first camera.
  • the estimated pose is taken as the input parameter of the optimization equation, and the optimized pose of the first camera can be obtained after the optimization iterative process.
  • the inertial measurement unit mentioned in the embodiments herein refers to an electronic device that uses one or more acceleration sensors to measure the current speed and acceleration (for example, including translational speed and acceleration along each coordinate axis, and angular velocity and angular acceleration around each coordinate axis), and uses one or more gyroscopes to detect changes in direction, roll angle, and tilt posture.
  • some inertial measurement units can also include a magnetometer, which is mainly used to assist in calibrating the direction offset.
  • the inertial data measured by the inertial measurement unit in the embodiments herein may include, but are not limited to: angular velocity around each coordinate axis, translational speed along each coordinate axis, angular acceleration around each coordinate axis, and acceleration along each coordinate axis.
  • the coordinate system mentioned here may be the coordinate system of the movable platform, of course, it may also be other coordinate systems, which is not specifically limited in this embodiment.
  • in the technical solution provided by this embodiment, the movable platform is provided with a first inertial measurement unit and a first camera; the first camera is used to collect image frames in a rolling shutter mode; the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera are used to construct the optimization equation; and the estimated posture of the first camera is optimized through the optimization equation to obtain the optimized posture of the first camera. This solves the problem in the prior art that, when the downward vision positioning camera has no effective observation, the visual odometer cannot be used to stabilize the movable platform.
  • on the other hand, this embodiment provides a solution that can directly and accurately obtain the first camera attitude, so the stabilized gimbal can directly use the posture information of the first camera for motion compensation, which helps to improve the shooting quality of the first camera.
  • on the other hand, regarding the mechanical structure of the stabilized gimbal: in the prior art, the posture of the movable platform is first optimized using the images collected by the downward positioning camera and the inertial data measured by the first inertial measurement unit, and the posture of the movable platform is then combined with each mechanical joint angle in the mechanical structure to determine the first camera attitude. Since the first camera attitude is obtained indirectly, its accuracy may be low due to the low installation accuracy of the mechanical structure of the gimbal, unknown deformations, etc., and errors in the fed-back first camera attitude lead to phenomena such as skew of the images collected by the first camera. In the prior art, one way to improve the accuracy is to improve the installation accuracy of the mechanical structure of the pan/tilt, control deformation, and so on. In contrast, because the technical solution provided in this embodiment uses the image frames collected by the first camera and the inertial data measured by the first inertial measurement unit, the attitude of the first camera can be directly and accurately obtained, and the requirements for the installation accuracy and deformation of the mechanical structure of the pan/tilt can be reduced.
  • the output of the three RGB (red, green, blue) channels of each pixel in a color image is usually realized by using Bayer filters.
  • among the pixel values of each pixel in a color image, only the value of the corresponding channel is the true value, while the values of the other channels are obtained by interpolating the values of the corresponding neighboring pixels.
  • to the human eye, the difference between the interpolated and true values is indistinguishable, but for a visual positioning system that requires sub-pixel accuracy, accuracy is reduced when the color image is used directly for visual positioning. Therefore, in the method provided in the foregoing embodiment, before constructing the optimization equation, the method further includes the following steps:
  • the color image is converted into a grayscale image by performing down-sampling and filtering on each channel of the first camera separately and then adding, which can eliminate the influence of the Bayer filter on the tracking accuracy of feature points.
  • down-sampling 4x4, that is, combining 16 pixels into 1 pixel, can eliminate the influence of the Bayer filter.
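  • As a short sketch (not part of the patent text), the 4x4 down-sampling could be implemented as below; averaging each 4x4 tile mixes all Bayer channels into one grayscale value, which is the effect described above. The plain block-averaging approach is an assumption, since the patent only specifies per-channel down-sampling and filtering followed by addition.

```python
import numpy as np

def downsample_bayer_to_gray(raw, block=4):
    """Down-sample a raw Bayer mosaic by averaging block x block tiles
    (4x4 = 16 pixels -> 1 pixel), so every output pixel mixes all Bayer
    channels and the colour-filter pattern no longer biases feature tracking."""
    h, w = raw.shape
    h, w = h - h % block, w - w % block        # crop to a multiple of the block size
    tiles = raw[:h, :w].astype(np.float32)
    tiles = tiles.reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))             # grayscale image, 1/16 the pixel count

# usage: gray = downsample_bayer_to_gray(raw_frame)  # raw_frame: 2-D Bayer image
```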
  • the above step 102 "constructs an optimization equation based on the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera" includes:
  • the feature point algorithm can be used to extract one or more feature point information from the grayscale image.
  • the feature point information can be a part of the grayscale image, such as edges, corners, points of interest, spots, folds, etc.; these feature points can be distinguished from other points of the grayscale image.
  • Feature point algorithms may include, but are not limited to: edge detection algorithms, corner detection algorithms, blob detection algorithms, or wrinkle detection algorithms, and so on.
  • the method provided in this embodiment can extract one or more feature point information from a grayscale image by using a corner detection algorithm.
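  • As an illustrative sketch only (the patent does not prescribe a specific detector), corner feature points could be extracted from the down-sampled grayscale image with OpenCV; the Shi-Tomasi detector and the parameter values below are assumptions.

```python
import cv2
import numpy as np

def extract_corner_features(gray, max_corners=200):
    """Detect corner feature points in a single-channel grayscale image."""
    # goodFeaturesToTrack expects an 8-bit or float32 image; rescale to uint8.
    gray_u8 = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    corners = cv2.goodFeaturesToTrack(gray_u8, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=10)
    return corners  # array of shape (N, 1, 2), or None if nothing was found
```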
  • the above step S12 may be based on the tightly coupled visual inertial odometer system theory, using feature point information and inertial data measured by an inertial measurement unit to construct an optimization equation.
  • the method step 102, "construct the optimization equation based on the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera", can specifically include:
  • the setting requirements include, but are not limited to, at least one of the following (a sketch of such a key-frame check is given after this list):
  • the rotation angle change between the current image frame and the previous key frame is greater than a first threshold
  • the displacement change between the current image frame and the previous key frame is greater than a second threshold
  • the number of matching feature points between the current image frame and the previous key frame is less than a third threshold
  • the number of matching feature points of the current image frame and the previous key frame distributed in different image regions is less than the fourth threshold.
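  • A hedged sketch of the key-frame decision implied by the conditions above; the threshold values are application-specific, and reading the conditions as "any single one is sufficient" is an assumption.

```python
def is_key_frame(rotation_change, displacement_change,
                 n_matched, n_matched_regions,
                 thd_rot, thd_disp, thd_match, thd_regions):
    """Return True if the current image frame should become a new key frame.

    rotation_change / displacement_change: change w.r.t. the previous key frame
    n_matched:          number of feature points matched with the previous key frame
    n_matched_regions:  number of image regions that contain matched feature points
    thd_*:              the first to fourth thresholds (tuning parameters)
    """
    return (rotation_change > thd_rot
            or displacement_change > thd_disp
            or n_matched < thd_match
            or n_matched_regions < thd_regions)
```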
  • the image frames collected by the rolling shutter camera still have the problem of time alignment. Since the first camera is a rolling shutter exposure camera, its exposure is not triggered by the IMU hardware, and the exposure times are inconsistent, the first camera will have a delay, as shown in Figure 4. Because the image exposure delay affects the time alignment of the inertial data, which in turn affects positioning accuracy, it is necessary to estimate the exposure time difference t_d. The time difference t_d can be obtained by the following method:
  • the empirical value and the engineering measurement value are used as the input of a filter, and the time difference t_d is obtained through the processing of the filter.
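  • The patent does not say which filter is used; as one possible sketch, a first-order exponential filter could blend an empirical initial value of t_d with successive engineering measurements. The gain and the numeric values below are assumptions for illustration.

```python
def update_exposure_delay(t_d_prev, measured_delay, gain=0.1):
    """One step of a simple first-order (exponential) filter for the exposure
    time difference t_d: blend the previous estimate with a new measurement."""
    return (1.0 - gain) * t_d_prev + gain * measured_delay

t_d = 0.005                                   # empirical starting value in seconds (assumed)
for measurement in (0.0048, 0.0052, 0.0051):  # engineering measurement values (assumed)
    t_d = update_exposure_delay(t_d, measurement)
```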
  • step 102 the construction of an optimization equation based on the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera.
  • a second inertial measurement unit is also provided on the movable platform, and the second inertial measurement unit is used to measure the inertial data of the first camera.
  • the method provided in this embodiment may further include the following steps:
  • the first requirement may include but is not limited to:
  • the angular velocity of rotation around a preset coordinate axis contained in the first camera inertial data is all less than a fifth threshold; wherein, the coordinate axis may be an axis in the first camera coordinate system;
  • the displacement speeds along the coordinate axis direction contained in the first camera inertial data are all less than a sixth threshold.
  • the above steps 106 to 107 can be understood as: when the visual inertial odometer of the first camera works stably, the relative rotation R_iC between the IMU inertial measurement unit and the first camera in the latest frame can be used for the reference attitude feedback of the pan/tilt.
  • the strategy for judging the stability of the first camera is as follows:
  • the reference attitude of the main camera can be output.
  • the judgment method is as follows (a sketch of this check is given after the conditions below):
  • the output of the gyroscope in the second inertial measurement unit is recorded, and the maximum angular velocity of the rotation around the y-axis of the first camera coordinate system is less than the set fifth threshold ω_thd;
  • the translational speed along the y-axis of the first camera coordinate system measured by the second inertial measurement unit is less than the set sixth threshold
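  • A minimal sketch of this stability check, assuming the recent gyroscope samples about the camera y-axis, the translational speed, and both thresholds are supplied by the caller.

```python
import numpy as np

def first_camera_is_stable(gyro_y_history, vel_y, omega_thd, v_thd):
    """Return True if the first camera can be considered to be working stably:
    the recorded angular velocities about the first-camera y-axis and the
    translational speed along that axis must both stay below their thresholds."""
    return (np.max(np.abs(gyro_y_history)) < omega_thd) and (abs(vel_y) < v_thd)
```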
  • the above steps 105 to 107 can be executed before the method provided in the above embodiment is executed; that is, when it is determined through the above steps 105 to 107 that the first camera is working stably, the steps provided in the method of the above embodiment, such as steps 101 to 103, are then executed.
  • the method provided in this embodiment may further include the following steps:
  • the characteristic points that meet the second requirement include at least one of the following:
  • the reprojection error refers to the error between the projected point and the actual 2D corresponding point when a 3D point observed in one image frame is projected into another image frame using the rotation matrix R and the translation vector t. Because the camera pose is not known exactly and the observation points are noisy, a reprojection error is bound to exist.
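  • For illustration, the reprojection error of a single feature point could be computed as below; the pinhole projection with an intrinsics matrix K is an assumption, since the patent does not spell out the camera model.

```python
import numpy as np

def reprojection_error(P, p_obs, R_cw, t_cw, K):
    """Pixel distance between the observed point p_obs and the projection of
    the 3-D point P transformed by rotation R_cw and translation t_cw."""
    P_cam = R_cw @ P + t_cw        # express the point in the target camera frame
    uvw = K @ P_cam                # apply the (assumed) pinhole intrinsics
    p_proj = uvw[:2] / uvw[2]      # perspective division to pixel coordinates
    return np.linalg.norm(p_proj - p_obs)
```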
  • the above steps 108 to 109 can determine, after the method provided in the foregoing embodiment obtains the optimized posture of the first camera, whether the optimized pose should be fed back to the gimbal so that the gimbal can make a corresponding stabilization action response.
  • a second camera is also provided on the movable platform, and the second camera is used to collect image frames in a global shutter mode; the second camera is fixedly connected to the first inertial measurement unit; correspondingly,
  • the method provided in this embodiment may further include the following steps:
  • if the image frame collected by the second camera is not clear, or feature points cannot be extracted from it (for example, for a ground image taken at the moment of take-off, the image has too few feature points that are distinguishable from other points, so the second camera is considered to have no effective observation), the second camera is considered unavailable.
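  • One possible (assumed) way to implement this availability check is to combine an image-sharpness measure with a count of distinguishable feature points, as sketched below; the Laplacian-variance sharpness test and both thresholds are illustrative choices, not taken from the patent.

```python
import cv2

def second_camera_available(frame_gray, min_features=30, min_sharpness=50.0):
    """Decide whether the global-shutter camera provides an effective observation.

    frame_gray is expected to be an 8-bit single-channel image.
    """
    sharpness = cv2.Laplacian(frame_gray, cv2.CV_64F).var()          # blur check
    corners = cv2.goodFeaturesToTrack(frame_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    n_features = 0 if corners is None else len(corners)
    return sharpness >= min_sharpness and n_features >= min_features
```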
  • the method provided in this embodiment may further include the following steps:
  • the method provided in this embodiment may further include the following steps:
  • the movable platform 800 includes a body, a second camera 840 arranged on the body, a pan/tilt 810 arranged on the body, and the first camera 820 is arranged on the pan/tilt 810;
  • the first camera 820 can move relative to the body through the pan/tilt 810.
  • the fuselage is also provided with a first inertial measurement unit 860 and a second inertial measurement unit 821.
  • the first camera 820 is used to capture image frames in a rolling shutter mode; the second camera 840 is used to capture image frames in a global shutter mode; the first inertial measurement unit 860 is used to measure the movable The inertial data of the platform; the second inertial measurement unit 821 is used to measure the inertial data of the first camera 820.
  • the movable platform may further include: a power system 830.
  • the power system may include an electronic governor (referred to as an ESC for short), one or more propellers, and one or more motors corresponding to the one or more propellers.
  • the technical solution provided by the present application is a method for optimizing the pose of the first camera by using the image frames collected by the first camera and the inertial data measured by the first inertial measurement unit.
  • this solution can include the following major parts. They are: data preparation stage, data processing stage and output stage; see Figure 6.
  • the first part: the data preparation phase
  • the data preparation stage includes: image processing, time alignment.
  • Down-sampling 4x4, that is, combining 16 pixels of the image frame collected by the first camera into 1 pixel, can eliminate the influence of the Bayer filter.
  • the specific implementation plan is:
  • the first time stamp and the second time stamp are corrected by a preset exposure delay
  • the data processing stage includes: estimation of the estimated pose of the first camera; feature point extraction; tightly coupled pose optimization.
  • the estimated posture of the first camera includes: R_ex and t_ex.
  • R_ex can be obtained by using the following formula:
  • R_ex = R_O · R_M · R_I · R_iC0
  • R_iC0 is the relative rotation of the main camera with respect to the IMU when the joint angles are 0, determined by the design;
  • R_O is the rotation of the body relative to the outer frame of the gimbal;
  • R_M is the rotation of the outer frame of the gimbal relative to the middle frame;
  • R_I is the rotation of the middle frame of the gimbal relative to the inner frame;
  • for each frame, R_O, R_M, and R_I can be determined by reading the corresponding motor encoders.
  • t_ex is usually determined during mechanical design and will not change afterwards, so this value is a known quantity that can be obtained.
  • feature points are first extracted from the image frames, for example using a corner detection algorithm such as the Harris corner detector;
  • feature points are then tracked and matched between multiple images, for example with the Kanade-Lucas-Tomasi (KLT) feature tracker.
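  • A short sketch of such KLT tracking between two consecutive grayscale frames using OpenCV's pyramidal Lucas-Kanade implementation; the window size and pyramid depth are assumed values.

```python
import cv2
import numpy as np

def track_features_klt(prev_gray, curr_gray, prev_pts):
    """Track feature points from the previous frame into the current frame.

    prev_pts: array of shape (N, 1, 2) as returned by cv2.goodFeaturesToTrack.
    Returns the successfully tracked point pairs (previous, current).
    """
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts.astype(np.float32), None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]
```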
  • P_i is the three-dimensional coordinates of a certain feature point;
  • p_i is the pixel coordinates of this feature point on the i-th image frame, and a per-frame rotation and translation represent the transformation of the current frame relative to the previous frame;
  • R_ex, t_ex represent the external parameters between the first camera and the first inertial measurement unit;
  • R_iex represents the relative rotation between the first camera and the first inertial measurement unit in the i-th image frame. Since the first camera and the first inertial measurement unit are connected by a pan/tilt instead of a fixed connection, the relative rotation needs to be estimated for each frame in the sliding window. For the tightly coupled visual inertial odometer system theory and the details of the sliding window, please refer to the related content in the prior art, which will not be repeated here.
  • R_ex2 and t_ex2 represent external parameters between the second camera and the first inertial measurement unit. Since the second camera is fixedly connected to the second inertial measurement unit, only one relative rotation needs to be estimated;
  • t_d represents the exposure delay of the first camera, that is, the time difference;
  • r_b(·) represents the residual function of the first inertial measurement unit;
  • arg min indicates that the parameters (targets) of this optimization are P_i, R_iex, t_ex, R_ex2, t_ex2, and t_d.
  • r_b(·), the residual function of the first inertial measurement unit, is obtained during the pre-integration process of the inertial measurement unit.
  • the function of IMU pre-integration is to calculate the observed value of the IMU data (i.e. the pre-integrated value), the residual covariance matrix, and the Jacobian matrix.
  • the residual can be expressed as:
  • X can contain the above R_iex, t_ex, R_ex2, t_ex2, and t_d.
  • for the specific mathematical expression of the above residual, refer to the visual inertial odometer theory in the prior art, such as the content described in VINS-MONO, which will not be repeated here.
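  • To make the structure of the tightly coupled optimization concrete, the toy sketch below stacks a visual reprojection residual and a stand-in for the IMU residual r_b and minimizes them jointly over a single translation. It deliberately omits rotations, biases, the per-frame extrinsics R_iex, the second-camera extrinsics and the delay t_d, and every numeric value is invented for illustration; it is not the patent's solver or the VINS-MONO formulation.

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])   # assumed intrinsics
points_3d = np.array([[1.0, 0.2, 5.0], [-0.5, 0.1, 4.0]])     # assumed landmarks P_i
pix_obs = np.array([[380.0, 250.0], [290.0, 246.0]])          # assumed pixel observations p_i
imu_pos = np.array([0.05, 0.0, 0.02])                         # assumed IMU-predicted position

def residuals(t):
    """Stack visual reprojection residuals and a toy IMU residual for position t."""
    vis = []
    for P, p in zip(points_3d, pix_obs):
        uvw = K @ (P - t)                   # project the landmark with candidate position t
        vis.append(uvw[:2] / uvw[2] - p)    # reprojection residual in pixels
    imu = t - imu_pos                       # stand-in for the IMU residual r_b(.)
    return np.concatenate(vis + [imu])

sol = least_squares(residuals, x0=np.zeros(3))  # iterative minimisation of the objective
optimized_position = sol.x
```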
  • the relative rotation R_iC between the first inertial measurement unit and the first camera in the latest frame can be used for the attitude feedback of the pan/tilt.
  • the strategy for the first camera to work stably is as follows:
  • the optimized posture of the first camera can be output. Key frames are used to remove unreliable feature points and retain reliable feature points.
  • the strategy is as follows:
  • the technical solution provided by the embodiments of this application uses the second camera, the first camera, and the first inertial measurement unit to form a visual inertial odometer.
  • even when the second camera is not working, it (the visual inertial odometer) can continuously output position, attitude, speed, and other information;
  • the automatic estimation of the external parameters (relative attitude) of the first camera and the body IMU inertial measurement unit is directly used for the attitude feedback of the gimbal, so that the relative attitude of the gimbal is accurate, and there is no phenomenon such as skew.
  • the requirements for the installation accuracy and deformation of the mechanical structure of the gimbal can also be reduced.
  • FIG. 7 shows a schematic flowchart of a data processing method provided by another embodiment of the present application.
  • the method provided in this embodiment is applied to a movable platform on which a first camera, a second camera, and a first inertial measurement unit are provided; the first camera is used to collect image frames in a rolling shutter mode; the second camera is used to collect image frames in a global shutter mode; the first inertial measurement unit is used to measure inertial data of the movable platform.
  • the method includes:
  • the above step 302: "based on the image frames collected by the first camera, determining an available camera among the first camera and the second camera" may include:
  • the above step 302 "determine an available camera among the first camera and the second camera based on the image frames collected by the first camera" may also include the following steps:
  • step 303, performing an optimization calculation on the estimated pose according to the inertial data measured by the first inertial measurement unit and the image frames collected by the available camera to obtain the optimized pose, can specifically be:
  • the estimated posture is optimized by the optimization equation, and an optimized posture of the first camera is generated.
  • step 303, performing an optimization calculation on the estimated pose according to the inertial data measured by the first inertial measurement unit and the image frames collected by the available camera to obtain the optimized pose, can specifically be:
  • for step 3031', please refer to the visual inertial odometer technology in the prior art, realized by fusing the image frames collected by a camera with global shutter exposure and the inertial data measured by an inertial measurement unit, which will not be described in detail here.
  • the specific implementation process of the above step 3032' may be specifically: combining the pose information of the body of the movable platform with the estimated pose to obtain the optimized pose after optimization.
  • FIG. 8 shows a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • the solution provided by this embodiment is applied to a movable platform on which a first camera and a first inertial measurement unit are provided; the first camera is used to collect image frames in a rolling shutter mode; the first inertial measurement unit is used to measure the inertial data of the movable platform.
  • the data processing device includes:
  • the obtaining module 11 is used to obtain the estimated posture of the first camera
  • the construction module 12 is configured to construct an optimization equation based on the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera;
  • the optimization module 13 is used to optimize the estimated posture through the optimization equation to generate an optimized posture of the first camera.
  • the data processing device may further include: a down-sampling module.
  • the down-sampling module is used to down-sample the current image frame collected by the first camera to obtain a grayscale image.
  • the construction module 12 is also used for:
  • the optimization equation is constructed.
  • the construction module 12 is also used for:
  • the optimization equation is constructed according to the inertial data measured by the first inertial measurement unit and the current image frame and historical key frames collected by the first camera.
  • setting requirements include at least one of the following:
  • the rotation angle change between the current image frame and the previous key frame is greater than a first threshold
  • the displacement change between the current image frame and the previous key frame is greater than a second threshold
  • the number of matching feature points between the current image frame and the previous key frame is less than a third threshold
  • the number of matching feature points of the current image frame and the previous key frame distributed in different image regions is less than the fourth threshold.
  • the construction module 12 is also used for:
  • a second inertial measurement unit is further provided on the movable platform, and the second inertial measurement unit is used to measure the inertial data of the first camera; correspondingly, the data processing device further includes:
  • the acquiring module 11 is also configured to acquire the inertial data measured by the second inertial measurement unit during the process of acquiring the current image frame by the first camera to obtain the inertial data of the first camera;
  • a first determining module configured to determine whether the working state of the first camera meets the first requirement based on the inertial data of the first camera
  • the feedback module is used to feed back the optimized pose to the movable platform when the working state of the first camera meets the first requirement, so that the movable platform makes a response based on the optimized pose The corresponding control response.
  • the first requirement includes:
  • the angular velocity of the rotation around a preset coordinate axis contained in the inertial data of the first camera is all less than a fifth threshold
  • the displacement velocity along the coordinate axis direction contained in the first camera inertial data is all less than a sixth threshold.
  • the acquiring module 11 is also configured to acquire the number of feature points that meet the second requirement among all the feature points of the current image frame and the historical key frame;
  • the feedback module is further configured to feed back the optimized pose to the movable platform when the number of feature points is greater than the seventh threshold, so that the movable platform can make a response based on the optimized pose The corresponding control response.
  • characteristic points that meet the second requirement include at least one of the following:
  • the movable platform is also provided with a pan/tilt, and the first camera is set on the pan/tilt; the acquisition module 11 is also used to determine the estimated posture based on the motion parameters of the pan/tilt .
  • the pan/tilt is a three-axis pan/tilt; the three-axis pan/tilt includes: an outer frame, a middle frame, and an inner frame; and the motion parameters of the pan/tilt include:
  • the acquisition module 11 is also used to acquire the zero-point pose of the first camera relative to the body of the movable platform, wherein the zero-point pose is the pose of the first camera relative to the movable platform body when the motion parameters of the pan/tilt are all zero; and to determine the estimated posture according to the zero-point pose, the first rotation parameter, the second rotation parameter, and the third rotation parameter.
  • a second camera is also provided on the movable platform, and the second camera is used to collect image frames in a global shutter mode; the second camera is fixedly connected to the first inertial measurement unit;
  • the data processing device further includes:
  • the trigger module is configured to, when the image frames collected by the second camera are not available, trigger the construction module to execute the step of constructing the optimization equation based on the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera.
  • the data processing device may further include:
  • the second determining module is configured to determine the pose information of the body of the movable platform according to the optimized pose of the second camera when the image frames collected by the second camera are not available.
  • the data processing device further includes:
  • the second determining module is further configured to jointly optimize the inertial data measured by the first inertial measurement unit and the image frames collected by the second camera when the image frames collected by the second camera are available , Obtain the pose information of the fuselage of the movable platform.
  • FIG. 9 shows a schematic structural diagram of a data processing device provided by another embodiment of the present application.
  • the technical solution provided by this embodiment is applied to a movable platform on which a first camera, a second camera, a first inertial measurement unit, and a second inertial measurement unit are provided; the first camera is used to collect image frames in a rolling shutter mode; the second camera is used to collect image frames in a global shutter mode; the first inertial measurement unit is used to measure inertial data of the movable platform; the second inertial measurement unit is used to measure the inertial data of the first camera.
  • the data processing device includes:
  • the obtaining module 21 is used to obtain the estimated pose of the first camera
  • the determining module 22 is configured to determine an available camera among the first camera and the second camera based on the image frames collected by the second camera;
  • the optimization module 23 is configured to perform optimization calculation on the estimated pose to obtain an optimized pose according to the inertial data measured by the first inertial measurement unit and the image frames collected by the available camera.
  • determining module 22 is also used for:
  • the second camera is an available camera.
  • determining module 22 is also used for:
  • the first camera is an available camera.
  • the optimization module 23 is configured to:
  • the estimated posture is optimized through the optimization equation, and the optimized posture of the first camera is generated.
  • the optimization module 23 is configured to:
  • the estimated pose is optimized to obtain the optimized pose.
  • An embodiment of the present application also provides a data processing device, which is applied to a movable platform on which a first camera and a first inertial measurement unit are provided; the first camera is used to collect image frames in a rolling shutter mode; the first inertial measurement unit is used to measure inertial data of the movable platform.
  • the data processing device includes a processor and a memory; wherein,
  • the memory is used to store programs
  • the processor is coupled with the memory, and is configured to execute the program stored in the memory for:
  • the estimated posture is optimized through the optimization equation, and the optimized posture of the first camera is generated.
  • the processor is also used for:
  • the processor is specifically configured to construct an optimization equation based on the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera:
  • the optimization equation is constructed.
  • when the processor constructs the optimization equation based on the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera, it is specifically configured to:
  • the optimization equation is constructed according to the inertial data measured by the first inertial measurement unit and the current image frame and historical key frames collected by the first camera.
  • the processor is specifically configured to construct an optimization equation based on the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera:
  • a second inertial measurement unit is further provided on the movable platform, and the second inertial measurement unit is used to measure the inertial data of the first camera;
  • the processor is also used for:
  • the optimized pose is fed back to the movable platform, so that the movable platform makes a corresponding control response based on the optimized pose.
  • processor is also used for:
  • the optimized pose is fed back to the movable platform, so that the movable platform can make a corresponding control response based on the optimized pose.
  • Another embodiment of the present application also provides a data processing device, which is applied to a movable platform.
  • the movable platform is provided with a first camera, a second camera, and a first inertial measurement unit; the first camera is used to collect image frames in a rolling shutter mode; the second camera is used to collect image frames in a global shutter mode; the first inertial measurement unit is used to measure inertial data of the movable platform.
  • the data processing device includes a processor and a memory; wherein,
  • the memory is used to store programs
  • the processor is coupled with the memory, and is configured to execute the program stored in the memory for:
  • the estimated pose is optimized and calculated to obtain the optimized pose.
  • the processor is specifically configured to determine an available camera among the first camera and the second camera based on the image frames collected by the first camera:
  • the second camera is an available camera.
  • processor is also used for:
  • the first camera is an available camera.
  • when the processor performs the optimization calculation on the estimated pose according to the inertial data measured by the first inertial measurement unit and the image frames collected by the available camera to obtain the optimized pose, it is specifically configured to:
  • the estimated posture is optimized through the optimization equation, and the optimized posture of the first camera is generated.
  • the data processing device may include a plurality of different components.
  • These components may be integrated circuits (ICs), or parts of integrated circuits, discrete electronic devices, or other suitable circuits.
  • a module of a board (such as a motherboard or an additional board) can also be used as a component incorporated into the computer system.
  • the processor may include one or more general-purpose processors, such as a central processing unit (CPU) or another processing device.
  • the processor may be a microprocessor, or one or more dedicated processors, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a digital signal processor (DSP).
  • the processor can communicate with the memory.
  • the memory may be a magnetic disk, an optical disk, a read only memory (ROM), flash memory, etc.
  • the memory may store instructions to be executed by the processor, and/or may cache information obtained from an external storage device.
  • Fig. 5 shows a schematic structural diagram of a movable platform provided by an embodiment of the present application.
  • the movable platform includes:
  • the first camera 820 is used to collect image frames in a rolling shutter mode
  • the first inertial measurement unit 860 is used to measure the inertial data of the movable platform
  • One or more processors which are individually or collectively configured to:
  • the estimated attitude is optimized through the optimization equation, and the optimized attitude of the first camera is generated.
  • the movable platform is, for example, an unmanned aerial vehicle, an unmanned vehicle, or the like.
  • the processor in the foregoing embodiment may also implement the technical solutions described in the foregoing method embodiments; for the specific implementation principles, please refer to the corresponding content in the foregoing method embodiments, which will not be repeated here.
  • embodiments of the present application also provide one or more non-transitory computer-readable storage media having executable instructions stored thereon, which, when executed by one or more processors, cause the computer system to at least:
  • the estimated attitude is optimized through the optimization equation, and the optimized attitude of the first camera is generated.
  • the movable platform is provided with the first camera and the first inertial measurement unit; the first camera is used to collect image frames in a rolling shutter mode; the first inertial measurement unit is used to measure the inertial data of the movable platform.
  • Fig. 5 shows a schematic structural diagram of a movable platform provided by another embodiment of the present application.
  • the movable platform includes:
  • the first camera 820 is used to collect image frames in a rolling shutter mode
  • the second camera 840 is used to collect image frames in a global shutter mode
  • the first inertial measurement unit 860 is used to measure the inertial data of the movable platform
  • One or more processors which are individually or collectively configured to:
  • the estimated pose is optimized and calculated to obtain the optimized pose.
  • the processor in the foregoing embodiment may also implement the technical solutions described in the foregoing method embodiments; for the specific implementation principles, please refer to the corresponding content in the foregoing method embodiments, which will not be repeated here.
  • the movable platform may also include a display controller and/or display device unit, a transceiver, an audio input and output unit, other input and output units, and so on.
  • these components included in the movable platform can be interconnected via a bus or internal connections.
  • the transceiver may be a wired transceiver or a wireless transceiver, such as a WIFI transceiver, a satellite transceiver, a Bluetooth transceiver, a 3G/4G/5G wireless communication signal transceiver, or a combination thereof.
  • the audio input and output unit may include a speaker, a microphone, an earpiece, and the like.
  • other input and output devices 770 may include USB ports, serial ports, parallel ports, printers, network interfaces, and so on.
  • an embodiment of the present application also provides one or more non-transitory computer-readable storage media having executable instructions stored thereon, and when the executable instructions are executed by one or more processors, the computer system is caused to at least:
  • the movable platform is provided with the first camera, the second camera, and the first inertial measurement unit; the first camera is used to collect image frames in a rolling shutter mode; the second camera is used to collect image frames in a global shutter mode; the first inertial measurement unit is used to measure the inertial data of the movable platform.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
  • each implementation manner can be implemented by software plus a necessary general hardware platform, and of course, it can also be implemented by hardware.
  • the above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute the methods described in each embodiment or in some parts of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the present application provides a data processing method, a data processing device, a movable platform, and a computer-readable storage medium. The data processing method is applied to a movable platform provided with a first camera and a first inertial measurement unit, the first camera being used to collect image frames in a rolling-shutter manner and the first inertial measurement unit being used to measure inertial data of the movable platform. The method includes the steps of: obtaining an estimated attitude of the first camera; constructing an optimization equation according to the inertial data measured by the first inertial measurement unit and the image frames collected by the first camera; and optimizing the estimated attitude through the optimization equation to generate an optimized attitude of the first camera. With the technical solutions provided by the embodiments of the present application, optimized pose information can still be output continuously and accurately even when the positioning camera (i.e., the camera exposed with a global shutter) cannot work.

Description

数据处理方法、装置、可移动平台及计算机可读存储介质 技术领域
本申请涉及自动控制领域,尤其涉及一种数据处理方法、数据处理装置、可移动平台及计算机可读存储介质。
背景技术
可移动平台,诸如无人飞行器(UAV)等,可以用于执行监视、侦察和勘探任务以供多种民用、商用和军事应用。可移动平台可以由远程用户手动控制,或者能够以半自主或全自主方式操作。可移动平台可以包括传感器,所述传感器被配置成用于收集可在无人飞行器操作期间使用的数据,如图像数据、惯性数据。
目前,可移动平台可使用视觉惯性里程计进行定位、姿态测量、速度测量等。视觉惯性里程计(VIO,visual-inertial odometry)是融合相机采集到的图像帧和IMU惯性测量单元测量得的惯性数据,进行定位、姿态测量、速度测量等的算法。实施视觉惯性里程计时,通常使用触发同步性能较好的全局快门曝光的相机。
对于设置有下视视觉定位相机以及高画质主相机的可移动平台来说,下视视觉定位机为全局快门曝光的相机,用于定位、姿态测量、速度测量等以稳定可移动平台;然而,高画质主相机为卷帘快门相机,用于影像拍摄。但在一些特定情况下,比如可移动平台起飞阶段,下视视觉定位相机贴近地面,没有有效观测,因此无法在起飞阶段使用视觉里程计稳定可移动平台。
发明内容
为应对现有技术中存在的问题,本申请各实施例提供了一种数据处理方法、数据处理装置、可移动平台及计算机可读存储介质。
在本申请的一个实施例中,提供了一种数据处理方法,应用于一可移动平台,所述可移动平台上设有第一相机和第一惯性测量单元,所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据,所述方法包括步骤:
获取所述第一相机的预估姿态;
根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
在本申请的另一实施例中,提供了一种数据处理方法,应用于一可移动平台,所述可移动平台上设有第一相机、第二相机及第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据;
所述方法包括步骤:
获取所述第一相机的预估位姿;
基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
在本申请的又一实施例中,提供了一种数据处理装置,应用于一可移动平台,所述可移动平台上设有第一相机和第一惯性测量单元,所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可 移动平台的惯性数据;
所述数据处理装置包括处理器及存储器;其中,
所述存储器,用于存储程序;
所述处理器,与所述存储器耦合,用于执行所述存储器中存储的所述程序,以用于:
获取所述第一相机的预估姿态;
根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
在本申请的又一实施例中,提供了一种数据处理装置,应用于一可移动平台,所述可移动平台上设有第一相机、第二相机及第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据;
所述数据处理装置包括处理器及存储器;其中,
所述存储器,用于存储程序;
所述处理器,与所述存储器耦合,用于执行所述存储器中存储的所述程序,以用于:
获取所述第一相机的预估位姿;
基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
在本申请的又一实施例中,提供了一种可移动平台。该可移动平台包括:
第一相机,用于以卷帘快门的方式采集图像帧;
第一惯性测量单元,用于测量所述可移动平台的惯性数据;
以及
一个或多个处理器,其单独地或共同地被配置成用于:
获取所述第一相机的预估姿态;
根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
在本申请的又一实施例中,提供了一个或多个非暂时性计算机可读存储介质,其具有储存于其上的可执行指令,所述可执行指令在一个或多个处理器执行时,使所述计算机系统至少:
获取第一相机的预估姿态;
根据第一惯性测量单元测量得到的惯性数据以及第一相机采集到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态;
其中,可移动平台上设有所述第一相机及所述第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。
在本申请的又一实施例中,提供了一种可移动平台。该可移动平台包括:
第一相机,用于以卷帘快门的方式采集图像帧;
第二相机,用于以全局快门的方式采集图像帧;
第一惯性测量单元,用于测量所述可移动平台的惯性数据;
以及
一个或多个处理器,其单独地或共同地被配置成用于:
获取所述第一相机的预估位姿;
基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
在本申请的又一实施例中,提供了一个或多个非暂时性计算机可读存储介质,其具有储存于其上的可执行指令,所述可执行指令在一个或多个处理器执行时,使所述计算机系统至少:
获取第一相机的预估位姿;
基于第一相机采集到的图像帧,在所述第一相机与第二相机中确定一可用相机;
根据第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿;
其中,可移动平台上设有所述第一相机、所述第二相机及所述第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。
本申请实施例提供的技术方案中,可移动平台上设有第一惯性测量单元及第一相机;其中,第一相机用于以卷帘快门的方式采集图像帧;利用第一惯性测量单元测量得到的惯性数据以及第一相机采集到的图像帧,构建优化方程;通过优化方程对第一相机的预估姿态进行优化,得到第一相机的优化姿态;解决了现有技术中因下视视觉定位相机没有有效观测而无法使用视觉里程计稳定可移动平台的问题;另一方面,本申请实施例提供了一种准确地得到第一相机姿态的方案,对于存在有增稳云台的可移动平台来说,增稳云台直接使用第一相机的姿态信息来进行运动补偿,不会导致画面歪斜,有助于提高第一相机的拍摄画质;又一方面,对于机械结构的增稳云台来说,因无需结合可移动平台的姿态及机械结构中各个机械关节角来确定第一相机姿态,而是准确地得到第一相机姿态,所以对于云台的机械结构安装精度、变形的要求也可以降低。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为图像中出现歪斜影像的示例图;
图2为本申请一实施例提供的数据处理方法的流程示意图;
图3为本申请实施例中将彩色图像中部分像素降采样为一个像素的示意图;
图4为本申请实施例中第一相机的曝光延迟示意图;
图5为本申请一实施例提供的可移动平台的结构示意图;
图6为本申请一实施例提供的数据处理方法的原理性示意图;
图7为本申请另一实施例提供的数据处理方法的流程示意图;
图8为本申请一实施例提供的数据处理装置的结构示意图;
图9为本申请另一实施例提供的数据处理装置的结构示意图。
具体实施方式
下面对本文中涉及到的相关内容进行简要说明。
1、视觉惯性里程计
视觉惯性里程计(VIO,visual-inertial odometry),是融合相机和IMU(Inertial measurement unit,惯性测量单元)的数据,进行定位、姿态测量、速度测量等的算法。目前使用相机和IMU惯性测量单元的视觉惯性里程计主要分为松耦合和紧耦合两类。松耦合的视觉惯性里程计通过两个较为独立的视觉运动模块和惯性导航运动估计模块,分别进行运动的状态估计后,再进行 对各个模块输出结果进行融合,得到最终的位姿信息。紧耦合的视觉惯性里程计直接融合两个传感器的原始信息,共同估计,相互制约相互补充,充分利用了传感器的特性,精度较高,但算法的计算量较大,模块耦合较明显。
单目视觉本身具有较大的局限,无法直接输出用于控制的信息。单目视觉系统通常没有尺度,由于对于图片输出的立体信息、深度信息的缺失,通过基本矩阵、本质矩阵或单应矩阵约束计算得到的相机的位置姿态信息没有真实的物理尺度,即具有较为准确的相对姿态,但相对位置的尺度有缩放;通过对极约束和三角化计算得到的空间三维点的位置信息没有真实的物理尺度,即具有相似的形状,但尺寸有所缩放。
IMU惯性测量单元是测量线加速度与角速度的传感器,具有较好的瞬时特性。并且,IMU惯性测量单元的加速度计输出的加速度是直接具有重力尺度的,可以较好的辅助单目视觉恢复尺度。但由于有偏置,IMU惯性测量单元的输出无法直接通过对时间进行积分的方式得到较为准确的速度、位置和姿态。
同时使用视觉观测与IMU惯性测量单元的输出,通过视觉惯性里程计的技术,可以融合得到较为准确的位姿信息,并且IMU惯性测量单元的偏置不容易导致测量的漂移。
2、视觉惯性里程计中的相机
视觉惯性里程计中,通常使用触发同步性能较好的全局快门曝光的图像传感器作为输入。由于全局快门曝光的相机所有的像素在同一时刻进行曝光,所有像素的图像信息是在同一时刻进行采集的,这更加接近传统视觉系统(尤其是光流Optical Flow之类的,要求所有像素描述的是同一时刻的场景信息,并且光照程度相似,移动大致相同)对于被拍摄场景的假设。
而用于拍摄的高画质相机,通常具有较大的感光面积,较为宽广的感光度范围,较大的宽容度,能够适应更大的动态范围,更小的噪声。这些对于视觉定位系统,相对于额外的定位相机来说,都具有更大的优势。但是,高画质的主相机通常有以下问题:
高画质的相机一定是卷帘快门曝光,即所有的像素是按照顺序一一进行曝光,并串行读出的。这些像素的图像信息并非是同一时刻的,而是依次顺序的。在系统运动,即相机曝光时有运动的时候,卷帘快门(Rolling Shutter)曝光的相机曝光顺序带来的偏差会影响视觉惯性测量单元的信息采集,例如,在行进的火车上相对于地面水平运动的相机,拍摄的路旁电线杆是倾斜的,参见图1所示图像。
同时,高画质相机的曝光是由用户进行控制的,在拍摄过程中,用户可以人为的修改相机的曝光快门时间、增益以及曝光补偿,这对于实时性要求较高的视觉定位系统是较为不利的。例如,由于用户人为的调节了主相机图像传感器的曝光参数,造成相邻图像帧之间的亮度值不一致、破坏了传统视觉系统要求的光照不变性;
同时，高画质的主相机是彩色传感器。彩色传感器每个像素的三个RGB红绿蓝通道输出通常是采用拜尔滤镜实现的。彩色图像传感器输出的像素值中，只有相应通道的像素值是其真实值，而其他通道的值是由其相邻的相应像素值进行插值得到。对于人眼来说，插值与真实值的差异无法区分，但对于需要亚像素精度的视觉定位系统而言，直接使用彩色图片进行视觉定位时，会降低精度。
3、机械增稳云台
小型摄像机在手持拍摄或是航拍时,由于载体姿态不稳定,会导致画面晃动、模糊。通过一定的增稳方法,可以增稳摄像机,使拍摄的画面稳定,运动光滑。通常使用的增稳方法有机械云台增稳,电子防抖增稳和光学防抖增稳。光学防抖通过增稳图像传感器实现,增稳的范围较小;电子防抖增稳通过旋转剪裁图像实现增稳,会裁剪画幅并导致画质降低。机械云台增稳通过实时检测摄像机的运动,并控制电机驱动机械框架调整以补偿会带来画面抖动的摄像机运动,实现无损防抖,获得流畅稳定的画面。
三轴增稳云台是指有外-中-内三个旋转框架,可以在三个旋转自由度(偏航-俯仰-滚转)上进行运动补偿增稳的机械增稳云台。三轴增稳云台使用由三 轴陀螺仪和三轴加速度传感器构成的IMU惯性测量单元对摄像机的姿态进行测量和反馈,但由于IMU惯性测量单元长时间使用时存在漂移的特性,需要通过姿态滤波算法与外界姿态相融合。IMU惯性测量单元与高画质相机固联,用以测量高画质相机的惯性数据。
航拍无人机上的云台通常使用IMU惯性测量单元与外界姿态相融合的滤波算法进行姿态反馈。由于直接使用IMU惯性测量单元获取的姿态在长时间使用时会有漂移,需要与外界姿态等不易漂移的姿态进行算法滤波后得到较为准确的姿态。外界姿态通常来自三轴加速度计对重力的测量,或者是航拍无人机机体的姿态与各个机械关节角结合后的姿态,但都不是主相机图像传感器的实际姿态,在使用中,常常会出现反馈姿态错误导致图像画面歪斜等现象。同时,这对于IMU惯性测量单元与图像传感器的安装精度要求较高,对于关节角的测量精度要求也较高。
由上述内容可知:
由于航拍无人机上的云台通常使用姿态反馈并非实际主相机的姿态,会导致在航拍使用过程中出现图像画面歪斜的现象,对于航拍过程有很大影响,通常使用后期进行补偿修正。
对于设有定位相机与主相机的航拍无人机来说,由于在起飞前下视相机贴近地面,没有有效观测,无法在起飞阶段使用视觉惯性里程计稳定航拍无人机。
为解决或改善上述内容中提及的问题,提出了如下各实施例。为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
在本申请的说明书、权利要求书及上述附图中描述的一些流程中,包含了按照特定顺序出现的多个操作,这些操作可以不按照其在本文中出现的顺序来执行或并行执行。操作的序号如101、102等,仅仅是用于区分各个不同的操作,序号本身不代表任何的执行顺序。另外,这些流程可以包括更多或更少的操作,并且这些操作可以按顺序执行或并行执行。需要说明的是,本 文中的“第一”、“第二”等描述,是用于区分不同的消息、设备、模块等,不代表先后顺序,也不限定“第一”和“第二”是不同的类型。下文所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
图2示出了本申请一实施例提供的数据处理方法的流程示意图。如图所示,本实施例提供的所述方法应用于一可移动平台,所述可移动平台上设有第一相机和第一惯性测量单元,所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。
具体的,如图1所示,所述方法包括步骤:
101、获取所述第一相机的预估姿态;
102、根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
103、通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
在一种可实现的技术方案中,所述可移动平台上还设有云台,所述第一相机设置在所述云台上;相应的,上述步骤101“获取所述第一相机的预估姿态”可具体为:
通过云台的动作参数,确定所述预估姿态。
在一具体实施方案中,所述云台为三轴云台;所述三轴云台包括:外框架、中框架及内框架;以及所述云台的动作参数包括:可移动平台机身相对于所述外框架的第一旋转参数;所述外框架相对于所述中框架的第二旋转参数;所述中框架相对于所述内框架的第三旋转参数。
相应的,上述步骤“通过云台的动作参数,确定所述预估姿态”包括:
1011、获取所述第一相机相对于所述可移动平台机身的零点位姿;其中,所述零点位姿为所述云台的动作参数均为零时所述第一相机相对于所述可移动平台机身的位姿;
1012、根据所述零点位姿、所述第一旋转参数、所述第二旋转参数及所述第三旋转参数,确定所述预估位姿。
上述1011中,所述零点位姿可以理解为是设计参数,即由设计决定。
第一相机安装在三轴增稳云台上，由于云台存在增稳补偿的旋转动作，同时有可能存在其他外部控制操作（如可移动平台的操纵者的控制操作）控制云台的姿态视角（如俯仰角），导致第一相机与可移动平台的机身（即第一惯性测量单元）的相对旋转并不固定。因此，需要通过云台机械框架的关节角粗略估计第一相机的预估姿态：R_ex与t_ex。第一相机的预估姿态中的R_ex等于第一相机相对于第一惯性测量单元的相对旋转R_iC，即上述步骤1012可采用如下公式（1）计算得到预估位姿中的R_ex：
R_ex = R_iC = R_O·R_M·R_I·R_iC0     (1)
其中：R_iC0为关节角为0时，第一相机相对于第一惯性测量单元的相对旋转，由设计决定，是一个已知量；
R_O为可移动平台机身相对于云台外框架的旋转；
R_M为云台外框架相对于中框架的旋转；
R_I为云台中框架相对于内框架的旋转；
云台上还包含有：用于驱动外框架相对可移动平台机身旋转的第一电机、用于驱动中框架相对外框架旋转的第二电机以及驱动内框架相对中框架旋转的第三电机。上述R_O、R_M及R_I可通过读取第一电机、第二电机及第三电机各自对应的电机编码器的计数来确定。
第一相机的预估姿态中的t_ex（即第一相机相对于第一惯性测量单元的相对位移变换）通常在机械设计时确定，即在可移动平台设计及安装时，第一相机在可移动平台上的位置就已确定，且不会再发生变化。因此，第一相机的预估姿态中的t_ex是一个已知量。
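For illustration only (not part of the original disclosure), the composition R_ex = R_O·R_M·R_I·R_iC0 can be evaluated from the three motor encoder readings roughly as in the Python sketch below; the joint axis assignments, the encoder-to-angle conversion and all numeric values are assumptions.

```python
# Illustrative sketch: estimated camera rotation R_ex = R_O * R_M * R_I * R_iC0
# from gimbal joint angles. Axis assignments and encoder scaling are assumed.
import numpy as np
from scipy.spatial.transform import Rotation as R

def encoder_to_angle_rad(count, counts_per_rev=4096):
    # Hypothetical conversion from an encoder count to a joint angle in radians.
    return 2.0 * np.pi * count / counts_per_rev

def estimate_camera_rotation(outer_cnt, middle_cnt, inner_cnt, R_iC0):
    # R_iC0: camera rotation relative to the first IMU at zero joint angles (design value).
    R_O = R.from_euler("z", encoder_to_angle_rad(outer_cnt)).as_matrix()   # outer frame (yaw, assumed axis)
    R_M = R.from_euler("x", encoder_to_angle_rad(middle_cnt)).as_matrix()  # middle frame (roll, assumed axis)
    R_I = R.from_euler("y", encoder_to_angle_rad(inner_cnt)).as_matrix()   # inner frame (pitch, assumed axis)
    return R_O @ R_M @ R_I @ R_iC0

R_ex = estimate_camera_rotation(120, -40, 300, np.eye(3))  # identity R_iC0 used only as a stand-in
print(np.round(R_ex, 3))
```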
上述102中，可利用优化算法来构建优化方程。该优化方程可理解为：包含有构建出的目标函数，并确定了一组求解参数；所述求解参数使目标函数的值最小。其中，求解参数中包含有第一相机的优化位姿（即待优化的旋转与平移参数）。
当然,在具体实施时,还可基于实际需要增加其他参数,比如:图像帧上某一个特征点的三维坐标等。在一些实施方式中,目标函数可以是线性函数或非线性函数,可以使用迭代求解技术使目标函数最小化,以便获得第一相机的优化位姿。下文将结合具体实例对目标函数的具体实现进行说明。
上述103中,将预估姿态作为优化方程的入参,通过优化迭代过程后即可得到所述第一相机的优化位姿。
这里需要补充的是,本文各实施例中提及的惯性测量单元是指一种电子装置,其使用一个或多个加速度感应器测量被测体(即本实施中提及的可移动平台)当前的速度、加速度(例如包括各坐标轴方向上的平移速度、加速度;绕各坐标轴的角速度、角加速度等)等;使用一个或多个陀螺仪检测在方向、翻滚角度和倾斜姿态上的变化。当然,有一些惯性测量单元还可包括磁力计,主要用于协助校准方向偏移。本文中各实施例中的惯性测量单元测得的惯性数据可包括但不限于:绕各坐标轴的角速度、沿各坐标轴的平移速度、绕各坐标轴的角加速度、沿各坐标轴的角加速度。这里提到的坐标系可以是可移动平台的坐标系,当然也可以是其他坐标系,本实施例对此不作具体限定。
本实施例提供的技术方案中,可移动平台上设有第一惯性测量单元及第一相机;其中,第一相机用于以卷帘快门的方式采集图像帧;利用第一惯性测量单元测量得到的惯性数据以及第一相机采集到的图像帧,构建优化方程;通过优化方程对第一相机的预估姿态进行优化,得到第一相机的优化姿态;解决了现有技术中因下视视觉定位相机没有有效观测而无法使用视觉里程计稳定可移动平台的问题;另一方面,本实施例提供了一种可直接、准确地得到第一相机姿态的方案,对于存在有增稳云台的可移动平台来说,增稳云台直接使用第一相机的姿态信息来进行运动补偿,有助于提高第一相机的拍摄画质;又一方面,对于机械结构的增稳云台来说,现有技术中是利用下视定 位相机采集的图像及第一惯性测量单元采集的图像帧优化出可移动平台的姿态;然后再结合可移动平台姿态及机械结构中各个机械关节角来确定第一相机姿态;由于第一相机姿态是间接得到的,可能会因为云台的机械结构安装精度低、发生的未知形变等等,使得这样得到的第一相机姿态准确度低,进而会出现反馈的第一相机姿态错误导致第一相机采集的图像歪斜等现象;现有技术,要想提高精度一种方式是提高云台机械结构的安装精度、控制形变等;而本实施例提供的技术方案,因为是利用第一相机采集的图像帧及第一惯性测量单元测得的惯性数据,直接、准确地得到第一相机姿态,对于云台的机械结构安装精度、变形的要求也就可降低。
在第一相机拍摄出的图像是彩色图像时，彩色图像中每个像素的三个RGB红绿蓝通道输出通常是采用拜尔滤镜实现的。彩色图像各像素的像素值中，只有相应通道的像素值是其真实值，而其他通道的值是由其相邻的相应像素值进行插值得到。对于人眼来说，插值与真实值的差异无法区分，但对于需要亚像素精度的视觉定位系统而言，直接使用彩色图像进行视觉定位时，会降低精度。因此，上述实施例提供的所述方法中，在构建优化方程前，还包括如下步骤：
104、对所述第一相机采集到的当前图像帧进行降采样,以得到灰度图像。
通过对第一相机各通道进行分别降采样滤波后相加的方式，将彩色图片转换为灰度图，可以消除拜尔滤镜对于特征点追踪精度带来的影响。例如，图3所示的将4x4即16个像素降采样为1个像素，即可消除拜尔滤镜带来的影响。
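A minimal Python sketch of the 4x4-to-1 downsampling idea described above; it assumes the raw Bayer mosaic is available as a 2-D array, and the block size and simple averaging are illustrative choices.

```python
# Illustrative sketch: collapse each 4x4 tile of a raw RGGB Bayer mosaic into one
# grayscale pixel by block averaging, which mixes the R/G/B samples and avoids
# relying on demosaicing interpolation.
import numpy as np

def bayer_to_gray_downsample(raw, block=4):
    h, w = raw.shape
    h, w = h - h % block, w - w % block                   # crop to a multiple of the block size
    tiles = raw[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3)).astype(np.float32)

raw = np.random.randint(0, 1024, (480, 640)).astype(np.float32)  # fake 10-bit mosaic
print(bayer_to_gray_downsample(raw).shape)                       # (120, 160)
```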
在一种可实现的技术方案中,上述步骤102“根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程”,包括:
S11、提取所述灰度图像的特征点信息。
S12、根据所述惯性测量单元测量得到的惯性数据以及所述特征点信息,构建所述优化方程。
上述步骤S11中,可利用特征点算法从灰度图像中提取一个或多个特征点信息,特征点信息可以是灰度图像中的一部分,例如:边缘、角点、兴趣点、斑点、褶皱等;这些特征点可与所述灰度图像的其他点区别开来。特征点算法可包括但不限于:边缘检测算法、角点检测算法、斑点检测算法或褶皱检测算法等等。在具体实施,本实施例提供的所述方法可利用角点检测算法从灰度图像中提取一个或多个特征点信息。
在一种可实现的技术方案中,上述步骤S12可基于紧耦合的视觉惯性里程计系统理论,使用特征点信息及惯性测量单元测量得到的惯性数据构建优化方程。
在另一种可实现的技术方案中,上述实施例提供的所述方法步骤102“根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建所述优化方程”:可具体包括:
S21、当所述当前图像帧符合设定要求时,确定所述当前图像帧为关键帧;
S22、当确定所述当前图像帧为关键帧时,根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的当前图像帧和历史关键帧,构建所述优化方程。
其中,所述设定要求包括但不限于如下中的至少一种:
所述当前图像帧与前一关键帧之间的旋转角变化大于第一阈值;
所述当前图像帧与前一关键帧之间的位移变化大于第二阈值;
所述当前图像帧与前一关键帧的匹配特征点数量小于第三阈值;
所述当前图像帧与前一关键帧的匹配特征点分布在不同图像区域的数量小于第四阈值。
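The keyframe test built from the four conditions above can be sketched as follows; all threshold values, the grid size and the data layout are assumptions made for illustration.

```python
# Illustrative keyframe test combining the four conditions listed above.
# Thresholds, grid size and data layout are assumptions, not values from the original.
import numpy as np

def rotation_angle(R_rel):
    # Rotation angle (rad) of a relative rotation matrix.
    return float(np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)))

def is_keyframe(R_cur, t_cur, R_kf, t_kf, matched_px, img_wh=(640, 480), grid=(4, 4),
                rot_thr=0.2, trans_thr=0.5, match_thr=50, cell_thr=6):
    if rotation_angle(R_kf.T @ R_cur) > rot_thr:          # rotation change vs. previous keyframe
        return True
    if np.linalg.norm(t_cur - t_kf) > trans_thr:          # translation change vs. previous keyframe
        return True
    if len(matched_px) < match_thr:                       # too few matched feature points
        return True
    gx = np.clip((matched_px[:, 0] / img_wh[0] * grid[0]).astype(int), 0, grid[0] - 1)
    gy = np.clip((matched_px[:, 1] / img_wh[1] * grid[1]).astype(int), 0, grid[1] - 1)
    covered = len(set(zip(gx.tolist(), gy.tolist())))     # image regions that contain matches
    return covered < cell_thr
```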
采用卷帘快门相机采集的图像帧还存在时间对齐的问题。由于第一相机是卷帘快门曝光相机，第一相机的曝光并非由IMU硬件触发。由于曝光时刻不一致，第一相机会有一个延迟，参见图4所示。因为图像曝光延迟会影响到惯性数据的时间对准，进而影响了定位精度。因此，需要估计出曝光的时间差t_d。其中，时间差t_d可采用如下方法实现：
首先，获取经验值及工程测量值；
然后，将经验值与工程测量值作为滤波器的输入，经过滤波器的处理得到时间差t_d。
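The original text leaves the filter unspecified; the exponential blend below is only one possible stand-in for combining the empirical value with the engineering measurement to obtain t_d, and all numbers are hypothetical.

```python
# Illustrative placeholder for the unspecified filter: blend an empirical prior for
# the exposure delay t_d with a new engineering measurement.
def estimate_exposure_delay(t_d_prior, t_d_measured, alpha=0.1):
    return (1.0 - alpha) * t_d_prior + alpha * t_d_measured

t_d = 0.005                                   # empirical prior in seconds (assumed value)
for measured in (0.0062, 0.0058, 0.0061):     # hypothetical engineering measurements
    t_d = estimate_exposure_delay(t_d, measured)
print(f"estimated t_d = {t_d:.4f} s")
```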
视觉惯性里程计系统工作时,相机和IMU获取数据速率如图4所示。由图4可知,IMU工作速率远远大于相机图像获取频率,因此在两帧视觉图像之间,存在许多IMU数据。为此,需要利用前述时间差t d来获取两帧图像之间的所有惯性数据,以利用获取到的所有惯性数据及两帧图像来构建优化方程。即本实施例提供的所述方法中,步骤102“所述根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程”,包括:
S31、获取所述当前图像帧对应的第一时间戳和一历史图像帧对应的第二时间戳;
S32、通过预设的曝光延迟对所述第一时间戳和所述第二时间戳进行修正;
S33、获取修正后的所述第一时间戳和第二时间戳之间的惯性数据;
S34、根据所述修正后的所述第一时间戳和第二时间戳之间的惯性数据,以及所述第一相机在所述第一时间戳和所述第二时间戳间采集到的图像帧,构建优化方程。
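A sketch of the data preparation in steps S31-S34: shift the two frame timestamps by the preset exposure delay and gather the inertial samples lying between them; the timestamp layout and field names are assumed.

```python
# Illustrative sketch of steps S31-S34's data preparation: correct the two frame
# timestamps by the preset exposure delay and collect the IMU samples between them.
def imu_between_frames(imu_samples, t_hist_frame, t_curr_frame, t_d):
    # imu_samples: list of (timestamp, measurement) tuples sorted by time (assumed layout).
    t0 = t_hist_frame + t_d      # corrected second timestamp (historical frame)
    t1 = t_curr_frame + t_d      # corrected first timestamp (current frame)
    return [(ts, m) for ts, m in imu_samples if t0 <= ts <= t1]

imu = [(0.001 * k, {"gyro": (0.0, 0.0, 0.1), "acc": (0.0, 0.0, 9.8)}) for k in range(100)]
window = imu_between_frames(imu, t_hist_frame=0.020, t_curr_frame=0.053, t_d=0.006)
print(len(window), "IMU samples fall between the corrected timestamps")
```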
进一步的,所述可移动平台上还设有第二惯性测量单元,所述第二惯性测量单元用于测量所述第一相机的惯性数据。相应的,本实施例提供的所述方法还可包括如下步骤:
105、获取所述第一相机采集当前图像帧过程中所述第二惯性测量单元测量得到的惯性数据,得到所述第一相机的惯性数据;
106、基于所述第一相机的惯性数据,确定所述第一相机的工作状态是否满足第一要求;
107、在所述第一相机的工作状态符合第一要求的情况下,将所述优化位 姿反馈至所述可移动平台,以便所述可移动平台基于所述优化位姿做出相应的控制响应。
具体的,所述第一要求可包括但不限于:
所述第一相机惯性数据中含有的绕一预设的坐标轴旋转的角速度均小于第五阈值；其中，所述坐标轴可以是第一相机坐标系中的轴；
所述第一相机惯性数据中含有的沿所述坐标轴方向上的位移速度均小于第六阈值。
上述步骤106~107可理解为：当第一相机视觉惯性里程计工作稳定时，即可将最新帧IMU惯性测量单元与第一相机的相对旋转R_iC用于云台的参考姿态反馈。判断第一相机工作稳定的策略如下：
卷帘快门效应较小时，可以输出主相机参考姿态。判断方法如下：
本帧图像采集时，记录第二惯性测量单元中陀螺仪的输出，绕第一相机坐标系y轴旋转的最大角速度小于设定的第五阈值ω_thd：
ω_y < ω_thd
本帧图像采集时，第二惯性测量单元测量得到的沿第一相机坐标系的y轴的平移速度小于设定的第六阈值：
v_y < v_thd
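An illustrative check of the two conditions above; the threshold values stand in for the fifth and sixth thresholds and are not taken from the original.

```python
# Illustrative stability check: angular rate about the camera y-axis and translation
# speed along it must both stay below their thresholds while the frame is exposed.
import numpy as np

def camera_feedback_ok(gyro_y_samples, vel_y_samples, w_thd=0.3, v_thd=0.5):
    # w_thd / v_thd stand in for the fifth and sixth thresholds (assumed values).
    return (np.max(np.abs(gyro_y_samples)) < w_thd and
            np.max(np.abs(vel_y_samples)) < v_thd)

print(camera_feedback_ok([0.05, -0.12, 0.08], [0.10, 0.14]))
```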
上述步骤105~107可在上述实施例提供的所述方法执行前执行,即通过上述105~107判断第一相机工作稳定时,再执行上述实施例方法提供的各步骤,如步骤101~103。
进一步的,除判定第一相机工作稳定,来确定是否可将最新帧IMU惯性测量单元与主相机的相对旋转R iC用于云台的参考姿态反馈之外,还可通过判定可靠特征点较多时,也可输出采用上述实施例提供的方法得到的第一相机的优化姿态。这里在确定可靠特征点时,需利用历史关键帧,从中筛选出可靠的特征点。即本实施例提供的所述方法还可包括如下步骤:
108、获取所述当前图像帧和历史关键帧的所有特征点中符合第二要求的 特征点数量;
109、当所述特征点数量大于第七阈值时,将所述优化位姿反馈至所述可移动平台,以使得所述可移动平台能够基于所述优化位姿做出相应的控制响应。
具体实施时,满足所述第二要求的特征点包括如下中的至少一项:
最大重投影误差小于第八阈值的特征点;
在所述当前图像帧和历史关键帧中出现次数大于第九阈值的特征点;
当前优化得到的三维坐标与前次优化中的三维坐标差异小于第十阈值的特征点。
在一具体实现方案中,上述步骤108~109可理解为如下过程:
(1)遍历所有当前图像帧和历史关键帧中匹配的特征点，判断其中最大的重投影误差是否足够小（小于第八阈值）：
error = max‖p_i − π(R·P_i + t)‖
其中，π代表相机投影函数；重投影误差指利用旋转矩阵R和平移矩阵t将一帧图像3D点投影到另一帧图像中，投影点和实际2D对应点之间的误差。由于相机位姿未知以及观测点的噪声，重投影误差是必然存在的。
(2)判断在关键帧keyframe中出现的次数足够多（大于第九阈值，比如在80%的关键帧中都跟踪匹配成功）；
(3)遍历所有特征点的三维坐标P_i，判断其中每个点的三维位置在前次优化中的变化足够小（小于第十阈值）：
error = max‖P_i − P_i_last‖
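A sketch of the three reliability checks above; the concrete thresholds (stand-ins for the eighth to tenth thresholds), the pinhole projection and the data layout are assumptions.

```python
# Illustrative sketch of the three reliability checks above; thresholds and data
# layout are assumptions.
import numpy as np

def project(K, R, t, P):
    p = K @ (R @ P + t)                     # pinhole projection of a 3-D point
    return p[:2] / p[2]

def reliable_features(features, K, max_reproj_px=2.0, min_track_ratio=0.8, max_drift_m=0.05):
    kept = []
    for f in features:
        # f: {'P': 3-D estimate, 'P_last': previous optimization result,
        #     'obs': list of (R, t, pixel) keyframe observations, 'n_keyframes': window size}
        errs = [np.linalg.norm(px - project(K, R, t, f["P"])) for R, t, px in f["obs"]]
        if max(errs) >= max_reproj_px:                           # (1) max reprojection error too large
            continue
        if len(f["obs"]) / f["n_keyframes"] < min_track_ratio:   # (2) tracked in too few keyframes
            continue
        if np.linalg.norm(f["P"] - f["P_last"]) >= max_drift_m:  # (3) 3-D position changed too much
            continue
        kept.append(f)
    return kept
```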
由上述内容可知,上述步骤108~109可在上述实施例提供的所述方法得到第一相机的优化姿态后,再判断该优化位姿是否要反馈给云台,以便云台做出相应的增稳动作响应。
进一步的,所述可移动平台上还设有第二相机,所述第二相机用于以全局快门的方式采集图像帧;所述第二相机与所述第一惯性测量单元固联;相 应的,本实施例提供的所述方法还可包括如下步骤:
110、在所述第二相机采集的图像帧不可用的情况下,触发所述根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建优化方程的步骤。
具体的,第二相机采集的图像帧不清楚、或从中无法提取出特征点(如刚起飞时拍摄到的地面图像,该图像中具有区别于其他点的特征点太少,此时会认为第二相机无有效观测)等,均认为不可用。具体实施时,可通过图像识别方法或其他设定的图像判定规则,来判断所述第二相机采集的图像帧是否可用。
进一步的,本实施例提供的所述方法还可包括如下步骤:
111、在所述第二相机采集的图像帧不可用的情况下,根据所述第二相机的优化位姿,确定所述可移动平台机身的位姿信息。
进一步的,本实施例提供的所述方法还可包括如下步骤:
112、在所述第二相机采集的图像帧可用的情况下,对所述第一惯性测量单元测量得到的惯性数据以及所述第二相机采集到的图像帧进行联合优化,得到所述可移动平台机身的位姿信息。
为了便于理解,下面结合一具体可移动平台,对本申请实施例提供的技术方案进行说明。
如图5所示,所述可移动平台800包括:机身、设置在机身上的第二相机840、设置在机身上的云台810,第一相机820设置在所述云台810;第一相机820通过所述云台810可相对机身移动。所述机身上还设有第一惯性测量单元860及第二惯性测量单元821。所述第一相机820用于以卷帘快门的方式采集图像帧;所述第二相机840用于以全局快门的方式采集图像帧;所述第一惯性测量单元860用于测量所述可移动平台的惯性数据;第二惯性测量单元821用于测量所述第一相机820的惯性数据。进一步的,如图5所示, 该可移动平台还可包括:动力系统830。该动力系统可以包括电子调速器(简称为电调)、一个或多个螺旋桨以及与一个或多个螺旋桨相对应的一个或多个电机。
本申请提供的技术方案为利用第一相机采集的图像帧及第一惯性测量单元测量的惯性数据完成第一相机位姿优化的方法。具体的,本方案可包括如下几个大部分。分别为:数据准备阶段、数据处理阶段及输出阶段;参见图6所示。
第一部分,数据准备阶段
数据准备阶段包括:图像处理,时间对齐。
1.1、图像处理
将第一相机采集到的图像帧中4x4即16个像素降采样为1个像素,即对图像帧进行降采样处理可消除拜尔滤镜带来的影响。
1.2、时间差估计及时间戳修正
1.2.1 估计曝光的时间差，具体可参见上文中的相应内容，此处不作赘述。
1.2.2 时间戳修正
具体实现方案为:
首先、获取所述当前图像帧对应的第一时间戳和一历史图像帧对应的第二时间戳;
然后、通过预设的曝光延迟对所述第一时间戳和所述第二时间戳进行修正;
随后、获取修正后的所述第一时间戳和第二时间戳之间的惯性数据。
第二部分,数据处理阶段
数据处理阶段包括:估计第一相机预估姿态;特征点提取;紧耦合位姿优化。
2.1第一相机预估姿态
第一相机的预估姿态包括：R_ex与t_ex。其中，R_ex可采用如下公式得到：
R_ex = R_O·R_M·R_I·R_iC0
其中：
R_iC0为关节角为0时，主相机相对于IMU的相对旋转，由设计决定；
R_O为机体相对于云台外框架的旋转；
R_M为云台外框架相对于中框架的旋转；
R_I为云台中框架相对于内框架的旋转；
其中，各框架的旋转，即R_O、R_M、R_I，可通过读取对应的电机编码器计数确定得到。
另外，t_ex通常在机械设计时确定，之后不会发生改变，因此该值是一个可获得的已知量。
2.2特征点提取
判断第一相机当前采集的图像帧是否为关键帧;
针对当前图像帧及第一相机采集到的历史关键帧,先从中提取特征点(比如采用Harris Corner detection algorithms,角点检测算法),并做多帧图像之间特征点的跟踪匹配(Kanade–Lucas–Tomasi feature tracker)。
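The corner extraction and multi-frame KLT tracking mentioned above can be sketched with OpenCV as follows; the detector and tracker parameters are illustrative, not values from the original.

```python
# Illustrative OpenCV sketch of the corner extraction and KLT tracking mentioned
# above; detector/tracker parameters are not taken from the original.
import cv2
import numpy as np

def extract_and_track(prev_gray, curr_gray, max_corners=200):
    # prev_gray / curr_gray: single-channel 8-bit images.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=10,
                                       useHarrisDetector=True)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None,
                                                   winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return prev_pts.reshape(-1, 2)[ok], curr_pts.reshape(-1, 2)[ok]
```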
2.3、紧耦合位姿优化
基于紧耦合的视觉惯性里程计系统理论，紧耦合地使用所述特征点信息及修正时间戳后的惯性数据，构建优化方程；该优化方程的目标函数由各帧特征点的重投影误差项与第一惯性测量单元的残差项共同构成，并针对下文所列的求解参数取最小值。
其中，投影变换过程简写为p′=π(R·P_i+t)，π代表投影函数；注意，此处的第一相机投影过程使用的第一相机内参为默认值，并非实际值，实际值要在后面的优化步骤优化得到。
P_i为某个特征点的三维坐标，p_i是此特征点在第i帧图像上的像素坐标；R、t表示当前帧相对于前一帧的旋转平移变换；
R_ex、t_ex表示第一相机与第一惯性测量单元之间的外参，R_iex即表示在第i帧图像时，第一相机与第一惯性测量单元之间的相对旋转。由于第一相机与第一惯性测量单元由云台连接，并非固联，故对于滑动窗口中的每一帧，都需要估计相对旋转；这里需要补充的是：紧耦合的视觉惯性里程计系统理论中，存在一个滑动窗口机制，滑动窗口中有固定数量的图像帧，当有新采集到的图像帧时，滑动窗口就会移动以添加新采集的图像帧，并剔除滑动窗口中的一个历史图像帧。滑动窗口内的图像帧数量可以是5个、10个或者其他数量。有关滑动窗口的内容，可参见现有技术中的相关内容，本文不作赘述。
R_ex2、t_ex2表示第二相机与第一惯性测量单元之间的外参。由于第二相机与第一惯性测量单元固联，故仅需要估计一个相对旋转；
t_d表示第一相机的曝光延迟，即时间差；
r_b(·)表示第一惯性测量单元的残差函数；
arg代表本次优化的参数（目标）是各帧的旋转平移变换以及P_i、R_iex、t_ex、R_ex2、t_ex2、t_d。
这里需要说明的是：r_b(·)所表示的第一惯性测量单元的残差函数是在惯性测量单元预积分过程中得到的，其中，IMU预积分的作用是计算IMU数据的观测值（即IMU预积分值）以及残差的协方差矩阵和雅各比矩阵。该残差可以表示为以IMU预积分观测值及待优化状态X为输入的函数；其中，X可包含上述的各帧旋转平移变换以及P_i、R_iex、t_ex、R_ex2、t_ex2、t_d。具体的，上述残差的具体数学表达可参见现有技术中的视觉惯性里程计理论的内容，如VINS-MONO中描述的内容，此处不作赘述。
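As a simplified illustration of the reprojection term p_i − π(R·P_i + t) inside the optimization equation, the sketch below refines a single frame pose with SciPy; the IMU residual r_b, the extrinsics R_iex, t_ex, R_ex2, t_ex2 and the delay t_d that appear in the full equation are deliberately omitted, so this is not the complete optimization described above.

```python
# Simplified illustration: refine one frame pose by minimizing the reprojection
# residual p_i - pi(R*P_i + t). The IMU residual r_b, the extrinsics and t_d from
# the full optimization equation are omitted on purpose.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as Rot

def reprojection_residuals(x, P, pix, K):
    R = Rot.from_rotvec(x[:3]).as_matrix()
    t = x[3:6]
    cam = (R @ P.T).T + t
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return (pix - proj).ravel()

K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])
P = np.random.uniform([-1, -1, 4], [1, 1, 8], (30, 3))          # toy 3-D feature points
R_true = Rot.from_rotvec([0.02, -0.01, 0.03]).as_matrix()
t_true = np.array([0.10, -0.05, 0.02])
pix = (K @ ((R_true @ P.T).T + t_true).T).T
pix = pix[:, :2] / pix[:, 2:3]                                  # synthetic observations
sol = least_squares(reprojection_residuals, x0=np.zeros(6), args=(P, pix, K))
print("recovered translation:", np.round(sol.x[3:6], 3))
```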
第三部分、输出阶段
当第一相机工作稳定时，即可将最新帧第一惯性测量单元与第一相机的相对旋转R_iC用于云台的反馈。判断第一相机工作稳定的策略如下：
1、卷帘快门效应较小时，可以输出第一相机的优化姿态。判断方法如下：
本帧图像采集时，记录第二惯性测量单元中陀螺仪的输出，绕第一相机坐标系的y轴旋转的最大角速度小于设定的ω_thd：
ω_y < ω_thd
本帧图像采集时，沿第一相机坐标系y轴的平移速度较小：
v_y < v_thd
2、可靠的特征点较多时，可以输出第一相机的优化姿态。利用关键帧，去除不可靠的feature特征点，筛选出可靠的feature特征点。策略如下：
a)遍历所有特征点，判断其中最大的重投影误差是否足够小（小于某个阈值）：
error = max‖p_i − π(R·P_i + t)‖
b)判断在keyframe中出现的次数足够多（大于某个阈值，比如在80%的关键帧中都跟踪匹配成功）；
c)遍历所有特征点的三维坐标P_i，判断其中每个点的三维位置在前次优化中的变化足够小（小于某个阈值）：
error = max‖P_i − P_i_last‖
本申请实施例提供的技术方案,使用第二相机、第一相机和第一惯性测量单元组成视觉惯性里程计,在第二相机无法工作的时候,也能够持续输出位置、姿态、速度等位姿信息;同时,进行第一相机与机体IMU惯性测量单元的外参(相对姿态)的自动估计,直接用于云台的姿态反馈,使得云台的相对姿态准确,不出现歪斜等现象,对于云台的机械结构安装精度、变形的 要求也可以降低。
图7示出了本申请另一实施例提供的数据处理方法的流程示意图。本实施例提供的所述方法应用于一可移动平台,所述可移动平台上设有第一相机、第二相机和第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。具体的,参见图所示,所述方法包括:
301、获取所述第一相机的预估位姿;
302、基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
303、根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
在一种可实现的技术方案中,上述步骤302:“基于所述第一相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机”可包括:
3021、判定所述第二相机采集的图像帧是否符合图像要求;
3022、符合的情况下,确定所述第二相机为可用相机。
承接上述技术方案,上述步骤302“基于所述第一相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机”,还可包括如下步骤:
3023、不符合的情况下,确定所述第一相机为可用相机。
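For illustration, one way to implement the image requirement test in steps 3021-3023 is to count trackable corners in the global-shutter frame; the feature-count rule and its threshold are assumptions, since the original leaves the concrete criterion open.

```python
# Illustrative rule for steps 3021-3023: treat the global-shutter frame as usable
# when it contains enough trackable corners; the criterion and threshold are assumed.
import cv2

def pick_available_camera(second_cam_gray, min_features=30):
    # second_cam_gray: single-channel 8-bit frame from the global-shutter camera.
    pts = cv2.goodFeaturesToTrack(second_cam_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    n = 0 if pts is None else len(pts)
    return "second camera" if n >= min_features else "first camera"
```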
假设所述可用相机为第一相机;相应的,上述步骤303“根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿”,具体为:
3031、根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
3032、通过所述优化方程对所述预估姿态进行优化,生成所述第一相机 的优化姿态。
这里需要补充的是:有关上述步骤3031~3032更详尽的内容,可参见上文中相应的描述,此次不再赘述。
假设所述可用相机为第二相机;相应的,上述步骤303“根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿”,具体为:
3031’、对所述第一惯性测量单元测量得到的惯性数据以及所述第二相机采集到的图像帧进行联合优化,得到所述可移动平台机身的位姿信息;
3032’、根据所述可移动平台机身的位姿信息,对所述预估位姿进行优化得到所述优化位姿。
上述步骤3031’的具体实现过程,可参见现有技术中融合全局快门曝光的相机采集的图像帧及惯性测量单元测量得到的惯性数据实现的视觉惯性里程计技术,本文对此不作赘述。
上述步骤3032’的具体实现过程,可具体为:将可移动平台机身的位姿信息与所述预估位姿进行结合,得到优化后的所述优化位姿。
图8示出了本申请一实施例提供的数据处理装置的结构示意图。如图所示,本实施例提供的方案应用于一可移动平台,所述可移动平台上设有第一相机和第一惯性测量单元,所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。相应的,所述数据处理装置包括:
获取模块11,用于获取所述第一相机的预估姿态;
构建模块12,用于根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
优化模块13,用于通过所述优化方程对所述预估姿态进行优化,生成所 述第一相机的优化姿态。
进一步的,本实施例提供的所述数据处理装置还可包括:降采样模块。所述降采样模块,用于对所述第一相机采集到的当前图像帧进行降采样,以得到灰度图像。
进一步的,所述构建模块12还用于:
提取所述灰度图像的特征点信息;
根据所述惯性测量单元测量得到的惯性数据以及所述特征点信息,构建所述优化方程。
进一步的,所述构建模块12还用于:
当所述当前图像帧图像符合设定要求时,确定所述当前图像帧为关键帧;
当确定所述当前图像帧为关键帧时,根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的当前图像帧和历史关键帧,构建所述优化方程。
进一步的,所述设定要求包括如下中的至少一种:
所述当前图像帧与前一关键帧之间的旋转角变化大于第一阈值;
所述当前图像帧与前一关键帧之间的位移变化大于第二阈值;
所述当前图像帧与前一关键帧的匹配特征点数量小于第三阈值;
所述当前图像帧与前一关键帧的匹配特征点分布在不同图像区域的数量小于第四阈值。
进一步的,所述构建模块12还用于:
获取所述当前图像帧对应的第一时间戳和一历史图像帧对应的第二时间戳;
通过预设的曝光延迟对所述第一时间戳和所述第二时间戳进行修正;
获取修正后的所述第一时间戳和第二时间戳之间的惯性数据;
根据所述修正后的所述第一时间戳和第二时间戳之间的惯性数据,以及所述第一相机在所述第一时间戳和所述第二时间戳间采集到的图像帧,构建优化方程。
进一步的,所述可移动平台上还设有第二惯性测量单元,所述第二惯性测量单元用于测量所述第一相机的惯性数据;相应的,所述数据处理装置还包括:
所述获取模块11,还用于获取所述第一相机采集当前图像帧过程中所述第二惯性测量单元测量得到的惯性数据,得到所述第一相机的惯性数据;
第一确定模块,用于基于所述第一相机的惯性数据,确定所述第一相机的工作状态是否满足第一要求;
反馈模块,用于在所述第一相机的工作状态符合第一要求的情况下,将所述优化位姿反馈至所述可移动平台,以便所述可移动平台基于所述优化位姿做出相应的控制响应。
进一步的,所述第一要求包括:
所述第一相机惯性数据中含有的绕一预设的坐标轴旋转的角速度均小于第五阈值;
所述第一相机惯性数据中含有的沿所述坐标轴方向上的位移速度均小于第六阈值。
进一步的,所述获取模块11,还用于获取所述当前图像帧和历史关键帧的所有特征点中符合第二要求的特征点数量;
所述反馈模块,还用于当所述特征点数量大于第七阈值时,将所述优化位姿反馈至所述可移动平台,以使得所述可移动平台能够基于所述优化位姿做出相应的控制响应。
进一步的,满足所述第二要求的特征点包括如下中的至少一项:
最大重投影误差小于第八阈值的特征点;
在所述当前图像帧和历史关键帧中出现次数大于第九阈值的特征点;
当前优化得到的三维坐标与前次优化中的三维坐标差异小于第十阈值的特征点。
进一步的,所述可移动平台上还设有云台,所述第一相机设置在所述云台上;所述获取模块11,还用于通过云台的动作参数,确定所述预估姿态。
进一步的,所述云台为三轴云台;所述三轴云台包括:外框架、中框架及内框架;以及所述云台的动作参数包括:
可移动平台机身相对于所述外框架的第一旋转参数;
所述外框架相对于所述中框架的第二旋转参数;
所述中框架相对于所述内框架的第三旋转参数;
所述获取模块11,还用于获取所述第一相机相对于所述可移动平台机身的零点位姿;其中,所述零点位姿为所述云台的动作参数均为零时所述第一相机相对于所述可移动平台机身的位姿;根据所述零点位姿、所述第一旋转参数、所述第二旋转参数及所述第三旋转参数,确定所述预估位姿。
进一步的,所述可移动平台上还设有第二相机,所述第二相机用于以全局快门的方式采集图像帧;所述第二相机与所述第一惯性测量单元固联;
所述数据处理装置还包括:
触发模块,用于在所述第二相机采集的图像帧不可用的情况下,触发所述构建模块执行所述根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建优化方程的步骤。
进一步的,所述的数据处理装置还可包括:
第二确定模块,用于在所述第二相机采集的图像帧不可用的情况下,根据所述第二相机的优化位姿,确定所述可移动平台机身的位姿信息。
进一步的,所述的数据处理装置还包括:
第二确定模块,还用于在所述第二相机采集的图像帧可用的情况下,对所述第一惯性测量单元测量得到的惯性数据以及所述第二相机采集到的图像帧进行联合优化,得到所述可移动平台机身的位姿信息。
这里需要说明的是:上述实施例提供的数据处理装置可实现上述各方法实施例中描述的技术方案,上述各模块或单元具体实现的原理可参见上述各方法实施例中的相应内容,此处不再赘述。
图9示出了本申请另一实施例提供的数据处理装置的结构示意图。本实施例提供的技术方案应用于一可移动平台,所述可移动平台上设有第一相机、第二相机、第一惯性测量单元及第二惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。具体的,所述数据处理装置,包括:
获取模块21,用于获取所述第一相机的预估位姿;
确定模块22,用于基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
优化模块23,用于根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
进一步的,所述确定模块22还用于:
判定所述第二相机采集的图像帧是否符合图像要求;
符合的情况下,确定所述第二相机为可用相机。
进一步的,所述确定模块22还用于:
不符合的情况下,确定所述第一相机为可用相机。
进一步的,所述可用相机为第一相机时,所述优化模块23用于:
根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集 到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
进一步的,所述可用相机为第二相机时,所述优化模块23用于:
对所述第一惯性测量单元测量得到的惯性数据以及所述第二相机采集到的图像帧进行联合优化,得到所述可移动平台机身的位姿信息;
根据所述可移动平台机身的位姿信息,对所述预估位姿进行优化得到所述优化位姿。
这里需要说明的是:上述实施例提供的数据处理装置可实现上述各方法实施例中描述的技术方案,上述各模块或单元具体实现的原理可参见上述各方法实施例中的相应内容,此处不再赘述。
本申请一实施例还提供一种数据处理装置,应用于一可移动平台,所述可移动平台上设有第一相机和第一惯性测量单元,所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。具体的,所述数据处理装置包括处理器及存储器;其中,
所述存储器,用于存储程序;
所述处理器,与所述存储器耦合,用于执行所述存储器中存储的所述程序,以用于:
获取所述第一相机的预估姿态;
根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
进一步的,所述处理器在执行构建优化方程动作之前,还用于:
对所述第一相机采集到的当前图像帧进行降采样,以得到灰度图像。
进一步的,所述处理器在根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建优化方程时,具体用于:
提取所述灰度图像的特征点信息;
根据所述惯性测量单元测量得到的惯性数据以及所述特征点信息,构建所述优化方程。
进一步的,所述处理器在根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建所述优化方程时,具体用于:
当所述当前图像帧符合设定要求时,确定所述当前图像帧为关键帧;
当确定所述当前图像帧为关键帧时,根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的当前图像帧和历史关键帧,构建所述优化方程。
进一步的,所述处理器在所述根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建优化方程时,具体用于:
获取所述当前图像帧对应的第一时间戳和一历史图像帧对应的第二时间戳;
通过预设的曝光延迟对所述第一时间戳和所述第二时间戳进行修正;
获取修正后的所述第一时间戳和第二时间戳之间的惯性数据;
根据所述修正后的所述第一时间戳和第二时间戳之间的惯性数据,以及所述第一相机在所述第一时间戳和所述第二时间戳间采集到的图像帧,构建优化方程。
进一步的,所述可移动平台上还设有第二惯性测量单元,所述第二惯性测量单元用于测量所述第一相机的惯性数据;以及
所述处理器还用于:
获取所述第一相机采集当前图像帧过程中所述第二惯性测量单元测量得到的惯性数据,得到所述第一相机的惯性数据;
基于所述第一相机的惯性数据,确定所述第一相机的工作状态是否满足第一要求;
在所述第一相机的工作状态符合第一要求的情况下,将所述优化位姿反馈至所述可移动平台,以便所述可移动平台基于所述优化位姿做出相应的控制响应。
进一步的,所述处理器还用于:
获取所述当前图像帧和历史关键帧的所有特征点中符合第二要求的特征点数量;
当所述特征点数量大于第七阈值时,将所述优化位姿反馈至所述可移动平台,以使得所述可移动平台能够基于所述优化位姿做出相应的控制响应。
本申请又一实施例还提供一种数据处理装置,应用于一可移动平台,所述可移动平台上设有第一相机、第二相机及第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。具体的,所述数据处理装置包括处理器及存储器;其中,
所述存储器,用于存储程序;
所述处理器,与所述存储器耦合,用于执行所述存储器中存储的所述程序,以用于:
获取所述第一相机的预估位姿;
基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
进一步的,所述处理器基于所述第一相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机时具体用于:
判定所述第二相机采集的图像帧是否符合图像要求;
符合的情况下,确定所述第二相机为可用相机。
进一步的,所述处理器还用于:
不符合的情况下,确定所述第一相机为可用相机。
进一步的,所述处理器在所述可用相机为第一相机的情况下,根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿时,具体用于:
根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
可选地，上述实施例中提供的数据处理装置可以包括多个不同的部件，这些部件可以作为集成电路（integrated circuits，ICs），或集成电路的部分，离散的电子设备，或其它适用于电路板（诸如主板，或附加板）的模块，也可以作为并入计算机系统的部件。
可选地，处理器可以包括一个或多个通用处理器，诸如中央处理单元（central processing unit，CPU）或处理设备等。具体地，该处理器可以是微处理器，也可以是一个或多个专用处理器，诸如应用专用集成电路（application specific integrated circuit，ASIC），现场可编程门阵列（field programmable gate array，FPGA），数字信号处理器（digital signal processor，DSP）。
处理器可以与存储器通信。该存储器可以为磁盘、光盘、只读存储器(read only memory,ROM),闪存等。该存储器可以存储有处理器存储的指令,和/或,可以缓存一些从外部存储设备存储的信息。
图5示出了本申请一实施例提供的可移动平台的结构示意图。如图5所示,所述可移动平台包括:
第一相机820,用于以卷帘快门的方式采集图像帧;
第一惯性测量单元860,用于测量所述可移动平台的惯性数据;
以及
一个或多个处理器(图中未示出),其单独地或共同地被配置成用于:
获取所述第一相机的预估姿态;
根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
具体的,所述可移动平台为无人飞行器、无人驾驶车辆等。
这里需要说明的是：上述实施例中处理器还可实现上述各方法实施例中描述的技术方案，具体实现的原理可参见上述各方法实施例中的相应内容，此处不再赘述。
进一步的,本申请实施例还提供一个或多个非暂时性计算机可读存储介质,其具有储存于其上的可执行指令,所述可执行指令在一个或多个处理器执行时,使所述计算机系统至少:
获取第一相机的预估姿态;
根据第一惯性测量单元测量得到的惯性数据以及第一相机采集到的图像帧,构建优化方程;
通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
其中,可移动平台上设有所述第一相机及所述第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。
图5示出了本申请另一实施例提供的可移动平台的结构示意图。具体的,如图5所示,所述可移动平台包括:
第一相机820,用于以卷帘快门的方式采集图像帧;
第二相机840,用于以全局快门的方式采集图像帧;
第一惯性测量单元860,用于测量所述可移动平台的惯性数据;
以及
一个或多个处理器(图中未示出),其单独地或共同地被配置成用于:
获取所述第一相机的预估位姿;
基于所述第一相机采集到的图像帧、所述第二相机采集到的图像帧及所述第二惯性测量单元测量得到的惯性数据,从所述第一相机与所述第二相机中确定一可用相机;
根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
这里需要说明的是：上述实施例中处理器还可实现上述各方法实施例中描述的技术方案，具体实现的原理可参见上述各方法实施例中的相应内容，此处不再赘述。
可选地,除了处理器,本申请各实施例提供的可移动平台还可包括显示控制器和/或显示设备单元,收发器,音频输入输出单元,其他输入输出单元等。可移动平台包括的这些部件可以通过总线或内部连接互联。
可选地,收发器可以是有线收发器或无线收发器,诸如,WIFI收发器,卫星收发器,蓝牙收发器,3G/4G/5G无线通信信号收发器或其组合等。
可选地,该音频输入输出单元可以包括扬声器,话筒,听筒等。
可选地,其他输入输出设备770可以包括USB端口,串行端口,并行端口,打印机,网络接口等。
进一步的,本申请一实施例还提供了一个或多个非暂时性计算机可读存储介质,其具有储存于其上的可执行指令,所述可执行指令在一个或多个处理器执行时,使所述计算机系统至少:
获取第一相机的预估位姿;
基于第一相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
根据第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿;
其中,可移动平台上设有所述第一相机、所述第二相机及所述第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到各实施方式可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件。基于这样的理解,上述技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行各个实施例或者实施例的某些部分所述的方法。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其 限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (37)

  1. 一种数据处理方法,其特征在于,应用于一可移动平台,所述可移动平台上设有第一相机和第一惯性测量单元,所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据,所述方法包括步骤:
    获取所述第一相机的预估姿态;
    根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
    通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
  2. 根据权利要求1所述的方法,其特征在于,在构建优化方程前,还包括:
    对所述第一相机采集到的当前图像帧进行降采样,以得到灰度图像。
  3. 根据权利要求2所述的方法,其特征在于,根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程,包括:
    提取所述灰度图像的特征点信息;
    根据所述惯性测量单元测量得到的惯性数据以及所述特征点信息,构建所述优化方程。
  4. 根据权利要求1所述的方法,其特征在于,根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建所述优化方程具体包括:
    当所述当前图像帧符合设定要求时,确定所述当前图像帧为关键帧;
    当确定所述当前图像帧为关键帧时,根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的当前图像帧和历史关键帧,构建所述优化方程。
  5. 根据权利要求4所述的方法,其特征在于,所述设定要求包括如下中的至少一种:
    所述当前图像帧与前一关键帧之间的旋转角变化大于第一阈值;
    所述当前图像帧与前一关键帧之间的位移变化大于第二阈值;
    所述当前图像帧与前一关键帧的匹配特征点数量小于第三阈值;
    所述当前图像帧与前一关键帧的匹配特征点分布在不同图像区域的数量小于第四阈值。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程具体为:
    获取所述当前图像帧对应的第一时间戳和一历史图像帧对应的第二时间戳;
    通过预设的曝光延迟对所述第一时间戳和所述第二时间戳进行修正;
    获取修正后的所述第一时间戳和第二时间戳之间的惯性数据;
    根据所述修正后的所述第一时间戳和第二时间戳之间的惯性数据,以及所述第一相机在所述第一时间戳和所述第二时间戳间采集到的图像帧,构建优化方程。
  7. 根据权利要求1至5中任一项所述的方法,其特征在于,所述可移动平台上还设有第二惯性测量单元,所述第二惯性测量单元用于测量所述第一相机的惯性数据;
    所述方法还包括步骤:
    获取所述第一相机采集当前图像帧过程中所述第二惯性测量单元测量得到的惯性数据,得到所述第一相机的惯性数据;
    基于所述第一相机的惯性数据,确定所述第一相机的工作状态是否满足第一要求;
    在所述第一相机的工作状态符合第一要求的情况下,将所述优化位姿反馈至所述可移动平台,以便所述可移动平台基于所述优化位姿做出相应的控 制响应。
  8. 根据权利要求7所述的方法,其特征在于,所述第一要求包括:
    所述第一相机惯性数据中含有的绕一预设的坐标轴旋转的角速度均小于第五阈值;
    所述第一相机惯性数据中含有的沿所述坐标轴方向上的位移速度均小于第六阈值。
  9. 根据权利要求7所述的方法,其特征在于,还包括:
    获取所述当前图像帧和历史关键帧的所有特征点中符合第二要求的特征点数量;
    当所述特征点数量大于第七阈值时,将所述优化位姿反馈至所述可移动平台,以使得所述可移动平台能够基于所述优化位姿做出相应的控制响应。
  10. 根据权利要求9所述的方法,其特征在于,满足所述第二要求的特征点包括如下中的至少一项:
    最大重投影误差小于第八阈值的特征点;
    在所述当前图像帧和历史关键帧中出现次数大于第九阈值的特征点;
    当前优化得到的三维坐标与前次优化中的三维坐标差异小于第十阈值的特征点。
  11. 根据权利要求1至5中任一所述的方法,其特征在于,所述可移动平台上还设有云台,所述第一相机设置在所述云台上,获取所述第一相机的预估姿态,包括:
    通过云台的动作参数,确定所述预估姿态。
  12. 根据权利要求11所述的方法,其特征在于,所述云台为三轴云台;所述三轴云台包括:外框架、中框架及内框架;以及
    所述云台的动作参数包括:
    可移动平台机身相对于所述外框架的第一旋转参数;
    所述外框架相对于所述中框架的第二旋转参数;
    所述中框架相对于所述内框架的第三旋转参数。
  13. 根据权利要求12所述的方法,其特征在于,通过云台的动作参数,确定所述预估姿态,包括:
    获取所述第一相机相对于所述可移动平台机身的零点位姿;其中,所述零点位姿为所述云台的动作参数均为零时所述第一相机相对于所述可移动平台机身的位姿;
    根据所述零点位姿、所述第一旋转参数、所述第二旋转参数及所述第三旋转参数,确定所述预估位姿。
  14. 根据权利要求1至5中任一所述的方法,其特征在于,所述可移动平台上还设有第二相机,所述第二相机用于以全局快门的方式采集图像帧;所述第二相机与所述第一惯性测量单元固联;
    所述方法还包括步骤:
    在所述第二相机采集的图像帧不可用的情况下,触发所述根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建优化方程的步骤。
  15. 根据权利要求14所述的方法,其特征在于,还包括:
    在所述第二相机采集的图像帧不可用的情况下,根据所述第二相机的优化位姿,确定所述可移动平台机身的位姿信息。
  16. 根据权利要求14所述的方法,其特征在于,还包括:
    在所述第二相机采集的图像帧可用的情况下,对所述第一惯性测量单元测量得到的惯性数据以及所述第二相机采集到的图像帧进行联合优化,得到所述可移动平台机身的位姿信息。
  17. 一种数据处理方法,其特征在于在,应用于一可移动平台,所述可移动平台上设有第一相机、第二相机及第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据;
    所述方法包括步骤:
    获取所述第一相机的预估位姿;
    基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
    根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
  18. 根据权利要求17所述的方法,其特征在于,基于所述第一相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机,包括:
    判定所述第二相机采集的图像帧是否符合图像要求;
    符合的情况下,确定所述第二相机为可用相机。
  19. 根据权利要求18所述的方法,其特征在于,基于所述第一相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机,还包括:
    不符合的情况下,确定所述第一相机为可用相机。
  20. 根据权利要求17至19中任一项所述的方法,其特征在于,所述可用相机为第一相机时,根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿,具体为:
    根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
    通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
  21. 根据权利要求17至19中任一项所述的方法,其特征在于,所述可用相机为第二相机时,根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿,具体为:
    对所述第一惯性测量单元测量得到的惯性数据以及所述第二相机采集到的图像帧进行联合优化,得到所述可移动平台的机身的位姿信息;
    根据所述可移动平台的机身的位姿信息,对所述预估位姿进行优化得到所述优化位姿。
  22. 一种数据处理装置,其特征在于,应用于一可移动平台,所述可移动平台上设有第一相机和第一惯性测量单元,所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据;
    所述数据处理装置包括处理器及存储器;其中,
    所述存储器,用于存储程序;
    所述处理器,与所述存储器耦合,用于执行所述存储器中存储的所述程序,以用于:
    获取所述第一相机的预估姿态;
    根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
    通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
  23. 根据权利要求22所述的数据处理装置,其特征在于,所述处理器在执行构建优化方程动作之前,还用于:
    对所述第一相机采集到的当前图像帧进行降采样,以得到灰度图像。
  24. 根据权利要求23所述的数据处理装置,其特征在于,所述处理器在根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建优化方程时,具体用于:
    提取所述灰度图像的特征点信息;
    根据所述惯性测量单元测量得到的惯性数据以及所述特征点信息,构建所述优化方程。
  25. 根据权利要求22所述的数据处理装置,其特征在于,所述处理器在根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建所述优化方程时,具体用于:
    当所述当前图像帧符合设定要求时,确定所述当前图像帧为关键帧;
    当确定所述当前图像帧为关键帧时,根据所述第一惯性测量单元测量得 到的惯性数据以及所述第一相机采集到的当前图像帧和历史关键帧,构建所述优化方程。
  26. 根据权利要求22至25中任一项所述的数据处理装置,其特征在于,所述处理器在所述根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧构建优化方程时,具体用于:
    获取所述当前图像帧对应的第一时间戳和一历史图像帧对应的第二时间戳;
    通过预设的曝光延迟对所述第一时间戳和所述第二时间戳进行修正;
    获取修正后的所述第一时间戳和第二时间戳之间的惯性数据;
    根据所述修正后的所述第一时间戳和第二时间戳之间的惯性数据,以及所述第一相机在所述第一时间戳和所述第二时间戳间采集到的图像帧,构建优化方程。
  27. 根据权利要求22至25中任一项所述的数据处理装置,其特征在于,所述可移动平台上还设有第二惯性测量单元,所述第二惯性测量单元用于测量所述第一相机的惯性数据;以及
    所述处理器还用于:
    获取所述第一相机采集当前图像帧过程中所述第二惯性测量单元测量得到的惯性数据,得到所述第一相机的惯性数据;
    基于所述第一相机的惯性数据,确定所述第一相机的工作状态是否满足第一要求;
    在所述第一相机的工作状态符合第一要求的情况下,将所述优化位姿反馈至所述可移动平台,以便所述可移动平台基于所述优化位姿做出相应的控制响应。
  28. 根据权利要求27所述的数据处理装置,其特征在于,所述处理器还用于:
    获取所述当前图像帧和历史关键帧的所有特征点中符合第二要求的特征点数量;
    当所述特征点数量大于第七阈值时,将所述优化位姿反馈至所述可移动平台,以使得所述可移动平台能够基于所述优化位姿做出相应的控制响应。
  29. 一种数据处理装置,其特征在于,应用于一可移动平台,所述可移动平台上设有第一相机、第二相机及第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据;
    所述数据处理装置包括处理器及存储器;其中,
    所述存储器,用于存储程序;
    所述处理器,与所述存储器耦合,用于执行所述存储器中存储的所述程序,以用于:
    获取所述第一相机的预估位姿;
    基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
    根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
  30. 根据权利要求29所述的数据处理装置,其特征在于,所述处理器基于所述第一相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机时具体用于:
    判定所述第二相机采集的图像帧是否符合图像要求;
    符合的情况下,确定所述第二相机为可用相机。
  31. 根据权利要求30所述的数据处理装置,其特征在于,所述处理器还用于:
    不符合的情况下,确定所述第一相机为可用相机。
  32. 根据权利要求29至31中任一项所述的数据处理装置,其特征在于,所述处理器在所述可用相机为第一相机的情况下,根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿时,具体用于:
    根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
    通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
  33. 一种可移动平台,其特征在于,包括:
    第一相机,用于以卷帘快门的方式采集图像帧;
    第一惯性测量单元,用于测量所述可移动平台的惯性数据;
    以及
    一个或多个处理器,其单独地或共同地被配置成用于:
    获取所述第一相机的预估姿态;
    根据所述第一惯性测量单元测量得到的惯性数据以及所述第一相机采集到的图像帧,构建优化方程;
    通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态。
  34. 根据权利要求33所述的可移动平台,其特征在于,所述可移动平台为无人飞行器。
  35. 一个或多个非暂时性计算机可读存储介质,其具有储存于其上的可执行指令,所述可执行指令在一个或多个处理器执行时,使所述计算机系统至少:
    获取第一相机的预估姿态;
    根据第一惯性测量单元测量得到的惯性数据以及第一相机采集到的图像帧,构建优化方程;
    通过所述优化方程对所述预估姿态进行优化,生成所述第一相机的优化姿态;
    其中,可移动平台上设有所述第一相机及所述第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。
  36. 一种可移动平台,其特征在于,包括:
    第一相机,用于以卷帘快门的方式采集图像帧;
    第二相机,用于以全局快门的方式采集图像帧;
    第一惯性测量单元,用于测量所述可移动平台的惯性数据;
    以及
    一个或多个处理器,其单独地或共同地被配置成用于:
    获取所述第一相机的预估位姿;
    基于所述第二相机采集到的图像帧,在所述第一相机与所述第二相机中确定一可用相机;
    根据所述第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿。
  37. 一个或多个非暂时性计算机可读存储介质,其具有储存于其上的可执行指令,所述可执行指令在一个或多个处理器执行时,使所述计算机系统至少:
    获取第一相机的预估位姿;
    基于第一相机采集到的图像帧,在所述第一相机与第二相机中确定一可用相机;
    根据第一惯性测量单元测量得到的惯性数据及所述可用相机采集到的图像帧,对所述预估位姿进行优化计算得到优化位姿;
    其中,可移动平台上设有所述第一相机、所述第二相机及所述第一惯性测量单元;所述第一相机用于以卷帘快门的方式采集图像帧;所述第二相机用于以全局快门的方式采集图像帧;所述第一惯性测量单元用于测量所述可移动平台的惯性数据。
PCT/CN2019/113687 2019-10-28 2019-10-28 数据处理方法、装置、可移动平台及计算机可读存储介质 WO2021081707A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980034224.5A CN112204946A (zh) 2019-10-28 2019-10-28 数据处理方法、装置、可移动平台及计算机可读存储介质
PCT/CN2019/113687 WO2021081707A1 (zh) 2019-10-28 2019-10-28 数据处理方法、装置、可移动平台及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/113687 WO2021081707A1 (zh) 2019-10-28 2019-10-28 数据处理方法、装置、可移动平台及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021081707A1 true WO2021081707A1 (zh) 2021-05-06

Family

ID=74004607

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113687 WO2021081707A1 (zh) 2019-10-28 2019-10-28 数据处理方法、装置、可移动平台及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN112204946A (zh)
WO (1) WO2021081707A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113739819A (zh) * 2021-08-05 2021-12-03 上海高仙自动化科技发展有限公司 校验方法、装置、电子设备、存储介质及芯片

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950715B (zh) * 2021-03-04 2024-04-30 杭州迅蚁网络科技有限公司 无人机的视觉定位方法、装置、计算机设备和存储介质
CN113034594A (zh) * 2021-03-16 2021-06-25 浙江商汤科技开发有限公司 位姿优化方法、装置、电子设备及存储介质
CN114500842A (zh) * 2022-01-25 2022-05-13 维沃移动通信有限公司 视觉惯性标定方法及其装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140333741A1 (en) * 2013-05-08 2014-11-13 Regents Of The University Of Minnesota Constrained key frame localization and mapping for vision-aided inertial navigation
CN107888828A (zh) * 2017-11-22 2018-04-06 网易(杭州)网络有限公司 空间定位方法及装置、电子设备、以及存储介质
WO2018182524A1 (en) * 2017-03-29 2018-10-04 Agency For Science, Technology And Research Real time robust localization via visual inertial odometry
CN108780577A (zh) * 2017-11-30 2018-11-09 深圳市大疆创新科技有限公司 图像处理方法和设备
CN109074664A (zh) * 2017-10-26 2018-12-21 深圳市大疆创新科技有限公司 姿态标定方法、设备及无人飞行器
CN110246147A (zh) * 2019-05-14 2019-09-17 中国科学院深圳先进技术研究院 视觉惯性里程计方法、视觉惯性里程计装置及移动设备
CN110378968A (zh) * 2019-06-24 2019-10-25 深圳奥比中光科技有限公司 相机和惯性测量单元相对姿态的标定方法及装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9967463B2 (en) * 2013-07-24 2018-05-08 The Regents Of The University Of California Method for camera motion estimation and correction
FR3038431B1 (fr) * 2015-06-30 2017-07-21 Parrot Bloc camera haute resolution pour drone, avec correction des instabilites de type oscillations ondulantes
CN108605098B (zh) * 2016-05-20 2020-12-11 深圳市大疆创新科技有限公司 用于卷帘快门校正的系统和方法
CN109186592B (zh) * 2018-08-31 2022-05-20 腾讯科技(深圳)有限公司 用于视觉惯导信息融合的方法和装置以及存储介质
CN110375738B (zh) * 2019-06-21 2023-03-14 西安电子科技大学 一种融合惯性测量单元的单目同步定位与建图位姿解算方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140333741A1 (en) * 2013-05-08 2014-11-13 Regents Of The University Of Minnesota Constrained key frame localization and mapping for vision-aided inertial navigation
WO2018182524A1 (en) * 2017-03-29 2018-10-04 Agency For Science, Technology And Research Real time robust localization via visual inertial odometry
CN109074664A (zh) * 2017-10-26 2018-12-21 深圳市大疆创新科技有限公司 姿态标定方法、设备及无人飞行器
CN107888828A (zh) * 2017-11-22 2018-04-06 网易(杭州)网络有限公司 空间定位方法及装置、电子设备、以及存储介质
CN108780577A (zh) * 2017-11-30 2018-11-09 深圳市大疆创新科技有限公司 图像处理方法和设备
CN110246147A (zh) * 2019-05-14 2019-09-17 中国科学院深圳先进技术研究院 视觉惯性里程计方法、视觉惯性里程计装置及移动设备
CN110378968A (zh) * 2019-06-24 2019-10-25 深圳奥比中光科技有限公司 相机和惯性测量单元相对姿态的标定方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113739819A (zh) * 2021-08-05 2021-12-03 上海高仙自动化科技发展有限公司 校验方法、装置、电子设备、存储介质及芯片
CN113739819B (zh) * 2021-08-05 2024-04-16 上海高仙自动化科技发展有限公司 校验方法、装置、电子设备、存储介质及芯片

Also Published As

Publication number Publication date
CN112204946A (zh) 2021-01-08

Similar Documents

Publication Publication Date Title
WO2021081707A1 (zh) 数据处理方法、装置、可移动平台及计算机可读存储介质
WO2020014909A1 (zh) 拍摄方法、装置和无人机
US20190178436A1 (en) Method and system for controlling gimbal
US11073389B2 (en) Hover control
CN106873619B (zh) 一种无人机飞行路径的处理方法
WO2019113966A1 (zh) 一种避障方法、装置和无人机
WO2017000876A1 (zh) 对地定位或导航用相机、飞行器及其导航方法
US20190098217A1 (en) Systems and methods for rolling shutter correction
WO2019227441A1 (zh) 可移动平台的拍摄控制方法和设备
WO2017045326A1 (zh) 一种无人飞行器的摄像处理方法
CN108235815B (zh) 摄像控制装置、摄像装置、摄像系统、移动体、摄像控制方法及介质
WO2019104571A1 (zh) 图像处理方法和设备
US20220086362A1 (en) Focusing method and apparatus, aerial camera and unmanned aerial vehicle
CN110622091A (zh) 云台的控制方法、装置、系统、计算机存储介质及无人机
EP3531375A1 (en) Method and apparatus for measuring distance, and unmanned aerial vehicle
WO2019205087A1 (zh) 图像增稳方法和装置
WO2020227998A1 (zh) 图像增稳控制方法、拍摄设备和可移动平台
WO2021043214A1 (zh) 一种标定方法、装置及飞行器
WO2021217371A1 (zh) 可移动平台的控制方法和装置
WO2020181409A1 (zh) 拍摄装置参数标定方法、设备及存储介质
CN111801548A (zh) 用于后场生成的图像的沃罗诺伊裁剪
WO2020198963A1 (zh) 关于拍摄设备的数据处理方法、装置及图像处理设备
WO2019205103A1 (zh) 云台姿态修正方法、云台姿态修正装置、云台、云台系统和无人机
TWI726536B (zh) 影像擷取方法及影像擷取設備
WO2021056411A1 (zh) 航线调整方法、地面端设备、无人机、系统和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19951139

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19951139

Country of ref document: EP

Kind code of ref document: A1