WO2022040940A1 - Calibration method and apparatus, movable platform, and storage medium - Google Patents

Calibration method and apparatus, movable platform, and storage medium

Info

Publication number
WO2022040940A1
WO2022040940A1 (PCT/CN2020/111155)
Authority
WO
WIPO (PCT)
Prior art keywords
image
relative pose
grayscale image
photographing device
tof ranging
Prior art date
Application number
PCT/CN2020/111155
Other languages
English (en)
French (fr)
Inventor
李恺
徐彬
熊策
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/111155
Publication of WO2022040940A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present invention relates to the technical field of vision applications, and in particular, to a calibration method, a device, a movable platform and a storage medium.
  • TOF: Time of Flight
  • VR: Virtual Reality
  • the relative pose parameter between the TOF ranging device and the photographing device may represent the spatial relationship between the TOF ranging device and the photographing device, and the spatial relationship may include translational and/or rotational relationships.
  • in the actual production stage, during the installation of the TOF ranging device and the photographing device of a smart device, installation errors cause the relative pose between the TOF ranging device and the photographing device to change.
  • in this case, before the smart device leaves the factory, the relative pose parameters between the TOF ranging device and the photographing device can be calibrated offline to ensure that the depth image obtained by the TOF ranging device and the image obtained by the photographing device are registered.
  • however, after factory calibration, thermal expansion and contraction, collisions and vibration, or repeated disassembly and reassembly of the photographing device's lens can make the factory-calibrated relative pose parameters between the TOF ranging device and the photographing device inaccurate; therefore, how to improve the accuracy of these relative pose parameters is a technical problem that urgently needs to be solved.
  • the embodiments of the present application provide a calibration method, a device, a movable platform and a storage medium, which can automatically correct the relative pose parameters between the TOF ranging device and the photographing device, so as to improve the accuracy of the relative pose parameters.
  • a first aspect of the embodiments of the present application provides a calibration method, which is applied to a movable platform, and the movable platform is configured with a photographing device and a time-of-flight (TOF) ranging device, and the method includes:
  • acquiring a depth image output by the TOF ranging device, and a first grayscale image corresponding to the depth image;
  • acquiring an image output by the photographing device and obtaining a second grayscale image according to the image, or acquiring a second grayscale image output by the photographing device;
  • correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, to obtain corrected relative pose parameters.
  • a second aspect of an embodiment of the present application provides a calibration device, including a memory and a processor, the calibration device is applied to a movable platform, and the movable platform is configured with a photographing device and a TOF ranging device, wherein,
  • the memory is configured to store program code;
  • the processor calls the program code in the memory, and when the program code is executed, is configured to perform the following operations:
  • acquiring a depth image output by the TOF ranging device, and a first grayscale image corresponding to the depth image;
  • acquiring an image output by the photographing device and obtaining a second grayscale image according to the image, or acquiring a second grayscale image output by the photographing device;
  • correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, to obtain corrected relative pose parameters.
  • a third aspect of the embodiments of the present application provides a movable platform, and the movable platform includes:
  • a photographing device;
  • a TOF ranging device;
  • and the calibration device according to the second aspect.
  • a fourth aspect of the embodiments of the present application provides a computer storage medium, where computer program instructions are stored in the computer storage medium, and when the computer program instructions are executed by a processor, are used to execute the calibration method according to the first aspect.
  • in the embodiments of the present application, the movable platform obtains the depth image output by the TOF ranging device and the first grayscale image corresponding to the depth image, obtains the image output by the photographing device and obtains the second grayscale image according to the image, or acquires the second grayscale image output by the photographing device, and then corrects the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image to obtain the corrected relative pose parameters, which can improve the accuracy of the relative pose parameters.
  • FIG. 1 is a schematic flowchart of a calibration method according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a relative pose parameter correction method according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of a projection image according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a second feature object image according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a movable platform according to an embodiment of the present application.
  • the calibration method provided in the embodiments of the present application can be applied to a movable platform.
  • the movable platform may include a TOF ranging device, a photographing device, and the like.
  • the movable platform provided in the embodiments of the present invention may include smart devices with a computer vision module, such as a handheld stabilization system for stabilizing a photographing device, an unmanned aerial vehicle, a smartphone, a VR head-mounted display device, or an autonomous vehicle.
  • the TOF ranging device can be a 3D-TOF ranging device, a lidar, a high-resolution millimeter-wave radar, etc.
  • the TOF ranging device is used to output a depth image.
  • the photographing device can be a camera, a mobile phone, a camera module, etc.
  • the photographing device is used to output an image, such as an image obtained by photographing the surrounding environment of the movable platform, and the image can be a grayscale image, an infrared image or a color image.
  • the relative pose parameter between the TOF ranging device and the photographing device may be used to indicate the spatial relationship between the TOF ranging device and the photographing device, and the spatial relationship may include a translational relationship and a rotational relationship.
  • the TOF ranging device may include a transmitting device and a receiving device; the transmitting device can transmit an optical signal, the optical signal is reflected by a target object in the environment, and the receiving device can receive the returned optical signal and generate a depth image and a grayscale image corresponding to the depth image according to the received optical signal.
  • the transmitting device may be a light-emitting diode (Light Emitting Diode, LED for short) or a laser diode (Laser Diode, LD for short); the transmitting device is driven by a driving module of the TOF ranging device, and the driving module is controlled by a processing module of the TOF ranging device.
  • the processing module controls the driving module to output a driving signal to drive the transmitting device, where the frequency, duty cycle and the like of the driving signal output by the driving module can be controlled by the processing module; driven by the driving signal, the transmitting device emits a modulated optical signal, and the optical signal hits the target object.
  • Target objects can be users, buildings, cars, and so on.
  • the receiving device may include a photodiode, an avalanche photodiode, and a charge-coupled element.
  • the receiving device may be a receiving array of photodiodes, avalanche photodiodes, or charge-coupled elements.
  • the receiving device converts the optical signal into an electrical signal, and a signal processing module of the TOF ranging device processes the electrical signal output by the receiving device, for example by amplifying and filtering it.
  • the signal processed by the signal processing module is input into the processing module, and the processing module can convert the electrical signal into a depth image and a grayscale image.
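  • for reference only (not part of this publication), the standard time-of-flight relations that such a processing module typically relies on can be sketched as follows; the symbols delta_t, phase and f_mod are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(delta_t: float) -> float:
    """Depth from the measured round-trip time of the emitted light pulse."""
    return C * delta_t / 2.0

def depth_from_phase(phase: float, f_mod: float) -> float:
    """Depth from the phase shift of a continuous wave modulated at f_mod (Hz)."""
    return C * phase / (4.0 * math.pi * f_mod)
```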
  • FIG. 1 is a schematic flowchart of a calibration method proposed by an embodiment of the present application. As shown in FIG. 1 , the method may include:
  • S101 Acquire a depth image output by the TOF ranging device and a first grayscale image corresponding to the depth image.
  • the TOF ranging device can emit light signals to the surrounding environment of the movable platform to obtain a depth image and a first grayscale image corresponding to the depth image.
  • the TOF ranging device can then send the depth image and the first grayscale image to the movable platform.
  • the depth value of the pixels in the depth image in the space can be obtained through the depth image.
  • the grayscale value of the pixel point in the first grayscale image can be obtained through the first grayscale image.
  • there is a one-to-one correspondence between the pixels in the depth image and the pixels in the first grayscale image; for example, the pixel located in the first row and first column of the depth image and the pixel located in the first row and first column of the first grayscale image correspond to the same location in space.
  • S102 Acquire an image output by the photographing device, and obtain a second grayscale image according to the image, or obtain a second grayscale image output by the photographing device.
  • in one embodiment, assuming the image output by the photographing device is not a grayscale image, for example a color image, the movable platform can acquire the image output by the photographing device and then perform grayscale processing on the image to obtain a grayscale image corresponding to the image; this grayscale image is the second grayscale image.
  • in another embodiment, assuming the image output by the photographing device is a grayscale image, the movable platform can directly acquire the grayscale image output by the photographing device, and this grayscale image is the second grayscale image.
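  • as an illustration only, step S102 could be sketched with OpenCV roughly as follows; the BGR channel order and the helper function below are assumptions rather than details taken from this publication:

```python
import cv2

def get_second_grayscale(image):
    """Return the second grayscale image: convert a color frame, or pass a grayscale frame through."""
    if image.ndim == 3:  # color image, e.g. H x W x 3 in BGR order
        return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return image  # the photographing device already output a grayscale image
```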
  • the embodiment of the present application does not limit the execution order of step S101 and step S102.
  • for example, the movable platform can simultaneously acquire the depth image output by the TOF ranging device and the image or the second grayscale image output by the photographing device; as another example, the movable platform acquires the depth image output by the TOF ranging device and the image or the second grayscale image output by the photographing device within a preset time period, which is not specifically limited by the embodiments of the present application.
  • for example, the images acquired by the TOF ranging device and the photographing device may be obtained by capturing the same object: the TOF ranging device performs image acquisition on a specified object to obtain a depth image of the specified object, and the photographing device performs image acquisition on the same specified object to obtain an image of the specified object or the second grayscale image.
  • S103 Correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, to obtain corrected relative pose parameters.
  • in one embodiment, the movable platform can correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device in the translation direction and the rotation direction according to the depth image, the first grayscale image and the second grayscale image.
  • in this embodiment, the movable platform can correct the six parameters between the TOF ranging device and the photographing device, that is, the rotation parameters about the three axes and/or the translation parameters along the three axes; in other words, the pre-stored relative pose parameters between the TOF ranging device and the photographing device are corrected in the translation direction and/or the rotation direction according to the depth image, the first grayscale image and the second grayscale image, which ensures that the corrected relative pose parameters have high accuracy and further ensures that the accuracy in the rotation direction and/or the translation direction of each axis is optimal.
  • in one embodiment, the movable platform can correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device in at least one target translation direction and/or at least one target rotation direction according to the depth image, the first grayscale image and the second grayscale image.
  • the target translation direction can be the X-axis translation direction, the Y-axis translation direction, or the Z-axis translation direction.
  • the target rotation direction can be the X-axis rotation direction, the Y-axis rotation direction, or the Z-axis rotation direction.
  • the at least one target translation direction and/or the at least one target rotation direction may be determined by detecting a user's selection operation of a correction direction.
  • the at least one target translation direction and/or the at least one target rotation direction is determined according to detecting the user's translation direction selection operation and/or rotation direction selection operation.
  • the movable platform may be configured with an interaction device, the interaction device may detect a user's correction direction selection operation, and determine the at least one target translation direction and/or at least one target rotation direction according to the detected correction direction selection operation.
  • as another example, the control terminal may include an interaction device; the interaction device detects a user's correction direction selection operation and determines the at least one target translation direction and/or at least one target rotation direction according to the detected correction direction selection operation, and the control terminal sends information indicating the at least one target translation direction and/or the at least one target rotation direction to the movable platform.
  • in this embodiment, the movable platform can correct only the parameters with large errors, for example the rotation parameters of the TOF ranging device and the photographing device about any axis, and/or the translation parameters of the TOF ranging device and the photographing device along any axis.
  • since only the parameters with large errors are corrected, system resources, such as CPU resources and input/output (Input/Output, I/O) resources, can be saved while still ensuring that the accuracy of the relative pose parameters of the TOF ranging device and the photographing device is improved.
  • in one embodiment, the movable platform can correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device with a preset step size in the translation direction according to the depth image, the first grayscale image and the second grayscale image.
  • similarly, the movable platform can correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device with a preset step size in the rotation direction according to the depth image, the first grayscale image and the second grayscale image.
  • in this embodiment, correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device with a preset step size in the translation direction and/or the rotation direction according to the depth image, the first grayscale image and the second grayscale image can save system resources while ensuring that the accuracy of the relative pose parameters of the TOF ranging device and the photographing device is improved.
  • in one embodiment, before correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the movable platform can detect whether a preset calibration start condition is met, and when it is determined that the calibration start condition is met, trigger the correction of the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image.
  • compared with correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device in real time, the embodiments of the present application correct the pre-stored relative pose parameters only when the calibration start condition is satisfied, which can save system resources.
  • the ways for the movable platform to detect whether the preset correction start conditions are met may include the following two:
  • the movable platform detects whether the start-up signal of the movable platform is obtained. When the start-up signal is obtained, it is determined that the calibration start condition is satisfied, otherwise, the calibration start condition is not satisfied.
  • in this embodiment, the movable platform does not need to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device in real time; instead, every time the movable platform is powered on, the pre-stored relative pose parameters between the TOF ranging device and the photographing device are corrected, so that the movable platform can use the corrected relative pose parameters afterwards, for example to control the photographing device according to the corrected relative pose parameters.
  • the movable platform detects whether the user's calibration start operation is obtained, and when the calibration start operation is obtained, it is determined that the calibration start condition is satisfied, otherwise, the calibration start condition is not satisfied.
  • in this embodiment, the movable platform does not need to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device in real time; instead, when the user's calibration start operation is obtained, the pre-stored relative pose parameters between the TOF ranging device and the photographing device are corrected.
  • for example, the movable platform can acquire at least one historical image output by the photographing device at a historical moment and display the at least one historical image; if, by observing the at least one historical image, the user learns that the relative pose parameters between the TOF ranging device and the photographing device are inaccurate, the user can submit a calibration start operation to the movable platform, and when the movable platform detects the calibration start operation, it is determined that the calibration start condition is met.
  • the movable platform may store the corrected relative pose parameters.
  • for example, the movable platform can store the corrected relative pose parameters locally, store them in the cloud, or store them in another device, such as a control device of the movable platform; the control device can be a ground station, a remote controller, a mobile phone, or the like.
  • the movable platform after acquiring the corrected relative pose parameters, can control the photographing device according to the corrected relative pose parameters.
  • for example, the movable platform may control the photographing device according to the corrected relative pose parameters as follows: the movable platform obtains depth information from the depth image output by the TOF ranging device according to the corrected relative pose parameters, and controls the photographing device according to the depth information.
  • the movable platform can control one or more of focus, follow focus, zoom and sliding zoom of the photographing device according to the depth information.
  • the movable platform can control the photographing device to perform target tracking, visual positioning or face recognition according to the depth information.
  • in one embodiment, the movable platform may also determine a first target area in the shooting picture of the photographing device, then determine a second target area in the depth image according to the corrected relative pose parameters and the position of the first target area in the shooting picture, and obtain the depth information from the second target area in the depth image.
  • for example, in a face recognition scenario where the movable platform needs to obtain the depth information of a face, the movable platform can determine the first target area containing the face in the shooting picture of the photographing device, then determine the second target area in the depth image according to the corrected relative pose parameters and the position of the first target area in the shooting picture, and obtain the depth information from the second target area in the depth image; this depth information is the depth information of the face. By correcting the pre-stored relative pose parameters, the depth information corresponding to the face can be accurately determined from the depth image.
  • one or more of focus, follow focus, zoom and sliding zoom of the photographing device can be controlled according to the depth information of the human face.
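  • purely as a sketch of this region lookup (not the publication's implementation), one could back-project the depth image with assumed pinhole intrinsics K_tof and K_cam and the corrected relative pose (R, t), keep the points that fall inside the first target area, and take their median depth; all symbols below are illustrative assumptions:

```python
import numpy as np

def depth_for_target_area(depth, K_tof, K_cam, R, t, area):
    """Median depth of TOF points that project into the first target area (x0, y0, x1, y1)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    valid = z > 0
    # back-project TOF pixels to 3D points in the TOF frame
    x = (u - K_tof[0, 2]) * z / K_tof[0, 0]
    y = (v - K_tof[1, 2]) * z / K_tof[1, 1]
    pts = np.stack([x[valid], y[valid], z[valid]], axis=0)  # 3 x N
    # transform into the camera frame with the corrected relative pose, then project
    pc = R @ pts + t.reshape(3, 1)
    uc = K_cam[0, 0] * pc[0] / pc[2] + K_cam[0, 2]
    vc = K_cam[1, 1] * pc[1] / pc[2] + K_cam[1, 2]
    x0, y0, x1, y1 = area
    inside = (pc[2] > 0) & (uc >= x0) & (uc < x1) & (vc >= y0) & (vc < y1)
    return float(np.median(pc[2][inside])) if inside.any() else None
```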
  • the manner in which the movable platform determines the first target area in the photographed image of the photographing device may include one or more of the following:
  • the movable platform can determine the first target area image of the target object in the shooting picture in the surrounding environment according to the image tracking algorithm.
  • the movable platform can run an image tracking algorithm to determine the first target area image of the target object in the shot frame.
  • the image tracking algorithm may be a KLT tracking algorithm.
  • specifically, after acquiring the shooting picture, the movable platform can determine, in the acquired shooting picture, the image area that is most similar to the area image of the target object in a historical shooting picture, and that area image can be determined as the first target area image of the target object in the surrounding environment in the shooting picture.
  • the historical shooting picture may be a previous frame of the acquired shooting picture.
  • the target object can be selected by the user.
  • the movable platform can detect the user's operation of selecting a target area, and determine the first target area in the photographing screen of the photographing device according to the detected operation.
  • for example, if the user needs to focus on a tree in the shooting picture of the photographing device, the user may submit the target area selection operation by clicking on the area of the shooting picture where the tree to be focused is located.
  • the movable platform determines the first target area in the photographing screen of the photographing device according to the detected target area selection operation of the user, that is, the area where the tree that needs to be focused is located in the photographing screen.
  • the movable platform can determine the second target area in the depth image according to the corrected relative pose parameters and the position of the first target area in the shooting picture, and obtain depth information from the second target area in the depth image, That is, the depth information of the tree that needs to be focused.
  • the movable platform can control the photographing device to focus on the tree according to the depth information of the tree.
  • the movable platform can send the photographing image of the photographing device to the control terminal, then receive the area indication information sent by the control terminal, and determine the first target area in the photographing image of the photographing device according to the area indication information.
  • after receiving the shooting picture of the photographing device sent by the movable platform, the control terminal can display the shooting picture on its display screen, then detect motion data used to indicate the motion of the control terminal, generate area indication information according to the motion data, and send the area indication information to the movable platform, which then determines the first target area according to the area indication information.
  • alternatively, after receiving the shooting picture of the photographing device sent by the movable platform, the control terminal can display the shooting picture on its display screen, obtain information indicating the rotation of the eyeball of the user wearing the control terminal, generate area indication information according to that information, and send the area indication information to the movable platform, which then determines the first target area according to the area indication information.
  • alternatively, after receiving the shooting picture of the photographing device sent by the movable platform, the control terminal can display the shooting picture on its display screen, obtain the user's click or box-selection operation on the display screen (the user can select an object of interest in the image by clicking or box selection), generate area indication information according to the operation, and send the area indication information to the movable platform, which then determines the first target area according to the area indication information.
  • the motion data indicating the motion of the control terminal may be head rotation data of the user wearing the control terminal, etc.
  • for example, the control terminal may be video glasses; when the user wears the video glasses, the video glasses can detect the user's head rotation through a built-in motion sensor to generate the head rotation data.
  • the control terminal may detect information about the rotation of the eyeball of the user wearing the control terminal, and the worn control terminal may be, for example, video glasses or the like.
  • in the embodiments of the present application, the movable platform can correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image output by the TOF ranging device, the first grayscale image corresponding to the depth image, and the second grayscale image corresponding to the image output by the photographing device or the second grayscale image output by the photographing device, so as to obtain the corrected relative pose parameters, which can improve the accuracy of the relative pose parameters between the TOF ranging device and the photographing device.
  • based on the description of the above embodiments, and referring to FIG. 2, a relative pose parameter correction method proposed by an embodiment of the present application includes:
  • S201 Run a feature object detection algorithm on the first grayscale image to obtain a first feature object image.
  • after acquiring the depth image output by the TOF ranging device and the first grayscale image corresponding to the depth image, the movable platform can run the feature object detection algorithm on the first grayscale image to obtain the first feature object image.
  • the feature object may include an object edge
  • the feature object detection algorithm may be an edge extraction algorithm.
  • based on this, the movable platform may run an edge extraction algorithm on the first grayscale image to obtain the first feature object image, that is, the edge image of the first grayscale image.
  • in one embodiment, assuming the TOF ranging device captures a specified object in the surrounding environment of the movable platform to obtain the depth image, the movable platform can run a feature object detection algorithm on the first grayscale image to obtain a first feature object image containing the specified object.
  • for example, if the specified object is a building, the movable platform may run a building detection algorithm on the first grayscale image to obtain a first feature object image containing the building.
  • as another example, if the specified object is a person, the movable platform can run a human body detection algorithm on the first grayscale image to obtain a first feature object image including the human body, such as a face, hands or feet.
  • taking the schematic diagram of the projected image shown in FIG. 3 as an example, the depth image output by the TOF ranging device can be as shown in the lower-left image in FIG. 3, and the first grayscale image output by the TOF ranging device can be as shown in the upper-left image in FIG. 3.
  • after acquiring the first grayscale image, the movable platform can run a feature object detection algorithm on the first grayscale image to obtain the first feature object image, which can be as shown in the upper-right image in FIG. 3.
  • in one embodiment, before running the feature object detection algorithm on the first grayscale image to obtain the first feature object image, the movable platform may preprocess the first grayscale image to obtain a processed first grayscale image; the movable platform may then run the feature object detection algorithm on the processed first grayscale image to obtain the first feature object image.
  • the manner in which the movable platform preprocesses the first grayscale image may be: performing histogram equalization processing and/or Gaussian smoothing processing on the first grayscale image.
  • in this embodiment, by preprocessing the first grayscale image, the movable platform can give the processed first grayscale image a reasonable contrast and clearly display objects within a preset distance range, so as to improve the image quality of the first feature object image.
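  • as an illustration only, this preprocessing followed by edge extraction could be sketched with OpenCV as below; the kernel size and Canny thresholds are arbitrary assumptions, not values given in this publication:

```python
import cv2

def first_feature_object_image(first_gray):
    """Equalize and smooth the first grayscale image, then extract an edge image as the first feature object image."""
    eq = cv2.equalizeHist(first_gray)         # histogram equalization for reasonable contrast
    smooth = cv2.GaussianBlur(eq, (5, 5), 0)  # Gaussian smoothing to suppress noise
    return cv2.Canny(smooth, 50, 150)         # binary edge image (assumed thresholds)
```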
  • S202 Determine the position in space of the feature object in the first feature object image according to the first feature object image and the depth image.
  • in one embodiment, the movable platform can determine the position in space of the feature object in the first feature object image, that is, the three-dimensional coordinates of the feature object in space, according to the intrinsic parameters of the TOF ranging device, the first feature object image, and the depth image.
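  • a minimal sketch of this back-projection under an assumed pinhole model, where fx, fy, cx and cy stand for the intrinsic parameters of the TOF ranging device (symbols chosen here for illustration only):

```python
import numpy as np

def feature_points_3d(feature_mask, depth, fx, fy, cx, cy):
    """3D coordinates, in the TOF frame, of the feature-object pixels marked in feature_mask."""
    v, u = np.nonzero(feature_mask)       # pixel coordinates of feature (e.g. edge) points
    z = depth[v, u].astype(np.float64)    # depth values at those pixels
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)    # N x 3 array of points in space
```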
  • S203 Run a feature object detection algorithm on the second grayscale image to obtain a second feature object image.
  • in one embodiment, after acquiring the second grayscale image output by the photographing device, the movable platform may run a feature object detection algorithm on the second grayscale image to obtain the second feature object image.
  • alternatively, after acquiring the image output by the photographing device and obtaining the second grayscale image according to the image, the movable platform may run the feature object detection algorithm on the second grayscale image to obtain the second feature object image.
  • the second feature object image may be as shown in FIG. 4 .
  • the feature object may include an object edge
  • the feature object detection algorithm may be an edge extraction algorithm.
  • based on this, the movable platform may run an edge extraction algorithm on the second grayscale image to obtain the second feature object image, that is, the edge image of the second grayscale image.
  • in one embodiment, assuming the photographing device captures a specified object in the surrounding environment of the movable platform to obtain the second grayscale image, the movable platform can run a feature object detection algorithm on the second grayscale image to obtain a second feature object image containing the specified object.
  • for example, if the specified object is a building, the movable platform may run a building detection algorithm on the second grayscale image to obtain a second feature object image containing the building.
  • as another example, if the specified object is a person, the movable platform can run a human body detection algorithm on the second grayscale image to obtain a second feature object image including the human body, such as a face, hands or feet.
  • S204 Run an optimization algorithm according to the projected image and the second feature object image, to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
  • the projected image is obtained by projecting the pixel points in the first feature object image onto the imaging surface of the photographing device according to the position of the feature object in space and the pre-stored relative pose parameters.
  • the projected image can be as shown in the lower right image in Figure 3.
  • in this embodiment, if the pre-stored relative pose parameters are accurate, the pixel distributions of the projected image and the second feature object image should be consistent, that is, the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image is 0. If the pixel distributions of the projected image and the second feature object image are inconsistent, that is, the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image is greater than 0, it indicates that the pre-stored relative pose parameters are inaccurate, and the movable platform can run an optimization algorithm according to the projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
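  • for illustration, building the projected image from the 3D feature points and a candidate relative pose (R, t) could be sketched as follows, assuming a pinhole camera matrix K_cam and a known image size; none of these symbols are defined in the publication:

```python
import numpy as np

def project_feature_points(points_3d, R, t, K_cam, image_shape):
    """Binary projected image: 3D feature points mapped onto the photographing device's imaging surface."""
    pc = R @ points_3d.T + t.reshape(3, 1)                 # TOF frame -> camera frame
    z = pc[2]
    u = np.round(K_cam[0, 0] * pc[0] / z + K_cam[0, 2]).astype(int)
    v = np.round(K_cam[1, 1] * pc[1] / z + K_cam[1, 2]).astype(int)
    h, w = image_shape
    proj = np.zeros((h, w), dtype=np.uint8)
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    proj[v[ok], u[ok]] = 1
    return proj
```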
  • in this embodiment, the movable platform can run the optimization algorithm according to the projected image and the second feature object image by means of a traversal search or by constructing a search tree, so as to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
  • taking the traversal search as an example, the movable platform can obtain at least one relative pose parameter whose difference from the pre-stored relative pose parameter is less than a preset threshold; project the pixel points in the first feature object image onto the imaging surface of the photographing device according to a target relative pose parameter and the position of the feature object in space to obtain a target projection image, where the target relative pose parameter is any one of the at least one relative pose parameter; obtain the target distance between the pixel points of the target projection image and the pixel points of the feature object in the second feature object image; determine the minimum distance among the target distances and the distance between the pixel points of the above projected image and the pixel points of the feature object in the second feature object image; and calibrate the relative pose parameter corresponding to the minimum distance as the relative pose parameter of the photographing device and the TOF ranging device, that is, determine the relative pose parameter corresponding to the minimum distance as the corrected relative pose parameter.
  • the search tree constructed by the movable platform can be an N-ary search tree, where N is a natural number greater than or equal to two.
  • taking a binary search tree as an example, the movable platform can obtain a first relative pose parameter and a second relative pose parameter whose differences from the pre-stored relative pose parameter are less than the preset threshold, where the first relative pose parameter is smaller than the pre-stored relative pose parameter and the second relative pose parameter is greater than the pre-stored relative pose parameter; project the pixel points in the first feature object image onto the imaging surface of the photographing device according to the first relative pose parameter and the position of the feature object in space to obtain a first projection image, and obtain the first distance between the pixel points of the first projection image and the pixel points of the feature object in the second feature object image; project the pixel points in the first feature object image onto the imaging surface of the photographing device according to the second relative pose parameter and the position of the feature object in space to obtain a second projection image, and obtain the second distance between the pixel points of the second projection image and the pixel points of the feature object in the second feature object image; determine the minimum distance among the first distance, the second distance and the distance between the pixel points of the above projected image and the pixel points of the feature object in the second feature object image; take the relative pose parameter corresponding to the minimum distance as the pre-stored relative pose parameter and trigger the step of obtaining the first relative pose parameter and the second relative pose parameter whose differences from the pre-stored relative pose parameter are less than the preset threshold, until the relative pose parameter corresponding to the minimum distance is the pre-stored relative pose parameter itself; and calibrate the most recently determined pre-stored relative pose parameter as the relative pose parameter of the photographing device and the TOF ranging device.
  • in this embodiment, by constructing a search tree and running the optimization algorithm according to the projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device, the search efficiency can be accelerated and the effectiveness of obtaining the corrected relative pose parameters can be improved.
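  • a minimal sketch of the traversal-search idea described above, reusing the project_feature_points helper from the earlier sketch; the candidate grid, step size and the pluggable distance function are illustrative assumptions (for example, the XOR-based count sketched further below could be passed in):

```python
def traversal_search(points_3d, R0, t0, K_cam, second_feat, distance_fn, axis=0, step=0.001, n=5):
    """Perturb one translation component around the pre-stored value and keep the candidate
    whose projected image is closest to the second feature object image."""
    best_t, best_d = None, None
    for k in range(-n, n + 1):
        cand = t0.copy()
        cand[axis] += k * step                       # candidate close to the pre-stored parameter
        proj = project_feature_points(points_3d, R0, cand, K_cam, second_feat.shape)
        d = distance_fn(proj, second_feat)           # e.g. the XOR-based distance sketched below
        if best_d is None or d < best_d:
            best_t, best_d = cand, d
    return best_t, best_d                            # best_t equals t0 when the pre-stored value is already best
```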
  • the optimization object of the optimization operation is the pre-stored relative pose parameter, and the optimization goal of the optimization object is to minimize the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image.
  • the optimization goal of the optimization object is to minimize the number of non-zero pixels after the XOR operation between the projected image and the second feature object image.
  • for example, the movable platform may perform an XOR calculation on a first pixel in the projected image and a second pixel in the second feature object image to obtain a calculation result, where the first pixel is any pixel in the projected image and the second pixel is the pixel in the second feature object image corresponding to the first pixel; obtain the number of first pixels whose calculation result is 1; and determine the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image as the number of first pixels whose calculation result is 1.
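  • a minimal sketch of this XOR-based distance, assuming both inputs are binary images of the same size (the function name is an illustrative choice):

```python
import numpy as np

def xor_distance(projected, second_feature):
    """Number of pixels that differ between the two binary feature images; 0 means identical distributions."""
    diff = np.logical_xor(projected > 0, second_feature > 0)
    return int(np.count_nonzero(diff))
```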
  • in the embodiments of the present application, the movable platform can determine the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image by the number of non-zero pixels after the XOR operation between the projected image and the second feature object image, and can also run a sum of squared distances (SSD) algorithm, a sum of absolute distances (SAD) algorithm, a normalized cross-correlation (NCC) algorithm, a distance metric algorithm or a similarity algorithm on the projected image and the second feature object image to obtain the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image.
  • SSD: sum of squared distances
  • SAD: sum of absolute distances
  • NCC: normalized cross-correlation
  • in this embodiment, the smaller the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image, the closer the pixel distributions of the projected image and the second feature object image are;
  • the greater the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image, the greater the difference between the pixel distributions of the projected image and the second feature object image.
  • in one embodiment, the movable platform may run the optimization algorithm according to the projected image and the second feature object image in at least one of the translation direction and the rotation direction, so as to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
  • in one embodiment, the movable platform can run the optimization algorithm with a preset step size in the translation direction according to the projected image and the second feature object image, so as to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
  • in one embodiment, the movable platform may run the optimization algorithm with a preset step size in the rotation direction according to the projected image and the second feature object image, so as to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
  • in the embodiments of the present application, the movable platform runs the feature object detection algorithm on the first grayscale image to obtain the first feature object image, determines the position in space of the feature object in the first feature object image according to the first feature object image and the depth image, runs the feature object detection algorithm on the second grayscale image to obtain the second feature object image, and then runs the optimization algorithm according to the projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device, which can improve the accuracy of the relative pose parameters.
  • FIG. 5 is a structural diagram of a calibration device applied to a movable platform provided by an embodiment of the present application.
  • the calibration device 500 of the movable platform includes a memory 501 and a processor 502, and the movable platform is configured with a photographing device and a TOF ranging device, wherein the memory 501 stores program code, and the processor 502 calls the program code in the memory; when the program code is executed, the processor 502 performs the following operations:
  • acquire the depth image output by the TOF ranging device and the first grayscale image corresponding to the depth image;
  • acquire the image output by the photographing device and obtain the second grayscale image according to the image, or acquire the second grayscale image output by the photographing device;
  • correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, to obtain the corrected relative pose parameters.
  • the processor 502 is further configured to perform the following operation: storing the corrected relative pose parameters.
  • the processor 502 is further configured to perform the following operation: control the photographing device according to the corrected relative pose parameter.
  • optionally, when controlling the photographing device according to the corrected relative pose parameters, the processor 502 specifically performs the following operations:
  • obtain depth information from the depth image output by the TOF ranging device according to the corrected relative pose parameters;
  • the photographing device is controlled according to the depth information.
  • optionally, when controlling the photographing device according to the depth information, the processor 502 specifically performs the following operations:
  • One or more of focus, follow focus, zoom and sliding zoom of the photographing device are controlled according to the depth information.
  • optionally, the processor 502 is further configured to perform the following operation: determine a first target area in the shooting picture of the photographing device;
  • in this case, obtaining the depth information from the depth image output by the TOF ranging device includes: determining a second target area in the depth image according to the corrected relative pose parameters and the position of the first target area in the shooting picture;
  • the depth information is obtained from the second target area in the depth image.
  • the processor 502 specifically performs the following operations when determining the first target area in the shooting picture of the shooting device:
  • a user's target area selection operation is detected, and the first target area is determined in the photographing screen of the photographing device according to the detected operation.
  • optionally, the processor 502 is further configured to perform the following operation: detect whether a preset calibration start condition is met;
  • correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image includes:
  • when it is determined that the calibration start condition is met, correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image.
  • optionally, when detecting whether the preset calibration start condition is met, the processor 502 specifically performs the following operations:
  • detecting whether a start-up signal of the movable platform is obtained, and when the start-up signal is obtained, determining that the calibration start condition is met; or,
  • detecting whether a user's calibration start operation is obtained, and when the calibration start operation is obtained, determining that the calibration start condition is met.
  • optionally, when correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the processor 502 specifically performs the following operation:
  • correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device with a preset step size in the translation direction according to the depth image, the first grayscale image and the second grayscale image.
  • optionally, when correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the processor 502 specifically performs the following operation:
  • run an optimization algorithm according to the projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device, wherein the projected image is obtained by projecting the pixel points in the first feature object image onto the imaging surface of the photographing device according to the position of the feature object in space and the pre-stored relative pose parameters.
  • optionally, the optimization object of the optimization operation is the pre-stored relative pose parameters, and the optimization goal of the optimization object is to minimize the distance between the pixel points of the feature object in the projected image and the pixel points of the feature object in the second feature object image.
  • the optimization goal of the optimization object is to minimize the number of non-zero pixels after the XOR operation between the projected image and the second feature object image.
  • the feature objects include object edges.
  • the calibration device applied to the movable platform provided in this embodiment can execute the calibration method as shown in FIG. 1 or FIG. 2 provided in the foregoing embodiment, and the execution manner and beneficial effects are similar, which will not be repeated here.
  • An embodiment of the present application provides a movable platform, including a body, a power system, a photographing device, a TOF ranging device, and the aforementioned calibration device.
  • the operation of the calibration device of the movable platform is the same as or similar to that described above, and will not be repeated here.
  • a power system, mounted on the body, is used to provide power for the movable platform;
  • a photographing device, mounted on the body, is used to output an image or the second grayscale image;
  • a TOF ranging device, mounted on the body, is used to output the depth image.
  • when the movable platform is an unmanned aerial vehicle (UAV), its power system may include rotors, motors that drive the rotors to rotate, and electronic speed controllers.
  • the UAV can be a quad-rotor, hexa-rotor, octa-rotor or other multi-rotor UAV, in which case the UAV takes off and lands vertically; it can be understood that the UAV may also be a fixed-wing or hybrid-wing movable platform.
  • the movable platform further includes a display screen, the display screen is installed on the shooting device, and the display screen is used for displaying a shooting picture or detecting a user's operation of selecting a target area.
  • the movable platform further includes a communication device, the communication device being installed on the body, the communication device being used to obtain the user's corrective activation operation.
  • optionally, the movable platform includes at least one of the following: a handheld stabilization system for stabilizing the photographing device, or an unmanned aerial vehicle.
  • Embodiments of the present application further provide a computer storage medium, where computer program instructions are stored in the computer storage medium, and when the computer program instructions are executed by a processor, are used to execute the calibration method shown in FIG. 1 or FIG. 2 .

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

Provided are a calibration method, a device, a movable platform and a storage medium. The method is applied to a movable platform configured with a photographing device and a TOF ranging device, and includes: acquiring a depth image output by the TOF ranging device and a first grayscale image corresponding to the depth image (S101); acquiring an image output by the photographing device and obtaining a second grayscale image according to the image, or acquiring a second grayscale image output by the photographing device (S102); and correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, to obtain corrected relative pose parameters (S103), which can improve the accuracy of the relative pose parameters.

Description

标定方法、装置、可移动平台及存储介质 技术领域
本发明涉及视觉应用技术领域,尤其涉及标定方法、装置、可移动平台及存储介质。
背景技术
随着科学的进步与技术的发展,越来越多的智能设备使用了计算机视觉技术,并配有飞行时间(Time of flight,TOF)测距装置和拍摄装置,如无人飞行器、智能手机、虚拟现实(Virtual Reality,VR)头戴式显示设备、自动驾驶车辆等。
计算机视觉系统中,TOF测距装置和拍摄装置之间的相对位姿参数可以表示TOF测距装置和拍摄装置的空间关系,空间关系可以包括平移关系和/或旋转关系。在实际生产阶段,对智能设备的TOF测距装置和拍摄装置进行安装的过程中,由于存在安装误差,导致TOF测距装置和拍摄装置的相对位姿发生变化。在这种情况下,在智能设备出厂之前,可以对TOF测距装置和拍摄装置之间的相对位姿参数进行离线标定,以确保TOF测距装置获取到的深度图像和拍摄装置获取到的图像配准。但是,智能设备中的TOF测距装置和拍摄装置在出厂标定后,会由于热胀冷缩,碰撞震动,或者拍摄装置的镜头被反复拆装等原因,导致出厂标定的TOF测距装置和拍摄装置之间的相对位姿参数不准确。因此,如何提高TOF测距装置和拍摄装置之间的相对位姿参数的准确性,是目前亟需解决的技术问题。
发明内容
有鉴于此,本申请实施例提供了一种标定方法、装置、可移动平台及存储介质,可自动校正TOF测距装置和拍摄装置之间的相对位姿参数,以提高相对位姿参数的准确性。
本申请实施例第一方面提供了一种标定方法,该方法应用于可移动平台,所述可移动平台配置有拍摄装置和飞行时间(Time of flight,TOF)测距装置, 所述方法包括:
获取所述TOF测距装置输出的深度图像,以及与所述深度图像对应的第一灰度图像;
获取所述拍摄装置输出的图像,根据所述图像获取第二灰度图像,或者获取所述拍摄装置输出的第二灰度图像;
根据所述深度图像、第一灰度图像和第二灰度图像,对预存的所述TOF测距装置和拍摄装置之间的相对位姿参数进行校正,以获取校正后的相对位姿参数。
本申请实施例第二方面提供了一种标定装置,包括存储器和处理器,所述标定装置应用于可移动平台,所述可移动平台配置有拍摄装置和TOF测距装置,其中,
所述存储器,用于存储有程序代码;
所述处理器,调用存储器中的程序代码,当程序代码被执行时,用于执行如下操作:
获取所述TOF测距装置输出的深度图像,以及与所述深度图像对应的第一灰度图像;
获取所述拍摄装置输出的图像,根据所述图像获取第二灰度图像,或者获取所述拍摄装置输出的第二灰度图像;
根据所述深度图像、第一灰度图像和第二灰度图像,对预存的所述TOF测距装置和拍摄装置之间的相对位姿参数进行校正,以获取校正后的相对位姿参数。
本申请实施例第三方面提供了一种可移动平台,该可移动平台包括:
拍摄装置;
TOF测距装置;
以及如第二方面所述的标定装置。
本申请实施例第四方面提供了一种计算机存储介质,所述计算机存储介质中存储有计算机程序指令,所述计算机程序指令被处理器执行时,用于执行如第一方面所述的标定方法。
在本申请实施例中,可移动平台通过获取TOF测距装置输出的深度图像,以及与深度图像对应的第一灰度图像,并获取拍摄装置输出的图像,根据图像 获取第二灰度图像,或者获取拍摄装置输出的第二灰度图像,然后根据深度图像、第一灰度图像和第二灰度图像,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,以获取校正后的相对位姿参数,可提高相对位姿参数的准确性。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例的一种标定方法的流程示意图;
图2是本申请实施例的一种相对位姿参数校正方法的流程示意图;
图3是本发明实施例的一种投影图像的示意图;
图4是本申请实施例的一种第二特征对象图像的示意图;
图5是本申请实施例的一种可移动平台的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
除非另有定义,本文所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本文中在本发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本发明。本文所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。
下面结合附图,对本发明的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
本申请实施例提供的标定方法可以应用在可移动平台中。可移动平台可以包括:TOF测距装置,拍摄装置等。其中,本发明实施例中提供的可移动平 台可以包括用于对拍摄装置增稳的手持增稳系统,无人飞行器,智能手机,VR头戴式显示设备,自动驾驶车辆等带有计算机视觉模块的智能设备。TOF测距装置可以为3D-TOF测距装置,激光雷达,高分辨率毫米波雷达等,TOF测距装置用于输出深度图像。拍摄装置可以为相机,手机,摄像头等,拍摄装置用于输出图像,例如对可移动平台周围环境进行拍摄得到的图像,该图像可以为灰度图像、红外图像或者彩色图像。
其中,TOF测距装置和拍摄装置之间的相对位姿参数可以用于指示TOF测距装置和拍摄装置的空间关系,所述空间关系可以包括平移关系和旋转关系。
其中,TOF测距装置可以包括发射装置和接收装置,发射装置可以发射光信号,光信号经过环境中的目标对象反射,接收装置可以接收发射的光信号,根据接收到的光信号生成深度图像和与深度图像对应的灰度图像。其中发射装置可以为发光二极管(Light Emitting Diode,简称LED)或激光二极管(Laser Diode,简称LD),其中,发射装置由TOF测距装置的驱动模块来驱动,驱动模块由TOF测距装置的处理模块控制,处理模块控制驱动模块输出驱动信号来驱动发射装置,其中驱动模块输出的驱动信号的频率、占空比等都可以由处理模块控制,利用驱动信号驱动发射装置,发射装置发出经过调制以后的光信号,光信号打到目标对象上。目标对象可以是用户、建筑、汽车等等。其中,接收装置可以包括光电二极管、雪崩光电二极管、电荷耦合元件。所述接收装置可以是一种光电二极管、雪崩光电二极管、电荷耦合元件的接收阵列。接收装置将光信号转化成电信号,TOF测距装置信号处理模块对接收装置输出的电信号进行处理,例如放大、滤波等,经过信号处理模块处理的信号输入到处理模块中,处理模块可以将电信号转换成深度图像和灰度图像。
请参见图1,是本申请实施例提出的一种标定方法的流程示意图,如图1所示,该方法可包括:
S101,获取TOF测距装置输出的深度图像,以及与深度图像对应的第一灰度图像。
TOF测距装置可以发射光信号至可移动平台周围环境,得到深度图像和深度图像对应的第一灰度图像。然后,TOF测距装置可以将该深度图像和第 一灰度图像发送至可移动平台。其中,通过深度图像可以获取深度图像中像素点在空间中的深度值。通过第一灰度图像可以获取第一灰度图像中像素点的灰度值。深度图像中的像素点和第一灰度图像中的像素点一一对应,例如深度图像中位于第一行第一列的像素点和第一灰度图像中位于第一行第一列的像素点在空间中为同一位置点。
S102,获取拍摄装置输出的图像,根据图像获取第二灰度图像,或者获取拍摄装置输出的第二灰度图像。
在一个实施例中,假设拍摄装置输出的图像不是灰度图像,例如彩色图像,那么可移动平台可以获取拍摄装置输出的图像,然后对该图像进行灰度处理,得到该图像对应的灰度图像,该灰度图像为第二灰度图像。
在另一个实施例中,假设拍摄装置输出的图像是灰度图像,那么可移动平台可以直接获取拍摄装置输出的灰度图像,该灰度图像为第二灰度图像。
示例性的,本申请实施例并不限定步骤S101和步骤S102的执行顺序,例如可移动平台可以同时获取TOF测距装置输出的深度图像,以及拍摄装置输出的图像或者第二灰度图像,又如可移动平台在预设时间段内获取TOF测距装置输出的深度图像,以及拍摄装置输出的图像或者第二灰度图像,具体不受本申请实施例的限定。
示例性的,TOF测距装置和拍摄装置可以是对同一对象进行采集得到的图像,例如TOF测距装置对指定对象进行图像采集,得到指定对象的深度图像,拍摄装置对该指定对象进行图像采集,得到指定对象的图像或者第二灰度图像。
S103,根据深度图像、第一灰度图像和第二灰度图像,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,以获取校正后的相对位姿参数。
在一个实施例中,可移动平台可以根据深度图像、第一灰度图像和第二灰度图像在平移方向和旋转方向,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正。
在该实施例中,可移动平台可以对TOF测距装置和拍摄装置之间的旋转方向(即三个轴的旋转参数)和/或平移方向(即三个轴的平移参数)这六个参数进行校正,即根据深度图像、第一灰度图像和第二灰度图像在平移方向和 /或旋转方向,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,以确保校正后的相对位姿参数有较高的准确度,进一步地可以保证各个轴的旋转方向/或各个轴的平移方向,准确度均达到了最佳。
在一个实施例中,可移动平台可以根据深度图像、第一灰度图像和第二灰度图像在至少一个目标平移方向和/或至少一个目标旋转方向,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正。目标平移方向可以为X轴平移方向、Y轴平移方向或Z轴平移方向。目标旋转方向可以为X轴旋转方向、Y轴旋转方向或Z轴旋转方向。所述至少一个目标平移方向和/或至少一个目标旋转方向可以检测用户的校正方向选择操作来确定的。例如所述至少一个目标平移方向和/或至少一个目标旋转方向是根据检测用户的平移方向选择操作和/或旋转方向选择操作确定的。所述可移动平台可以配置交互装置,所述交互装置可以检测用户的校正方向选择操作,根据所述检测到的校正方向选择操作确定所述至少一个目标平移方向和/或至少一个目标旋转方向。再例如,控制终端可以包括交互装置,交互装置检测用户的校正方向选择操作,根据所述检测到的校正方向选择操作确定所述至少一个目标平移方向和/或至少一个目标旋转方向,所述控制终端将用于指示所述至少一个目标平移方向和/或至少一个目标旋转方向的信息发送给可移动平台。
在该实施例中,可移动平台可以仅对误差较大的参数进行校正,例如对TOF测距装置和拍摄装置在任意轴的旋转参数,和/或TOF测距装置和拍摄装置在任意轴的平移参数进行校正。该实施例仅对误差较大的参数进行校正,在确保提高TOF测距装置和拍摄装置的相对位姿参数的准确性的情况下,可节省系统资源,例如CPU资源,输入/输出(Input/Output,I/O)资源等。
在一个实施例中,可移动平台可以根据深度图像、第一灰度图像和第二灰度图像在平移方向以预设的步长,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正。同理,可移动平台可以根据深度图像、第一灰度图像和第二灰度图像在旋转方向以预设的步长,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正。
在该实施例中,可移动平台按照预设的步长,根据深度图像、第一灰度图像和第二灰度图像在平移方向和/或旋转方向,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,可在确保提高TOF测距装置和拍摄装置 的相对位姿参数的准确性的情况下,节省系统资源。
在一个实施例中,可移动平移在根据深度图像、第一灰度图像和第二灰度图像,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正之前,可以检测是否满足预设的校正启动条件,当确定满足校正启动条件时,触发根据深度图像、第一灰度图像和第二灰度图像,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正。相对实时对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,本申请实施例只有在满足校正启动条件时,才对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,可节省系统资源。
其中,可移动平台检测是否满足预设的校正启动条件的方式可以包括如下两种:
一、可移动平台检测是否获取到可移动平台的开机信号,当获取到开机信号时,确定满足校正启动条件,否则,不满足校正启动条件。
在该实施例中,可移动平台无需实时对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,而是在每次可移动平台开机时,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,以便可移动平台在接下来的时间可以使用校正后的相对位姿参数,例如根据校正后的相对位姿参数控制拍摄装置。
二、可移动平台检测是否获取到用户的校正启动操作,当获取到校正启动操作时,确定满足校正启动条件,否则,不满足校正启动条件。
在该实施例中,可移动平台无需实时对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,而是在获取到用户的校正启动操作时,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正。示例性的,可移动平台可以获取拍摄装置在历史时刻输出的至少一个历史图像,并对至少一个历史图像进行显示,用户通过观察上述至少一个历史图像,了解到TOF测距装置和拍摄装置的相对位姿参数不准确,那么用户可以向可移动平台提交校正启动操作,可移动平台检测到校正启动操作时,确定满足校正启动条件。
在一个实施例中,可移动平台在获取到校正后的相对位姿参数之后,可以对校正后的相对位姿参数进行存储。例如,可移动平台可以在本地存储校正后的相对位姿参数,或者在云端存储校正后的相对位姿参数,或者将校正后的相 对位姿参数存储至其他设备中,例如可移动平台的控制设备,控制设备可以为地面站、遥控器或者手机等。
在一个实施例中,可移动平台在获取到校正后的相对位姿参数之后,可以根据校正后的相对位姿参数控制拍摄装置。
示例性的,可移动平台根据校正后的相对位姿参数控制拍摄装置的方式可以为:可移动平台根据校正后的相对位姿参数,从TOF测距装置输出的深度图像中获取深度信息,根据深度信息控制拍摄装置。例如,可移动平台可以根据深度信息控制拍摄装置的对焦、跟焦、变焦和滑动变焦中的一种或多种。又如,可移动平台可以根据深度信息控制拍摄装置进行目标跟踪、视觉定位或者人脸识别等。
在一个实施例中,可移动平台还可以在拍摄装置的拍摄画面中确定第一目标区域,然后根据校正后的相对位姿参数和第一目标区域在拍摄画面中的位置,在深度图像中确定第二目标区域,从深度图像中的第二目标区域中获取深度信息。例如,在人脸识别场景中,可移动平台需获取人脸的深度信息,那么可移动平台可以在拍摄装置的拍摄画面中确定包含人脸的第一目标区域,然后根据校正后的相对位姿参数和第一目标区域在拍摄画面中的位置,在深度图像中确定第二目标区域,从深度图像中的第二目标区域中获取深度信息,该深度信息即人脸的深度信息。进一步地,可以根据所述人脸的深度信息控制拍摄拍摄装置的对焦、跟焦、变焦和滑动变焦中的一种或多种。通过对预存的相对位姿参数进行校正,可以从深度图像中准确地确定人脸对应的深度信息。
在一个实施例中,可移动平台在拍摄装置的拍摄画面中确定第一目标区域的方式可以包括如下一种或多种:
一、可移动平台可以根据图像跟踪算法确定周围环境中目标对象在拍摄画面中的第一目标区域图像。
可移动平台可以运行图像跟踪算法来确定,目标对象在拍摄画面中的第一目标区域图像。其中,所述图像跟踪算法可以为KLT跟踪算法。具体地,在可移动平台获取到拍摄画面之后,可在获取的拍摄画面确定与历史拍摄画面中目标对象的区域图像最相似的图像区域,可以将该区域图像确定为周围环境中目标对象在拍摄画面中的第一目标区域图像。所述历史拍摄画面可以是所述获 取的拍摄画面的上一帧拍摄画面。目标对象可以是由用户选中的。
二、可移动平台可以检测用户的目标区域选择操作,根据检测到的所述操作在所述拍摄装置的拍摄画面中确定所述第一目标区域。
举例来说,如果用户需要对拍摄装置的拍摄画面中的一棵树进行对焦,那么用户可以通过点击拍摄画面中需要对焦的树所在的区域的方式提交目标区域选择操作。可移动平台根据检测到的用户的目标区域选择操作在拍摄装置的拍摄画面中确定第一目标区域,即拍摄画面中需要对焦的树所在的区域。然后,可移动平台可以根据校正后的相对位姿参数和第一目标区域在拍摄画面中的位置,在深度图像中确定第二目标区域,从深度图像中的第二目标区域中获取深度信息,即需要对焦的树的深度信息。进而可移动平台可以根据该树的深度信息控制拍摄装置对该树进行对焦。
三、可移动平台可以将拍摄装置的拍摄画面发送给控制终端,然后接收控制终端发送的区域指示信息,根据区域指示信息在拍摄装置的拍摄画面中确定第一目标区域。
其中,控制终端接收到可移动平台发送的拍摄装置的拍摄画面之后,可以在控制终端的显示屏幕中显示该拍摄画面,然后可以检测用户的用于指示所述控制终端的运动的运动数据,根据所述运动数据生成区域指示信息,然后控制终端将该区域指示信息发送给可移动平台,进而可移动平台根据区域指示信息确定第一目标区域;或者,控制终端接收到可移动平台发送的拍摄装置的拍摄画面之后,可以在控制终端的显示屏幕中显示该拍摄画面,然后可以获取用户的用于指示佩戴控制终端的用户的眼球的转动的信息,根据所述信息生成区域指示信息,然后控制终端将该区域指示信息发送给可移动平台,进而可移动平台根据区域指示信息确定第一目标区域;或者,控制终端接收到可移动平台发送的拍摄装置的拍摄画面之后,可以在控制终端的显示屏幕中显示该拍摄画面,然后可以获取根据用户在显示屏上的点击或者框选操作(用户可以通过点击或者框选的方式选中图像中感兴趣的对象),根据所述操作生成区域指示信息,然后控制终端将该区域指示信息发送给可移动平台,进而可移动平台根据区域指示信息确定第一目标区域。
所述指示所述控制终端的运动的运动数据可以佩戴控制终端的用户的头部转动数据等,例如,所述控制终端可以为视频眼镜,用户通过佩戴视频眼镜, 使得所述视频眼镜可以通过内置的运动传感器检测用户的头部转动以生成所述头部转动数据。或者,控制终端可以检测佩戴所述控制终端的用户的眼球的转动的信息,所述佩戴的控制终端例如可以是视频眼镜等。在本申请实施例中,可移动平台可以根据TOF测距装置输出的深度图像,深度图像对应的第一灰度图像,以及拍摄装置输出的图像对应的第二灰度图像或拍摄装置输出的第二灰度图像,对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正,以获取校正后的相对位姿参数,可提高TOF测距装置和拍摄装置之间的相对位姿参数的准确性。
在一个实施例中,基于上述实施例的描述,为了对可移动平台对预存的TOF测距装置和拍摄装置之间的相对位姿参数进行校正的方法进行具体描述,请参见图2,是本申请实施例提出的一种相对位姿参数校正方法,该方法包括:
S201,针对第一灰度图像运行特征对象检测算法,以获取第一特征对象图像。
可移动平台在获取TOF测距装置输出的深度图像,以及与深度图像对应的第一灰度图像之后,可以针对第一灰度图像运行特征对象检测算法,以获取第一特征对象图像。
在一个实施例中,特征对象可以包括物体边缘,特征对象检测算法可以为边缘提取算法,基于此,可移动平台可以针对第一灰度图像运行边缘提取算法,以获取第一特征对象图像,即第一灰度图像的边缘图像。
In an embodiment, assuming that the TOF ranging device captures a specified object in the surrounding environment of the movable platform to obtain the depth image, the movable platform may run a feature object detection algorithm on the first grayscale image to acquire a first feature object image containing the specified object. For example, if the specified object is a building, the movable platform may run a building detection algorithm on the first grayscale image to acquire a first feature object image containing the building. As another example, if the specified object is a person, the movable platform may run a human body detection algorithm on the first grayscale image to acquire a first feature object image containing the human body, for example, a face, hands or feet.
Taking the schematic diagram of the projected image shown in FIG. 3 as an example, the depth image output by the TOF ranging device may be as shown in the lower-left image of FIG. 3, and the first grayscale image output by the TOF ranging device may be as shown in the upper-left image of FIG. 3. After acquiring the first grayscale image, the movable platform may run a feature object detection algorithm on the first grayscale image to acquire the first feature object image, which may be as shown in the upper-right image of FIG. 3.
In an embodiment, before running the feature object detection algorithm on the first grayscale image to acquire the first feature object image, the movable platform may preprocess the first grayscale image to obtain a processed first grayscale image. The movable platform may then run the feature object detection algorithm on the processed first grayscale image to acquire the first feature object image.
The movable platform may preprocess the first grayscale image by performing histogram equalization and/or Gaussian smoothing on the first grayscale image.
In this embodiment, by preprocessing the first grayscale image, the movable platform can give the processed first grayscale image a reasonable contrast and clearly display objects within a preset distance range, so as to improve the image quality of the first feature object image.
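A minimal sketch of the preprocessing described above, assuming the first grayscale image is available as an 8-bit single-channel array; the kernel size and sigma are illustrative values only.

```python
import cv2

def preprocess_first_gray(gray, ksize=5, sigma=1.0):
    """Histogram-equalize and Gaussian-smooth the first grayscale image."""
    equalized = cv2.equalizeHist(gray)                         # spread the contrast
    smoothed = cv2.GaussianBlur(equalized, (ksize, ksize), sigma)
    return smoothed
```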
S202: determining, according to the first feature object image and the depth image, the position in space of the feature object in the first feature object image.
In an embodiment, the movable platform may determine the position in space of the feature object in the first feature object image, that is, the three-dimensional coordinates of the feature object in space, according to the intrinsic parameters of the TOF ranging device, the first feature object image and the depth image.
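As an illustrative sketch of step S202, the edge pixels of the first feature object image can be lifted into 3D coordinates of the TOF frame using the depth image and a TOF intrinsic matrix K_tof; the interfaces below are assumptions made for this example.

```python
import numpy as np

def edge_points_3d(edge_map, depth, K_tof):
    """Lift the edge pixels of the first feature object image into 3D TOF coordinates.

    edge_map -- HxW binary image, 1 where the first grayscale image has an edge
    depth    -- HxW depth image aligned with the first grayscale image
    K_tof    -- 3x3 intrinsic matrix of the TOF ranging device
    """
    vs, us = np.nonzero(edge_map)
    z = depth[vs, us]
    keep = z > 0                                    # drop edge pixels without valid depth
    us, vs, z = us[keep], vs[keep], z[keep]
    pix = np.stack([us, vs, np.ones_like(us)]).astype(float)
    return (np.linalg.inv(K_tof) @ pix) * z         # 3xN points in the TOF frame
```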
S203: running a feature object detection algorithm on the second grayscale image to acquire a second feature object image.
In an embodiment, after acquiring the second grayscale image output by the photographing device, the movable platform may run a feature object detection algorithm on the second grayscale image to acquire the second feature object image. Alternatively, after acquiring the image output by the photographing device and obtaining the second grayscale image from the image, the movable platform may run a feature object detection algorithm on the second grayscale image to acquire the second feature object image. The second feature object image may be as shown in FIG. 4.
In an embodiment, the feature object may include an object edge, and the feature object detection algorithm may be an edge extraction algorithm. On this basis, the movable platform may run an edge extraction algorithm on the second grayscale image to acquire the second feature object image, that is, the edge image of the second grayscale image.
In an embodiment, assuming that the photographing device captures a specified object in the surrounding environment of the movable platform to obtain the second grayscale image, the movable platform may run a feature object detection algorithm on the second grayscale image to acquire a second feature object image containing the specified object. For example, if the specified object is a building, the movable platform may run a building detection algorithm on the second grayscale image to acquire a second feature object image containing the building. As another example, if the specified object is a person, the movable platform may run a human body detection algorithm on the second grayscale image to acquire a second feature object image containing the human body, for example, a face, hands or feet.
S204: running an optimization algorithm according to a projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
The projected image is obtained by projecting the pixels of the first feature object image onto the imaging plane of the photographing device according to the position in space of the feature object and the pre-stored relative pose parameters. The projected image may be as shown in the lower-right image of FIG. 3.
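The projection described above could, for example, be sketched as follows, assuming a pinhole model for the photographing device; the candidate rotation R, translation t and intrinsic matrix K_cam are inputs assumed for illustration and are not defined by the disclosure.

```python
import numpy as np

def project_to_camera(pts_tof, R, t, K_cam, shape):
    """Project 3D edge points onto the camera's imaging plane to build the projected image.

    pts_tof -- 3xN points in the TOF frame (from the first feature object image)
    R, t    -- candidate relative pose: rotation (3x3) and translation (3,)
    K_cam   -- 3x3 intrinsic matrix of the photographing device
    shape   -- (H, W) of the second feature object image
    """
    pts_cam = R @ pts_tof + t.reshape(3, 1)
    pts_cam = pts_cam[:, pts_cam[2] > 0]            # keep points in front of the camera
    u = np.round(K_cam[0, 0] * pts_cam[0] / pts_cam[2] + K_cam[0, 2]).astype(int)
    v = np.round(K_cam[1, 1] * pts_cam[1] / pts_cam[2] + K_cam[1, 2]).astype(int)
    h, w = shape
    proj = np.zeros(shape, dtype=np.uint8)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    proj[v[ok], u[ok]] = 1                          # binary projected image
    return proj
```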
In this embodiment, if the pre-stored relative pose parameters are accurate, the pixel distributions of the projected image and the second feature object image should be consistent, that is, the distance between the pixels of the feature object in the projected image and the pixels of the feature object in the second feature object image is 0. If the pixel distributions of the projected image and the second feature object image are inconsistent, that is, this distance is greater than 0, this indicates that the pre-stored relative pose parameters are inaccurate, and the movable platform may run an optimization algorithm according to the projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
In this embodiment, the movable platform may run the optimization algorithm according to the projected image and the second feature object image by means of a traversal search or by constructing a search tree, so as to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
Taking the traversal search as an example, the movable platform may acquire at least one relative pose parameter whose difference from the pre-stored relative pose parameters is smaller than a preset threshold; project the pixels of the first feature object image onto the imaging plane of the photographing device according to a target relative pose parameter and the position in space of the feature object to obtain a target projected image, the target relative pose parameter being any one of the at least one relative pose parameter; acquire a target distance between the pixels of the target projected image and the pixels of the feature object in the second feature object image; determine the minimum distance among the target distances and the distance between the pixels of the above projected image and the pixels of the feature object in the second feature object image; and calibrate the relative pose parameter corresponding to the minimum distance as the relative pose parameter of the photographing device and the TOF ranging device, that is, determine the relative pose parameter corresponding to the minimum distance as the corrected relative pose parameter.
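A hedged sketch of the traversal search follows; it exhaustively scores every candidate pose in a small neighbourhood of the pre-stored pose. The neighbourhood radius and step are illustrative, and in practice each component would typically be searched separately (or a search tree used, as described next) to keep the number of candidates manageable.

```python
import numpy as np
from itertools import product

def traverse_search(pose, residual, radius=0.002, step=0.001):
    """Exhaustively test poses whose difference from the pre-stored pose is below a threshold.

    pose     -- 6-vector pre-stored relative pose [tx, ty, tz, rx, ry, rz]
    residual -- callable returning the pixel distance between the projected image of a
                candidate pose and the second feature object image
    """
    offsets = np.arange(-radius, radius + step / 2, step)
    best, best_err = pose.copy(), residual(pose)
    for delta in product(offsets, repeat=len(pose)):      # every candidate in the neighbourhood
        cand = pose + np.asarray(delta)
        err = residual(cand)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```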
The search tree constructed by the movable platform may be an N-ary search tree, where N is a natural number greater than or equal to two. Taking a binary search tree as an example, the movable platform may acquire a first relative pose parameter and a second relative pose parameter whose differences from the pre-stored relative pose parameters are smaller than a preset threshold, the first relative pose parameter being smaller than the pre-stored relative pose parameter and the second relative pose parameter being greater than the pre-stored relative pose parameter; project the pixels of the first feature object image onto the imaging plane of the photographing device according to the first relative pose parameter and the position in space of the feature object to obtain a first projected image; acquire a first distance between the pixels of the first projected image and the pixels of the feature object in the second feature object image; project the pixels of the first feature object image onto the imaging plane of the photographing device according to the second relative pose parameter and the position in space of the feature object to obtain a second projected image; acquire a second distance between the pixels of the second projected image and the pixels of the feature object in the second feature object image; determine the minimum distance among the first distance, the second distance and the distance between the pixels of the above projected image and the pixels of the feature object in the second feature object image; take the relative pose parameter corresponding to the minimum distance as the pre-stored relative pose parameter, and trigger the step of acquiring a first relative pose parameter and a second relative pose parameter whose differences from the pre-stored relative pose parameters are smaller than the preset threshold, until the relative pose parameter corresponding to the minimum distance is the pre-stored relative pose parameter; and calibrate the most recently determined pre-stored relative pose parameter as the relative pose parameter of the photographing device and the TOF ranging device.
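The tree-based procedure above can be approximated, for a single pose component, by the simplified bisection-style sketch below; this is an interpretation for illustration only and omits the tree bookkeeping of the disclosure. The step delta and iteration cap are assumptions.

```python
def bisect_refine(pose, residual, axis, delta=0.001, max_iters=50):
    """Bracket-and-compare refinement of one pose component, stopping when the centre wins.

    pose     -- mutable 6-vector (e.g. a NumPy array) holding the pre-stored relative pose
    residual -- callable returning the pixel distance for a candidate pose
    axis     -- index of the component being refined
    """
    cur = pose.copy()
    cur_err = residual(cur)
    for _ in range(max_iters):
        lo, hi = cur.copy(), cur.copy()
        lo[axis] -= delta
        hi[axis] += delta
        candidates = [(cur_err, cur), (residual(lo), lo), (residual(hi), hi)]
        best_err, best = min(candidates, key=lambda c: c[0])
        if best is cur:                  # the current centre already wins: stop searching
            return cur, cur_err
        cur, cur_err = best, best_err    # otherwise recentre on the better candidate
    return cur, cur_err
```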
In this embodiment, running the optimization algorithm according to the projected image and the second feature object image by constructing a search tree, so as to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device, can speed up the search and improve the effectiveness of acquiring the corrected relative pose parameters.
The optimization object of the optimization operation is the pre-stored relative pose parameters, and the optimization goal of the optimization object is to minimize the distance between the pixels of the feature object in the projected image and the pixels of the feature object in the second feature object image.
The optimization goal of the optimization object may be expressed as minimizing the number of non-zero pixels after an exclusive-OR (XOR) operation between the projected image and the second feature object image. For example, the movable platform may perform an XOR calculation on a first pixel in the projected image and a second pixel in the second feature object image to obtain a calculation result, the first pixel being any pixel in the projected image and the second pixel being the pixel in the second feature object image corresponding to the first pixel; acquire the number of first pixels whose calculation result is 1; and determine the distance between the pixels of the feature object in the projected image and the pixels of the feature object in the second feature object image to be the number of first pixels whose calculation result is 1.
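A minimal sketch of the XOR-based distance, assuming both inputs are binary edge maps of equal size:

```python
import numpy as np

def xor_distance(proj, second_feature):
    """Count non-zero pixels after XOR-ing the projected image with the second
    feature object image; a smaller count means the two edge maps agree better."""
    assert proj.shape == second_feature.shape
    mismatch = np.logical_xor(proj > 0, second_feature > 0)
    return int(np.count_nonzero(mismatch))
```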
In the embodiments of the present application, the movable platform may determine the distance between the pixels of the feature object in the projected image and the pixels of the feature object in the second feature object image by the number of non-zero pixels after the XOR operation between the projected image and the second feature object image, or may run a sum of squared distances (SSD) algorithm, a sum of absolute distances (SAD) algorithm, a normalized cross-correlation (NCC) algorithm, a distance metric algorithm or a similarity algorithm on the projected image and the second feature object image to acquire the distance between the pixels of the feature object in the projected image and the pixels of the feature object in the second feature object image.
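Illustrative forms of the alternative SSD, SAD and NCC measures mentioned above, written as global (rather than windowed) comparisons in plain NumPy; this is an assumption made for the example, not the disclosed implementation.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized feature images."""
    d = a.astype(float) - b.astype(float)
    return float(np.sum(d * d))

def sad(a, b):
    """Sum of absolute differences."""
    return float(np.sum(np.abs(a.astype(float) - b.astype(float))))

def ncc(a, b):
    """Normalized cross-correlation; values closer to 1 indicate a better match."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```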
In this embodiment, the smaller the distance between the pixels of the feature object in the projected image and the pixels of the feature object in the second feature object image, the closer the pixel distributions of the projected image and the second feature object image; the larger this distance, the greater the difference between the two pixel distributions.
In an embodiment, the movable platform may run the optimization algorithm according to the projected image and the second feature object image in at least one of the translational direction and the rotational direction, to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
In an embodiment, the movable platform may run the optimization algorithm according to the projected image and the second feature object image in the translational direction with a preset step size, to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
In an embodiment, the movable platform may run the optimization algorithm according to the projected image and the second feature object image in the rotational direction with a preset step size, to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device.
In the embodiments of the present application, the movable platform runs a feature object detection algorithm on the first grayscale image to acquire the first feature object image, determines the position in space of the feature object in the first feature object image according to the first feature object image and the depth image, runs a feature object detection algorithm on the second grayscale image to acquire the second feature object image, and then runs an optimization algorithm according to the projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device, which can improve the accuracy of the relative pose parameters.
An embodiment of the present application provides a calibration device applied to a movable platform. FIG. 5 is a structural diagram of the calibration device applied to a movable platform provided by an embodiment of the present application. As shown in FIG. 5, the calibration device 500 applied to the movable platform includes a memory 501 and a processor 502, and the movable platform is configured with a photographing device and a TOF ranging device, where the memory 501 stores program code, the processor 502 calls the program code in the memory, and, when the program code is executed, the processor 502 performs the following operations:
acquiring the depth image output by the TOF ranging device, and the first grayscale image corresponding to the depth image;
acquiring the image output by the photographing device and obtaining the second grayscale image according to the image, or acquiring the second grayscale image output by the photographing device;
correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, to acquire the corrected relative pose parameters.
In an embodiment, the processor 502 is further configured to perform the following operation: storing the corrected relative pose parameters.
In an embodiment, the processor 502 is further configured to perform the following operation: controlling the photographing device according to the corrected relative pose parameters.
In an embodiment, when controlling the photographing device according to the corrected relative pose parameters, the processor 502 specifically performs the following operations:
acquiring depth information from the depth image output by the TOF ranging device according to the corrected relative pose parameters;
controlling the photographing device according to the depth information.
In an embodiment, when controlling the photographing device according to the depth information, the processor 502 specifically performs the following operation:
controlling one or more of focusing, focus following, zooming and dolly zooming of the photographing device according to the depth information.
In an embodiment, the processor 502 is further configured to perform the following operations:
determining a first target region in the picture captured by the photographing device;
the acquiring depth information from the depth image output by the TOF ranging device according to the corrected relative pose parameters includes:
determining a second target region in the depth image according to the corrected relative pose parameters and the position of the first target region in the captured picture;
acquiring the depth information from the second target region in the depth image.
In an embodiment, when determining the first target region in the picture captured by the photographing device, the processor 502 specifically performs the following operation:
detecting a target region selection operation of a user, and determining the first target region in the picture captured by the photographing device according to the detected operation.
In an embodiment, the processor 502 is further configured to perform the following operations:
detecting whether a preset correction start condition is satisfied;
the correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image includes:
when it is determined that the correction start condition is satisfied, correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image.
In an embodiment, when detecting whether the preset correction start condition is satisfied, the processor 502 specifically performs the following operations:
detecting whether a power-on signal of the movable platform is acquired;
when the power-on signal is acquired, determining that the correction start condition is satisfied; otherwise, the correction start condition is not satisfied; or,
detecting whether a correction start operation of a user is acquired;
when the correction start operation is acquired, determining that the correction start condition is satisfied; otherwise, the correction start condition is not satisfied.
In an embodiment, when correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the processor 502 specifically performs the following operation:
correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device in at least one of the translational direction and the rotational direction according to the depth image, the first grayscale image and the second grayscale image.
In an embodiment, when correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the processor 502 specifically performs the following operation:
correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device in the translational direction with a preset step size according to the depth image, the first grayscale image and the second grayscale image.
In an embodiment, when correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the processor 502 specifically performs the following operations:
running a feature object detection algorithm on the first grayscale image to acquire a first feature object image;
determining, according to the first feature object image and the depth image, the position in space of the feature object in the first feature object image;
running a feature object detection algorithm on the second grayscale image to acquire a second feature object image;
running an optimization algorithm according to a projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device, where the projected image is obtained by projecting the pixels of the first feature object image onto the imaging plane of the photographing device according to the position in space of the feature object and the pre-stored relative pose parameters.
In an embodiment, the optimization object of the optimization operation is the pre-stored relative pose parameters, and the optimization goal of the optimization object is to minimize the distance between the pixels of the feature object in the projected image and the pixels of the feature object in the second feature object image.
In an embodiment, the optimization goal of the optimization object is to minimize the number of non-zero pixels after the XOR operation between the projected image and the second feature object image.
In an embodiment, the feature object includes an object edge.
The calibration device applied to a movable platform provided by this embodiment can perform the calibration method shown in FIG. 1 or FIG. 2 provided by the foregoing embodiments, and the execution manner and beneficial effects are similar and are not repeated here.
An embodiment of the present application provides a movable platform, including a body, a power system, a photographing device, a TOF ranging device, and the calibration device as described above. The calibration device of the movable platform works in the same or a similar manner as described above, and is not repeated here. The power system is mounted on the body and configured to provide power for the movable platform. The photographing device is mounted on the body and configured to output an image or the second grayscale image. The TOF ranging device is mounted on the body and configured to output a depth image. When the movable platform is an unmanned aerial vehicle (UAV), its power system may include rotors, motors driving the rotors to rotate, and the electronic speed controllers of the motors. The UAV may be a quadrotor, hexarotor, octorotor or other multi-rotor UAV, in which case the UAV takes off and lands vertically to perform its work. It can be understood that the UAV may also be a fixed-wing movable platform or a hybrid-wing movable platform.
In an embodiment, the movable platform further includes a display screen, the display screen is mounted on the photographing device, and the display screen is configured to display the captured picture or to detect a target region selection operation of a user.
In an embodiment, the movable platform further includes a communication device, the communication device is mounted on the body, and the communication device is configured to acquire a correction start operation of a user.
In an embodiment, the movable platform includes at least one of the following: a handheld stabilization system for stabilizing the photographing device, or a UAV.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores computer program instructions, and the computer program instructions, when executed by a processor, are used to perform the calibration method shown in FIG. 1 or FIG. 2.
It can be understood that the above disclosure is merely a part of the embodiments of the present application and certainly cannot be used to limit the scope of rights of the present invention. Those of ordinary skill in the art can understand all or part of the processes for implementing the above embodiments, and equivalent changes made according to the claims of the present invention still fall within the scope covered by the invention.

Claims (33)

  1. A calibration method, applied to a movable platform, the movable platform including a time of flight (TOF) ranging device and a photographing device, wherein the method comprises:
    acquiring a depth image output by the TOF ranging device and a first grayscale image corresponding to the depth image;
    acquiring an image output by the photographing device and obtaining a second grayscale image according to the image, or acquiring a second grayscale image output by the photographing device;
    correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, to acquire corrected relative pose parameters.
  2. The method according to claim 1, wherein the method further comprises: storing the corrected relative pose parameters.
  3. The method according to claim 1 or 2, wherein the method further comprises: controlling the photographing device according to the corrected relative pose parameters.
  4. The method according to claim 3, wherein the controlling the photographing device according to the corrected relative pose parameters comprises:
    acquiring depth information from the depth image output by the TOF ranging device according to the corrected relative pose parameters;
    controlling the photographing device according to the depth information.
  5. The method according to claim 4, wherein the controlling the photographing device according to the depth information comprises:
    controlling one or more of focusing, focus following, zooming and dolly zooming of the photographing device according to the depth information.
  6. The method according to claim 4 or 5, wherein the method further comprises:
    determining a first target region in a picture captured by the photographing device;
    the acquiring depth information from the depth image output by the TOF ranging device according to the corrected relative pose parameters comprises:
    determining a second target region in the depth image according to the corrected relative pose parameters and a position of the first target region in the captured picture;
    acquiring the depth information from the second target region in the depth image.
  7. The method according to claim 6, wherein the determining a first target region in a picture captured by the photographing device comprises:
    detecting a target region selection operation of a user, and determining the first target region in the picture captured by the photographing device according to the detected operation.
  8. The method according to any one of claims 1-7, wherein the method further comprises:
    detecting whether a preset correction start condition is satisfied;
    the correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image comprises:
    when it is determined that the correction start condition is satisfied, correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image.
  9. The method according to claim 8, wherein the detecting whether a preset correction start condition is satisfied comprises:
    detecting whether a power-on signal of the movable platform is acquired;
    when the power-on signal is acquired, determining that the correction start condition is satisfied; otherwise, the correction start condition is not satisfied; or,
    detecting whether a correction start operation of a user is acquired;
    when the correction start operation is acquired, determining that the correction start condition is satisfied; otherwise, the correction start condition is not satisfied.
  10. The method according to any one of claims 1-9, wherein the correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image comprises:
    correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device in at least one of a translational direction and a rotational direction according to the depth image, the first grayscale image and the second grayscale image.
  11. The method according to any one of claims 1-9, wherein the correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image comprises:
    correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device in a translational direction with a preset step size according to the depth image, the first grayscale image and the second grayscale image.
  12. The method according to any one of claims 1-11, wherein the correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image comprises:
    running a feature object detection algorithm on the first grayscale image to acquire a first feature object image;
    determining, according to the first feature object image and the depth image, a position in space of a feature object in the first feature object image;
    running a feature object detection algorithm on the second grayscale image to acquire a second feature object image;
    running an optimization algorithm according to a projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device, wherein the projected image is obtained by projecting pixels of the first feature object image onto an imaging plane of the photographing device according to the position in space of the feature object and the pre-stored relative pose parameters.
  13. The method according to claim 12, wherein an optimization object of the optimization operation is the pre-stored relative pose parameters, and an optimization goal of the optimization object is to minimize a distance between pixels of the feature object in the projected image and pixels of the feature object in the second feature object image.
  14. The method according to claim 13, wherein the optimization goal of the optimization object is to minimize a number of non-zero pixels after an XOR operation between the projected image and the second feature object image.
  15. The method according to any one of claims 12-14, wherein the feature object comprises an object edge.
  16. A calibration device, comprising a memory and a processor, the calibration device being applied to a movable platform, the movable platform including a time of flight (TOF) ranging device and a photographing device, wherein
    the memory is configured to store program code;
    the processor calls the program code in the memory and, when the program code is executed, is configured to perform the following operations:
    acquiring a depth image output by the TOF ranging device and a first grayscale image corresponding to the depth image;
    acquiring an image output by the photographing device and obtaining a second grayscale image according to the image, or acquiring a second grayscale image output by the photographing device;
    correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, to acquire corrected relative pose parameters.
  17. The device according to claim 16, wherein the processor is further configured to perform the following operation: storing the corrected relative pose parameters.
  18. The device according to claim 16 or 17, wherein the processor is further configured to perform the following operation: controlling the photographing device according to the corrected relative pose parameters.
  19. The device according to claim 18, wherein, when controlling the photographing device according to the corrected relative pose parameters, the processor specifically performs the following operations:
    acquiring depth information from the depth image output by the TOF ranging device according to the corrected relative pose parameters;
    controlling the photographing device according to the depth information.
  20. The device according to claim 19, wherein, when controlling the photographing device according to the depth information, the processor specifically performs the following operation:
    controlling one or more of focusing, focus following, zooming and dolly zooming of the photographing device according to the depth information.
  21. The device according to claim 19 or 20, wherein the processor is further configured to perform the following operations:
    determining a first target region in a picture captured by the photographing device;
    the acquiring depth information from the depth image output by the TOF ranging device according to the corrected relative pose parameters comprises:
    determining a second target region in the depth image according to the corrected relative pose parameters and a position of the first target region in the captured picture;
    acquiring the depth information from the second target region in the depth image.
  22. The device according to claim 21, wherein, when determining the first target region in the picture captured by the photographing device, the processor specifically performs the following operation:
    detecting a target region selection operation of a user, and determining the first target region in the picture captured by the photographing device according to the detected operation.
  23. The device according to any one of claims 16-22, wherein the processor is further configured to perform the following operations:
    detecting whether a preset correction start condition is satisfied;
    the correcting pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image comprises:
    when it is determined that the correction start condition is satisfied, correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image.
  24. The device according to claim 23, wherein, when detecting whether the preset correction start condition is satisfied, the processor specifically performs the following operations:
    detecting whether a power-on signal of the movable platform is acquired;
    when the power-on signal is acquired, determining that the correction start condition is satisfied; otherwise, the correction start condition is not satisfied; or,
    detecting whether a correction start operation of a user is acquired;
    when the correction start operation is acquired, determining that the correction start condition is satisfied; otherwise, the correction start condition is not satisfied.
  25. The device according to any one of claims 16-24, wherein, when correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the processor specifically performs the following operation:
    correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device in at least one of a translational direction and a rotational direction according to the depth image, the first grayscale image and the second grayscale image.
  26. The device according to any one of claims 16-24, wherein, when correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the processor specifically performs the following operation:
    correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device in a translational direction with a preset step size according to the depth image, the first grayscale image and the second grayscale image.
  27. The device according to any one of claims 16-26, wherein, when correcting the pre-stored relative pose parameters between the TOF ranging device and the photographing device according to the depth image, the first grayscale image and the second grayscale image, the processor specifically performs the following operations:
    running a feature object detection algorithm on the first grayscale image to acquire a first feature object image;
    determining, according to the first feature object image and the depth image, a position in space of a feature object in the first feature object image;
    running a feature object detection algorithm on the second grayscale image to acquire a second feature object image;
    running an optimization algorithm according to a projected image and the second feature object image to correct the pre-stored relative pose parameters between the TOF ranging device and the photographing device, wherein the projected image is obtained by projecting pixels of the first feature object image onto an imaging plane of the photographing device according to the position in space of the feature object and the pre-stored relative pose parameters.
  28. The device according to claim 27, wherein an optimization object of the optimization operation is the pre-stored relative pose parameters, and an optimization goal of the optimization object is to minimize a distance between pixels of the feature object in the projected image and pixels of the feature object in the second feature object image.
  29. The device according to claim 28, wherein the optimization goal of the optimization object is to minimize a number of non-zero pixels after an XOR operation between the projected image and the second feature object image.
  30. The device according to any one of claims 27-29, wherein the feature object comprises an object edge.
  31. A movable platform, comprising:
    a photographing device;
    a time of flight (TOF) ranging device;
    and the calibration device according to any one of claims 16-30.
  32. The movable platform according to claim 31, wherein the movable platform includes at least one of the following: a handheld stabilization system for stabilizing the photographing device, or an unmanned aerial vehicle.
  33. A computer storage medium, wherein the computer storage medium stores computer program instructions, and the computer program instructions, when executed by a processor, are used to perform the calibration method according to any one of claims 16-30.
PCT/CN2020/111155 2020-08-25 2020-08-25 标定方法、装置、可移动平台及存储介质 WO2022040940A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/111155 WO2022040940A1 (zh) 2020-08-25 2020-08-25 标定方法、装置、可移动平台及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/111155 WO2022040940A1 (zh) 2020-08-25 2020-08-25 标定方法、装置、可移动平台及存储介质

Publications (1)

Publication Number Publication Date
WO2022040940A1 true WO2022040940A1 (zh) 2022-03-03

Family

ID=80352400

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111155 WO2022040940A1 (zh) 2020-08-25 2020-08-25 标定方法、装置、可移动平台及存储介质

Country Status (1)

Country Link
WO (1) WO2022040940A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190028646A1 (en) * 2016-01-12 2019-01-24 Huawei Technologies Co., Ltd. Depth information obtaining method and apparatus, and image acquisition device
CN109949371A (zh) * 2019-03-18 2019-06-28 北京智行者科技有限公司 一种用于激光雷达和相机数据的标定方法
CN109949372A (zh) * 2019-03-18 2019-06-28 北京智行者科技有限公司 一种激光雷达与视觉联合标定方法
CN110415286A (zh) * 2019-09-24 2019-11-05 杭州蓝芯科技有限公司 一种多飞行时间深度相机系统的外参标定方法
CN111355891A (zh) * 2020-03-17 2020-06-30 香港光云科技有限公司 基于ToF的微距对焦方法、微距拍摄方法及其拍摄装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115097427A (zh) * 2022-08-24 2022-09-23 北原科技(深圳)有限公司 基于飞行时间法的自动标定系统及方法
CN115097427B (zh) * 2022-08-24 2023-02-10 北原科技(深圳)有限公司 基于飞行时间法的自动标定方法

Similar Documents

Publication Publication Date Title
US11797009B2 (en) Unmanned aerial image capture platform
US11649052B2 (en) System and method for providing autonomous photography and videography
US11106203B2 (en) Systems and methods for augmented stereoscopic display
US11019322B2 (en) Estimation system and automobile
US10401872B2 (en) Method and system for collision avoidance
US11587261B2 (en) Image processing apparatus and ranging apparatus
WO2018095278A1 (zh) 飞行器的信息获取方法、装置及设备
WO2019138678A1 (ja) 情報処理装置及びその制御方法及びプログラム、並びに、車両の運転支援システム
WO2021226876A1 (zh) 一种目标检测方法及装置
CN110187720B (zh) 无人机导引方法、装置、系统、介质及电子设备
WO2019051832A1 (zh) 可移动物体控制方法、设备及系统
CN115023627A (zh) 用于将世界点投影到滚动快门图像的高效算法
CN112136137A (zh) 一种参数优化方法、装置及控制设备、飞行器
WO2022040940A1 (zh) 标定方法、装置、可移动平台及存储介质
CN209991983U (zh) 一种障碍物检测设备及无人机
CN109618085B (zh) 电子设备和移动平台
CN109803089B (zh) 电子设备和移动平台
CN110012280B (zh) 用于vslam系统的tof模组及vslam计算方法
KR20200076628A (ko) 모바일 디바이스의 위치 측정 방법, 위치 측정 장치 및 전자 디바이스
WO2022040941A1 (zh) 深度计算方法、装置、可移动平台及存储介质
JP7242822B2 (ja) 推定システムおよび自動車
Hakim et al. Asus Xtion Pro Camera Performance in Constructing a 2D Map Using Hector SLAM Method
WO2022141123A1 (zh) 可移动平台及其控制方法、装置、终端设备和存储介质
CN109729250B (zh) 电子设备和移动平台
JP7317684B2 (ja) 移動体、情報処理装置、及び撮像システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950613

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20950613

Country of ref document: EP

Kind code of ref document: A1