WO2018176963A1 - Electronic image stabilization method and system, and drone - Google Patents

Electronic image stabilization method and system, and drone

Info

Publication number
WO2018176963A1
WO2018176963A1 (PCT/CN2017/120343)
Authority
WO
WIPO (PCT)
Prior art keywords
area
coordinate system
image
image stabilization
rotation matrix
Prior art date
Application number
PCT/CN2017/120343
Other languages
English (en)
French (fr)
Inventor
周剑
周彬
Original Assignee
成都通甲优博科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都通甲优博科技有限责任公司 filed Critical 成都通甲优博科技有限责任公司
Publication of WO2018176963A1 publication Critical patent/WO2018176963A1/zh


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations

Definitions

  • The invention relates to the technical field of drones, and in particular to an electronic image stabilization method, a system, and a drone.
  • An object of the present invention is to provide an electronic image stabilization method, system, and drone capable of performing image stabilization on the images collected by a drone.
  • The specific scheme is as follows:
  • An electronic image stabilization method for a drone, comprising:
  • the process of acquiring an image stabilization area includes:
  • a region-selection channel is provided for the user, and the area selected by the user in the image frame collected by the physical camera is acquired through that channel to obtain the image stabilization area.
  • the process of determining an area corresponding to the image stabilization area from a pre-created virtual camera coordinate system includes:
  • determining, by using the internal parameter matrix of the virtual camera, the area in the virtual camera coordinate system corresponding to the image stabilization area, to obtain the first area.
  • the process of determining an area corresponding to the second area from an image coordinate system includes: determining, by using the internal parameter matrix of the physical camera, the area in the image coordinate system corresponding to the second area, to obtain the third area.
  • the process of determining an area corresponding to the first area from a physical camera coordinate system includes: determining, by using a first rotation matrix, the area in the world coordinate system corresponding to the first area to obtain a transition area; and determining, by using a second rotation matrix, the area in the physical camera coordinate system corresponding to the transition area to obtain the second area.
  • the first rotation matrix is the rotation matrix between the world coordinate system and the virtual camera coordinate system;
  • the second rotation matrix is the rotation matrix between the physical camera coordinate system and the world coordinate system.
  • the acquiring process of the first rotation matrix includes: acquiring, through the IMU unit in the drone, the aircraft attitude of the drone at the moment the physical camera collects the image to be stabilized, and performing mean filtering on the rotation matrix in the aircraft attitude to obtain the first rotation matrix.
  • the acquiring process of the second rotation matrix includes:
  • directly determining the rotation matrix in the aircraft attitude as the second rotation matrix.
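The mean filtering of IMU rotation matrices described above can be sketched as follows. This is an illustrative Python sketch, not part of the patent: the sliding-window averaging and the SVD re-projection onto a valid rotation are assumptions, since the text only states that the rotation matrix in the aircraft attitude is mean-filtered.

```python
import numpy as np

def mean_filter_rotation(window):
    """Average a window of 3x3 rotation matrices.

    The element-wise mean of rotation matrices is generally not itself
    a rotation, so the mean is projected back onto SO(3) via an SVD
    (orthogonal Procrustes solution). The projection is an assumption.
    """
    m = np.mean(window, axis=0)
    u, _, vt = np.linalg.svd(m)
    r = u @ vt
    if np.linalg.det(r) < 0:          # keep a proper rotation (det = +1)
        u[:, -1] = -u[:, -1]
        r = u @ vt
    return r

def rotations_from_imu(poses, k, half_window=2):
    """First rotation matrix for frame k: the smoothed IMU rotation.
    Second rotation matrix: the raw IMU rotation for that frame."""
    lo, hi = max(0, k - half_window), min(len(poses), k + half_window + 1)
    r_w_c1 = mean_filter_rotation(poses[lo:hi])
    r_c2_w = poses[k]
    return r_w_c1, r_c2_w
```

With a window of identical rotations, the filter simply returns that rotation; with noisy rotations, the SVD step guarantees the output is still orthonormal.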
  • alternatively, the process of determining an area corresponding to the first area from a physical camera coordinate system includes: directly determining, by using a third rotation matrix, the area in the physical camera coordinate system corresponding to the first area, to obtain the second area.
  • the third rotation matrix is the rotation matrix between the physical camera coordinate system and the virtual camera coordinate system.
  • the invention also correspondingly discloses an electronic image stabilization system for a drone, comprising:
  • a region obtaining module configured to acquire an image stabilization area
  • a first area determining module configured to determine an area corresponding to the image stabilization area from a pre-created virtual camera coordinate system, to obtain a first area; wherein the virtual camera coordinate system is a coordinate system created in a virtual camera whose posture is stationary relative to the world coordinate system;
  • a second area determining module configured to determine an area corresponding to the first area from a physical camera coordinate system, to obtain a second area
  • a third area determining module configured to determine an area corresponding to the second area from an image coordinate system, to obtain a third area
  • An image mapping module configured to map an image to be stabilized, acquired by a physical camera on the drone, to the image stabilization area according to the mapping relationship between the image stabilization area and the third area, to obtain a stabilized image.
  • the second area determining module includes:
  • a first determining unit configured to determine, by using the first rotation matrix, an area corresponding to the first area from the world coordinate system, to obtain a transition area
  • a second determining unit configured to determine, by using the second rotation matrix, an area corresponding to the transition area from the physical camera coordinate system to obtain the second area
  • the first rotation matrix is a rotation matrix between the world coordinate system and the virtual camera coordinate system
  • the second rotation matrix is the rotation matrix between the physical camera coordinate system and the world coordinate system.
  • the present invention further discloses an unmanned aerial vehicle comprising the aforementioned electronic image stabilization system for a drone.
  • the electronic image stabilization method of the unmanned aerial vehicle includes: acquiring an image stabilization area; determining a region corresponding to the image stabilization region from a pre-created virtual camera coordinate system to obtain a first region, wherein the virtual camera coordinate system is a coordinate system created in a virtual camera whose posture is stationary relative to the world coordinate system; determining a region corresponding to the first region from the physical camera coordinate system to obtain a second region; determining a region corresponding to the second region from the image coordinate system to obtain a third region; and mapping, according to the mapping relationship between the image stabilization region and the third region, the image to be stabilized, acquired by the physical camera on the drone, to the image stabilization region to obtain a stabilized image.
  • the present invention pre-creates a virtual camera coordinate system that is stationary with respect to the world coordinate system, and then maps the image stabilization area into it. Because the virtual camera coordinate system is stationary relative to the world coordinate system, mapping the image stabilization area of a shaking image frame into the virtual camera coordinate system yields a first region that is continuously stable relative to that coordinate system, which suppresses the jitter of the frame. The jitter-suppressed first region is then remapped to the image coordinate system to obtain a third region in the image coordinate system. Finally, according to the mapping relationship between the image stabilization region and the third region, the image collected by the drone can be mapped to the image stabilization area, achieving stable output of the image frame; that is, the present invention achieves the purpose of performing image stabilization on the images collected by the drone.
  • FIG. 1 is a flow chart of a method for electronic image stabilization of a drone according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a specific method for electronic image stabilization of a drone according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a specific method for electronic image stabilization of a drone according to an embodiment of the present invention
  • FIG. 4 is a schematic structural diagram of an electronic image stabilization system of a drone according to an embodiment of the present invention.
  • the embodiment of the invention discloses a method for electronic image stabilization of a drone. Referring to FIG. 1, the method comprises:
  • the process of acquiring the image stabilization area may include: providing a region-selection channel for the user, and then acquiring, through the region-selection channel, the area selected by the user in the image frame collected by the physical camera, to obtain the image stabilization area.
  • that is, the frame region awaiting stabilization may be selected by the user from the image frame collected by the physical camera, where the size of the selected region may be smaller than or equal to that of the collected frame; in this way, the user can select any region of the frame that needs particular attention as the image stabilization area according to actual needs, which improves the user experience and also reduces the amount of computation and speeds up the stabilization calculation.
  • S12: Determine an area corresponding to the image stabilization area from the pre-created virtual camera coordinate system to obtain a first area; wherein the virtual camera coordinate system is a coordinate system created in a virtual camera whose posture is stationary relative to the world coordinate system.
  • by mapping the image stabilization area into the above virtual camera coordinate system, the first area corresponding to the image stabilization area in the virtual camera coordinate system can be obtained.
  • before creating the virtual camera coordinate system, the virtual camera itself needs to be created first, where the posture of the virtual camera is stationary with respect to the world coordinate system; the virtual camera coordinate system is then established in the virtual camera. Specifically, the horizontal angle between the virtual camera coordinate system and the world coordinate system may be maintained at 45 degrees.
  • since the virtual camera coordinate system is stationary with respect to the world coordinate system, mapping the image stabilization area of a shaking image frame into the virtual camera coordinate system yields a first region that is continuously stable relative to that coordinate system, thereby suppressing the jitter of the picture.
  • S13: Determine an area corresponding to the first area from the physical camera coordinate system to obtain a second area.
  • by mapping the first area into the physical camera coordinate system, the second area corresponding to the first area can be obtained.
  • S14: Determine a region corresponding to the second region from the image coordinate system to obtain a third region.
  • by mapping the second region into the image coordinate system, the third region corresponding to the second region can be obtained.
  • S15: Map the image to be stabilized, acquired by the physical camera on the drone, to the image stabilization area according to the mapping relationship between the image stabilization area and the third area, to obtain a stabilized image.
  • the mapping relationship between the image stabilization area and the third area can be determined from the mapping relationships between the image stabilization area and the first area, between the first area and the second area, and between the second area and the third area; using the mapping relationship between the image stabilization area and the third area, the image to be stabilized, acquired by the physical camera, can be mapped to the image stabilization area, yielding the stabilized image.
  • the embodiment of the present invention pre-creates a virtual camera coordinate system that is stationary with respect to the world coordinate system, and then maps the image stabilization area into it. Because the virtual camera coordinate system is stationary relative to the world coordinate system, mapping the image stabilization area of a shaking image frame into the virtual camera coordinate system yields a first region that is continuously stable relative to that coordinate system, which suppresses the jitter of the frame. The jitter-suppressed first region is then remapped to the image coordinate system to obtain a third region in the image coordinate system. Finally, according to the mapping relationship between the image stabilization region and the third region, the image collected by the drone can be mapped to the image stabilization area, achieving stable output of the image frame; that is, the embodiment achieves the purpose of performing image stabilization on the images collected by the drone.
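The chain of mappings in S12 through S15 can be sketched as a single per-frame warp. This is an illustrative Python sketch under assumptions not stated in the text: that the areas are related point-wise by the intrinsic matrices and rotation matrices named above, and that composing them into one 3x3 homography is a valid implementation shortcut.

```python
import numpy as np

def stabilization_homography(K, T, R_w_c1, R_c2_w):
    """Compose the per-frame warp of S12-S15.

    Maps a pixel of the stabilization area (virtual camera image plane)
    to the matching pixel of the shaky source frame:
      virtual image -> virtual camera (K^-1) -> world (R_w_c1)
      -> physical camera (R_c2_w) -> source image (T).
    Matrix names follow the patent; the single-homography composition
    is an implementation choice, not part of the text.
    """
    return T @ R_c2_w @ R_w_c1 @ np.linalg.inv(K)

def map_point(H, x, y):
    """Apply the homography to one pixel and dehomogenize."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

With identity rotations and T equal to K, the composed warp reduces to the identity, i.e. the stabilized output samples the source frame unchanged.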
  • an embodiment of the present invention discloses a specific electronic image stabilization method for a drone, including the following steps S21 to S26:
  • the internal parameter matrix of the virtual camera and the first mapping formula may be used to map the image stabilization area into the virtual camera coordinate system C1, thereby obtaining the first region S_{C1}.
  • the first mapping formula is specifically: Z·(x, y, 1)^T = K·(X, Y, Z)^T, where
  • (x, y) ∈ S is the coordinate of any point a on the image stabilization area S
  • K is the internal parameter matrix of the above virtual camera
  • (X, Y, Z) is the position of the point a in the virtual camera coordinate system C1, whose normalized position is (X/Z, Y/Z, 1)^T = K^{-1}·(x, y, 1)^T.
  • S_{C1} represents the above first region.
  • the internal parameter matrix K of the virtual camera is specifically: K = [Fv_x 0 Cv_x; 0 Fv_y Cv_y; 0 0 1]
  • Fv_x represents the principal distance of the virtual camera on the X axis
  • Fv_y represents the principal distance of the virtual camera on the Y axis
  • (Cv_x, Cv_y) represents the principal point coordinates in the virtual camera coordinate system C1.
  • the internal parameter matrix K can be obtained by manual assignment after multiple experiments.
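A minimal sketch of the intrinsic matrix and the first mapping formula, assuming the standard pinhole model; the function names and any numeric values used below are illustrative, not from the patent.

```python
import numpy as np

def make_intrinsics(fx, fy, cx, cy):
    """Pinhole intrinsic matrix in the form given for the virtual
    camera: principal distances Fv_x, Fv_y and principal point
    (Cv_x, Cv_y)."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def backproject(K, x, y):
    """First mapping formula: pixel (x, y) of the stabilization area
    to the normalized position (X/Z, Y/Z, 1) in camera coordinates,
    i.e. K^-1 applied to the homogeneous pixel."""
    return np.linalg.inv(K) @ np.array([x, y, 1.0])
```

Back-projecting the principal point yields the normalized ray (0, 0, 1), which is a quick sanity check on any hand-assigned K.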
  • the acquiring process of the corresponding first rotation matrix R_{W-C1} includes: acquiring, through the IMU unit (IMU, i.e. Inertial Measurement Unit) in the drone, the aircraft attitude at the moment the physical camera collects the image to be stabilized, and performing mean filtering on the rotation matrix in that attitude to obtain the first rotation matrix.
  • each frame image collected by the drone has a timestamp, and each group of aircraft attitude data also has a timestamp; if the two kinds of timestamps are obtained from different clocks, they need to be unified onto the same clock before the image stabilization process is performed.
  • the index queue M between the frame images and the attitude data can theoretically be established according to the correspondence between the timestamps; the index relationships in M then determine the aircraft attitude corresponding to each frame of image, and the subsequent image stabilization process can proceed using that per-frame attitude.
  • however, the filtering algorithm applied after IMU sampling delays the attitude data, so the attitude indexed in the queue by an image timestamp does not match the true attitude.
  • the IMU delay therefore needs to be determined in order to establish an accurate index relationship between image and attitude, and so obtain an accurate attitude.
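The clock unification and the recovery of an attitude at an image timestamp that falls between two attitude samples can be sketched as below. This is an illustrative Python sketch: linear per-component interpolation is an assumption made here for clarity (a real implementation would interpolate rotations on SO(3), e.g. via quaternions, rather than element-wise).

```python
import bisect

def align_clocks(t_pose, delta_t):
    """Shift pose timestamps onto the image clock:
    t_pose_img = t_pose + delta_t."""
    return [t + delta_t for t in t_pose]

def interpolate_pose(img_t, pose_ts, poses):
    """Estimate the aircraft pose at an image timestamp lying between
    two pose samples (t_{j-1} < t_i < t_j). Poses are flat lists of
    floats; clamping at the ends is an assumption for the sketch."""
    j = bisect.bisect_left(pose_ts, img_t)
    if j == 0:
        return poses[0]
    if j == len(poses):
        return poses[-1]
    t0, t1 = pose_ts[j - 1], pose_ts[j]
    a = (img_t - t0) / (t1 - t0)
    return [(1 - a) * p0 + a * p1 for p0, p1 in zip(poses[j - 1], poses[j])]
```

Given attitude samples at t = 0 and t = 1, an image stamped at t = 0.5 receives the midpoint pose, which is the behavior the index-queue correction above requires once the IMU delay is known.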
  • the second rotation matrix R_{C2-W} is the rotation matrix between the physical camera coordinate system C2 and the world coordinate system W.
  • the second region S_{C2} may be mapped into the image coordinate system C3 by using the internal parameter matrix T of the physical camera and the second mapping formula, to obtain the third region S_{C3}.
  • the second mapping formula is specifically: (x*, y*, 1)^T = T·(X2/Z2, Y2/Z2, 1)^T, where (x*, y*) ∈ S_{C3} is the point of the third region corresponding to the point (x, y) of the image stabilization area S, and (X2/Z2, Y2/Z2, 1) is the corresponding normalized point in the second region S_{C2}.
  • the internal parameter matrix T of the physical camera is specifically: T = [F_x 0 C_x; 0 F_y C_y; 0 0 1]
  • F_x and F_y are the principal distances of the physical camera on the X-axis and the Y-axis, respectively, and (C_x, C_y) represents the principal point coordinates in the physical camera coordinate system C2.
  • the internal parameter matrix T can be obtained by means of camera calibration.
  • the image stabilization process in this embodiment is mainly performed on the GPU: based on zero-copy technology, the frame image is mapped in advance from the host virtual address space to the GPU address space, and the various rotation matrices are passed to the GPU.
  • the GPU then performs image stabilization on the frame image according to the above-described process, and the stabilized image is mapped back, again via zero copy, to the host-side virtual address space for use by applications such as the encoder and the image transmission link.
  • an embodiment of the present invention discloses a specific electronic image stabilization method for a drone, which includes the following steps S31 to S35:
  • Step S31: Acquire an image stabilization area.
  • Step S32: Determine an area corresponding to the image stabilization area from the virtual camera coordinate system created in advance, to obtain the first area.
  • Step S33: Using a third rotation matrix, directly determine a region corresponding to the first region from the physical camera coordinate system to obtain a second region; wherein the third rotation matrix is the rotation matrix between the physical camera coordinate system and the virtual camera coordinate system.
  • the third rotation matrix may be obtained by multiplying the first rotation matrix and the second rotation matrix of the previous embodiment; that is, the third rotation matrix equals the product of the first rotation matrix and the second rotation matrix.
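The statement that the third rotation matrix equals the product of the first two can be checked with a small sketch; this is illustrative Python, and the `rot_z` helper is an assumption introduced only for the demonstration.

```python
import numpy as np

def rot_z(a):
    """Rotation about the Z axis by angle a (radians), for the demo."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def third_rotation(R_w_c1, R_c2_w):
    """Third rotation matrix as the product of the first (world <->
    virtual camera) and the second (physical camera <-> world), so the
    two-step mapping C1 -> W -> C2 collapses into a single step."""
    return R_c2_w @ R_w_c1
```

Applying the composed matrix to any point of the first area gives the same result as applying the first and second rotations in sequence, which is exactly why Step S33 can skip the transition area.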
  • Step S34: Determine an area corresponding to the second area from the image coordinate system, to obtain a third area.
  • Step S35: According to the mapping relationship between the image stabilization area and the third area, map the image to be stabilized, acquired by the physical camera on the drone, to the image stabilization area to obtain a stabilized image.
  • an embodiment of the present invention further discloses an electronic image stabilization system for a drone.
  • the system includes:
  • the area obtaining module 11 is configured to obtain an image stabilization area
  • the first area determining module 12 is configured to determine an area corresponding to the image stabilization area from the pre-created virtual camera coordinate system to obtain a first area; wherein the virtual camera coordinate system is a coordinate system created in a virtual camera whose posture is stationary relative to the world coordinate system.
  • a second area determining module 13 configured to determine an area corresponding to the first area from the physical camera coordinate system to obtain a second area
  • a third area determining module 14 is configured to determine an area corresponding to the second area from the image coordinate system to obtain a third area;
  • the image mapping module 15 is configured to map the image to be stabilized acquired by the physical camera on the drone to the image stabilization area according to the mapping relationship between the image stabilization area and the third area to obtain a stabilized image.
  • the foregoing second area determining module 13 may include a first determining unit and a second determining unit;
  • a first determining unit configured to determine a region corresponding to the first region from the world coordinate system by using the first rotation matrix, to obtain a transition region
  • a second determining unit configured to determine, by using a second rotation matrix, an area corresponding to the transition area from the physical camera coordinate system to obtain a second area
  • the first rotation matrix is a rotation matrix between the world coordinate system and the virtual camera coordinate system
  • the second rotation matrix is a rotation matrix between the physical camera coordinate system and the world coordinate system.
  • the embodiment of the present invention pre-creates a virtual camera coordinate system that is stationary with respect to the world coordinate system, and then maps the image stabilization area into it. Because the virtual camera coordinate system is stationary relative to the world coordinate system, mapping the image stabilization area of a shaking image frame into the virtual camera coordinate system yields a first region that is continuously stable relative to that coordinate system, which suppresses the jitter of the frame. The jitter-suppressed first region is then remapped to the image coordinate system to obtain a third region in the image coordinate system. Finally, according to the mapping relationship between the image stabilization region and the third region, the image collected by the drone can be mapped to the image stabilization area, achieving stable output of the image frame; that is, the embodiment achieves the purpose of performing image stabilization on the images collected by the drone.
  • the present invention also discloses a UAV, including the UAV electronic image stabilization system disclosed in the foregoing embodiment; for the specific configuration of the system, reference may be made to the corresponding content disclosed in the foregoing embodiment, which is not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an electronic image stabilization method and system, and a drone. The method includes: acquiring an image stabilization area; determining, from a pre-created virtual camera coordinate system, the area corresponding to the image stabilization area to obtain a first area, where the virtual camera coordinate system is a coordinate system created in a virtual camera whose attitude is stationary relative to the world coordinate system; determining, from the physical camera coordinate system, the area corresponding to the first area to obtain a second area; determining, from the image coordinate system, the area corresponding to the second area to obtain a third area; and mapping, according to the mapping relationship between the image stabilization area and the third area, the to-be-stabilized image collected by the physical camera on the drone to the image stabilization area to obtain a stabilized image. The present application achieves the purpose of performing image stabilization on the images collected by a drone.

Description

Electronic image stabilization method and system, and drone
This application claims priority to Chinese patent application No. 201710192681.X, filed on March 28, 2017 and entitled "Drone and electronic image stabilization method and system therefor", the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the technical field of drones, and in particular to an electronic image stabilization method and system, and a drone.
Background
At present, with the rapid development of science and technology and huge market demand, drones are being applied ever more widely and provide users with many convenient services.
Most functions of existing drones rely on the camera system carried on the drone. However, because a drone tends to shake during actual flight, the footage collected by its camera system frequently jitters, which severely degrades image quality.
In summary, how to stabilize the images collected by a drone is a problem that urgently needs to be solved.
发明内容
有鉴于此,本发明的目的在于提供电子稳像方法、系统及无人机,能够实现对无人机采集到的图像进行稳像处理的目的。其具体方案如下:
一种无人机电子稳像方法,包括:
获取稳像区域;
从预先创建的虚拟相机坐标系中确定出与所述稳像区域相对应的区域,得到第一区域;其中,所述虚拟相机坐标系为在姿态相对世界坐标系静止的虚拟相机中创建的坐标系;
从物理相机坐标系中确定出与所述第一区域对应的区域,得到第二区域;
从图像坐标系中确定出与所述第二区域对应的区域,得到第三区域;
根据所述稳像区域和所述第三区域之间的映射关系,将无人机上的物理相机所采集到的待稳像图像映射至所述稳像区域,得到稳像后图像。
可选的,所述获取稳像区域的过程,包括:
为用户提供区域选取通道;
通过所述区域选取通道,获取用户在所述物理相机采集的图像画面中选取的区域,得到所述稳像区域。
可选的,所述从预先创建的虚拟相机坐标系中确定出与所述稳像区域相对应的区域的过程,包括:
利用所述虚拟相机的内参矩阵,确定出位于所述虚拟相机坐标系中的与所述稳像区域相对应的区域,得到所述第一区域。
可选的,所述从图像坐标系中确定出与所述第二区域对应的区域的过程,包括:
利用所述物理相机中的内参矩阵,确定出位于所述图像坐标系中的与所述第二区域相对应的区域,得到所述第三区域。
可选的,所述从物理相机坐标系中确定出与所述第一区域对应的区域的过程,包括:
利用第一旋转矩阵,从所述世界坐标系中确定出与所述第一区域对应的区域,得到过渡区域;
利用第二旋转矩阵,从所述物理相机坐标系中确定出与所述过渡区域对应的区域,得到所述第二区域;
其中,所述第一旋转矩阵为所述世界坐标系与所述虚拟相机坐标系之间的旋转矩阵,所述第二旋转矩阵为所述物理相机坐标系与所述世界坐标系之间的旋转矩阵。
可选的,在对所述待稳像图像进行稳像处理的过程中,相应的所述第一旋转矩阵的获取过程,包括:
通过所述无人机中的IMU单元,获取所述物理相机在采集所述待稳像图像时所述无人机的飞机姿态;
对所述飞机姿态中的旋转矩阵进行均值滤波,得到所述第一旋转矩阵;
所述第二旋转矩阵的获取过程,包括:
将所述飞机姿态中的旋转矩阵直接确定为所述第二旋转矩阵。
可选的,所述从物理相机坐标系中确定出与所述第一区域对应的区域的过程,包括:
利用第三旋转矩阵,从所述物理相机坐标系中直接确定出与所述第一区域对应的区域,得到所述第二区域;
其中,所述第三旋转矩阵为所述物理相机坐标系与所述虚拟相机坐标系之间的旋转矩阵。
The present invention also correspondingly discloses an electronic image stabilization system for a drone, comprising:
an area acquisition module configured to acquire an image stabilization area;
a first area determination module configured to determine, from a pre-created virtual camera coordinate system, the area corresponding to the image stabilization area to obtain a first area, wherein the virtual camera coordinate system is a coordinate system created in a virtual camera whose attitude is stationary relative to the world coordinate system;
a second area determination module configured to determine, from the physical camera coordinate system, the area corresponding to the first area to obtain a second area;
a third area determination module configured to determine, from the image coordinate system, the area corresponding to the second area to obtain a third area;
an image mapping module configured to map, according to the mapping relationship between the image stabilization area and the third area, the to-be-stabilized image collected by the physical camera on the drone to the image stabilization area to obtain a stabilized image.
Optionally, the second area determination module includes:
a first determination unit configured to determine, by using a first rotation matrix, the area in the world coordinate system corresponding to the first area to obtain a transition area;
a second determination unit configured to determine, by using a second rotation matrix, the area in the physical camera coordinate system corresponding to the transition area to obtain the second area;
wherein the first rotation matrix is the rotation matrix between the world coordinate system and the virtual camera coordinate system, and the second rotation matrix is the rotation matrix between the physical camera coordinate system and the world coordinate system.
The present invention further discloses a drone comprising the electronic image stabilization system for a drone disclosed above.
In the present invention, the electronic image stabilization method for a drone includes: acquiring an image stabilization area; determining, from a pre-created virtual camera coordinate system, the area corresponding to the image stabilization area to obtain a first area, wherein the virtual camera coordinate system is a coordinate system created in a virtual camera whose attitude is stationary relative to the world coordinate system; determining, from the physical camera coordinate system, the area corresponding to the first area to obtain a second area; determining, from the image coordinate system, the area corresponding to the second area to obtain a third area; and mapping, according to the mapping relationship between the image stabilization area and the third area, the to-be-stabilized image collected by the physical camera on the drone to the image stabilization area to obtain a stabilized image.
It can be seen that the present invention pre-creates a virtual camera coordinate system that is stationary relative to the world coordinate system and then maps the image stabilization area into it. Because the virtual camera coordinate system is stationary relative to the world coordinate system, once the image stabilization area of a shaking image frame is mapped into that coordinate system, a first area that is continuously stable relative to the virtual camera coordinate system is obtained, which suppresses the jitter of the frame. The jitter-suppressed first area is then remapped to the image coordinate system to obtain a third area in the image coordinate system. Finally, according to the mapping relationship between the image stabilization area and the third area, the image collected by the drone can be mapped to the image stabilization area, achieving stable output of the image frame; that is, the present invention achieves the purpose of performing image stabilization on the images collected by a drone.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an electronic image stabilization method for a drone disclosed in an embodiment of the present invention;
FIG. 2 is a flowchart of a specific electronic image stabilization method for a drone disclosed in an embodiment of the present invention;
FIG. 3 is a flowchart of a specific electronic image stabilization method for a drone disclosed in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic image stabilization system for a drone disclosed in an embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
An embodiment of the present invention discloses an electronic image stabilization method for a drone. Referring to FIG. 1, the method includes:
S11: Acquire an image stabilization area.
In this embodiment, the process of acquiring the image stabilization area may specifically include: providing a region-selection channel for the user, and then acquiring, through the region-selection channel, the area selected by the user in the image frame collected by the physical camera to obtain the image stabilization area.
That is, in this embodiment the frame region awaiting stabilization may be selected by the user from the image frame collected by the physical camera, where the size of the selected region may be smaller than or equal to that of the collected frame. In this way, the user may select any region of the frame that deserves particular attention as the image stabilization area according to actual needs, which improves the user experience and also reduces the amount of computation and speeds up the stabilization calculation.
S12: Determine, from a pre-created virtual camera coordinate system, the area corresponding to the image stabilization area to obtain a first area, where the virtual camera coordinate system is a coordinate system created in a virtual camera whose attitude is stationary relative to the world coordinate system.
In this embodiment, by mapping the image stabilization area into the virtual camera coordinate system, the first area corresponding to the image stabilization area in the virtual camera coordinate system can be obtained.
In this embodiment, before the virtual camera coordinate system is created, the virtual camera itself needs to be created first; the attitude of this virtual camera is stationary relative to the world coordinate system. The virtual camera coordinate system is then established in the virtual camera. Specifically, the horizontal angle between the virtual camera coordinate system and the world coordinate system may be kept at 45°.
In this embodiment, because the virtual camera coordinate system is stationary relative to the world coordinate system, once the image stabilization area of a shaking image frame is mapped into the virtual camera coordinate system, a first area that is continuously stable relative to that coordinate system is obtained, which suppresses the jitter of the frame.
S13: Determine, from the physical camera coordinate system, the area corresponding to the first area to obtain a second area.
In this embodiment, by mapping the first area into the physical camera coordinate system, the second area corresponding to the first area in the physical camera coordinate system can be obtained.
S14: Determine, from the image coordinate system, the area corresponding to the second area to obtain a third area.
In this embodiment, by mapping the second area into the image coordinate system, the third area corresponding to the second area in the image coordinate system can be obtained.
S15: Map, according to the mapping relationship between the image stabilization area and the third area, the to-be-stabilized image collected by the physical camera on the drone to the image stabilization area to obtain a stabilized image.
In this embodiment, the mapping relationship between the image stabilization area and the third area can be determined from the mapping relationships between the image stabilization area and the first area, between the first area and the second area, and between the second area and the third area. Using the mapping relationship between the image stabilization area and the third area, the to-be-stabilized image collected by the physical camera can be mapped to the image stabilization area, yielding the stabilized image.
It can be seen that this embodiment of the present invention pre-creates a virtual camera coordinate system that is stationary relative to the world coordinate system and then maps the image stabilization area into it. Because the virtual camera coordinate system is stationary relative to the world coordinate system, once the image stabilization area of a shaking image frame is mapped into that coordinate system, a first area that is continuously stable relative to the virtual camera coordinate system is obtained, which suppresses the jitter of the frame. The jitter-suppressed first area is then remapped to the image coordinate system to obtain a third area in the image coordinate system. Finally, according to the mapping relationship between the image stabilization area and the third area, the image collected by the drone can be mapped to the image stabilization area, achieving stable output of the image frame; that is, this embodiment achieves the purpose of performing image stabilization on the images collected by a drone.
Referring to FIG. 2, an embodiment of the present invention discloses a specific electronic image stabilization method for a drone, including the following steps S21 to S26:
S21: Acquire the image stabilization area S.
S22: Determine, by using the internal parameter matrix K of the virtual camera, the area in the virtual camera coordinate system C1 corresponding to the image stabilization area S to obtain the first area S_{C1}.
In this embodiment, the image stabilization area may be mapped into the virtual camera coordinate system C1 by using the internal parameter matrix K of the virtual camera and a first mapping formula, thereby obtaining the first area S_{C1}. The first mapping formula is specifically:
Z·(x, y, 1)^T = K·(X, Y, Z)^T,
where (x, y) ∈ S is the coordinate of any point a on the image stabilization area S, K is the internal parameter matrix of the virtual camera, and (X, Y, Z) is the position of the point a in the virtual camera coordinate system C1; the corresponding normalized position of the point a in C1 is
(X/Z, Y/Z, 1)^T = K^{-1}·(x, y, 1)^T,
and S_{C1} denotes the first area.
In this embodiment, the internal parameter matrix K of the virtual camera is specifically:
K = [Fv_x 0 Cv_x; 0 Fv_y Cv_y; 0 0 1],
where Fv_x denotes the principal distance of the virtual camera on the X axis, Fv_y the principal distance on the Y axis, and (Cv_x, Cv_y) the principal point coordinates in the virtual camera coordinate system C1. In this embodiment, the internal parameter matrix K may be obtained by manual assignment after multiple experiments.
S23: Determine, by using the first rotation matrix R_{W-C1}, the area in the world coordinate system W corresponding to the first area S_{C1} to obtain the transition area S_W.
Here, the first rotation matrix R_{W-C1} is the rotation matrix between the world coordinate system W and the virtual camera coordinate system C1.
Specifically, during stabilization of the to-be-stabilized image, the corresponding acquisition process of the first rotation matrix R_{W-C1} includes: acquiring, through the IMU unit (IMU, i.e. Inertial Measurement Unit) in the drone, the aircraft attitude P of the drone at the moment the physical camera collects the to-be-stabilized image, and then performing mean filtering on the rotation matrix in the aircraft attitude P to obtain the first rotation matrix R_{W-C1}; here the aircraft attitude P = (R, T), where R denotes the rotation matrix in the attitude P and T denotes the translation vector in the attitude P.
It can be understood that, in this embodiment, the transition area S_W may specifically be obtained by the formula S_W = R_{W-C1} × S_{C1}.
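The formula S_W = R_{W-C1} × S_{C1} applies the rotation to every point of the first area; a minimal sketch is given below, assuming (illustratively, not from the patent) that an area is represented as a set of homogeneous 3-vectors.

```python
import numpy as np

def map_region(R, region):
    """Map a set of points of one coordinate system into another with
    a rotation matrix, e.g. S_W = R_{W-C1} x S_C1 and then
    S_C2 = R_{C2-W} x S_W. 'region' is any iterable of 3-vectors."""
    return [R @ np.asarray(p, dtype=float) for p in region]
```

The same helper serves both S23 and S24, since each step is the same point-wise rotation applied with a different matrix.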
It should further be pointed out that, in practice, every image frame collected by the drone carries a timestamp, and every group of aircraft attitude data also carries a timestamp. If these two kinds of timestamps are taken from different clocks, they must be unified onto the same clock before stabilization is performed; that is, the two kinds of timestamps must be time-aligned in advance.
Suppose the clock t_img associated with the image timestamps is found to run ahead of the clock t_pose associated with the attitude timestamps by Δt. Then one clock can be converted to the other so that both timestamps refer to the same clock, i.e. t_pose_img = t_pose + Δt.
Once the timestamps of the image frames and of the attitude data are aligned, an index queue M between image frames and attitude data can in theory be built from the correspondence between timestamps; the index relationships in M then determine the aircraft attitude corresponding to each image frame, and the subsequent stabilization process can proceed using that per-frame attitude. However, the filtering algorithm applied after IMU sampling delays the attitude data, so the attitude indexed in the queue by an image timestamp does not match the true attitude. The IMU delay therefore has to be determined so that an accurate index relationship between image and attitude can be built and an accurate attitude obtained. Specifically:
Suppose that in the index queue M the image frame I_{t_i} corresponds to the attitude P_{t_j}, where t_i denotes the timestamp of the frame I_{t_i}, t_j denotes the timestamp of the attitude P_{t_j}, and both t_i and t_j are aligned time points. If, because of the attitude-data delay introduced by the IMU unit, the timestamp t_i of the frame and the timestamp t_j of the attitude no longer refer to the same time point, i.e. i ≠ j and t_i ≠ t_j, then linear interpolation can be used to determine the accurate attitude of the frame I_{t_i}: assuming t_{j-1} < t_i < t_j, the accurate attitude of the frame is computed as
P_{t_i} = P_{t_{j-1}} + ((t_i − t_{j-1}) / (t_j − t_{j-1})) · (P_{t_j} − P_{t_{j-1}}).
S24: Determine, by using the second rotation matrix R_{C2-W}, the area in the physical camera coordinate system C2 corresponding to the transition area S_W to obtain the second area S_{C2}.
Here, the second rotation matrix R_{C2-W} is the rotation matrix between the physical camera coordinate system C2 and the world coordinate system W.
Specifically, the acquisition process of the second rotation matrix includes: directly determining the rotation matrix R in the aircraft attitude P as the second rotation matrix R_{C2-W}, i.e. R_{C2-W} = R.
It can be understood that, in this embodiment, the second area S_{C2} may specifically be obtained by the formula S_{C2} = R_{C2-W} × S_W.
S25: Determine, by using the internal parameter matrix T of the physical camera, the area in the image coordinate system C3 corresponding to the second area S_{C2} to obtain the third area S_{C3}.
In this embodiment, the second area S_{C2} may be mapped into the image coordinate system C3 by using the internal parameter matrix T of the physical camera and a second mapping formula, thereby obtaining the third area S_{C3}. The second mapping formula is specifically:
(x*, y*, 1)^T = T·(X2/Z2, Y2/Z2, 1)^T,
where (x*, y*) ∈ S_{C3} is the point of the third area S_{C3} corresponding to the point (x, y) of the image stabilization area S, (X2/Z2, Y2/Z2, 1) is the corresponding normalized point of (x, y) in the second area S_{C2}, and T denotes the internal parameter matrix of the physical camera.
In this embodiment, the internal parameter matrix T of the physical camera is specifically:
T = [F_x 0 C_x; 0 F_y C_y; 0 0 1],
where F_x and F_y are the principal distances of the physical camera on the X and Y axes respectively, and (C_x, C_y) denotes the principal point coordinates in the physical camera coordinate system C2. In this embodiment, the internal parameter matrix T may be obtained by camera calibration.
S26: Map, according to the mapping relationship between the image stabilization area S and the third area S_{C3}, the to-be-stabilized image collected by the physical camera on the drone to the image stabilization area S to obtain the stabilized image.
It can be understood that the stabilization process in this embodiment is mainly performed on the GPU: based on zero-copy technology, the image frame is mapped in advance from the host virtual address space into the GPU address space, and the various rotation matrices are passed to the GPU; the GPU then stabilizes the frame according to the process described above, and the stabilized image is mapped back, again via zero copy, into the host-side virtual address space for use by applications such as the encoder and the image transmission link.
Referring to FIG. 3, an embodiment of the present invention discloses a specific electronic image stabilization method for a drone, including the following steps S31 to S35:
Step S31: Acquire an image stabilization area.
Step S32: Determine, from the pre-created virtual camera coordinate system, the area corresponding to the image stabilization area to obtain a first area.
Step S33: Directly determine, by using a third rotation matrix, the area in the physical camera coordinate system corresponding to the first area to obtain a second area, where the third rotation matrix is the rotation matrix between the physical camera coordinate system and the virtual camera coordinate system.
Specifically, the third rotation matrix may be obtained by multiplying the first rotation matrix and the second rotation matrix of the previous embodiment; that is, the third rotation matrix equals the product of the first rotation matrix and the second rotation matrix.
Step S34: Determine, from the image coordinate system, the area corresponding to the second area to obtain a third area.
Step S35: Map, according to the mapping relationship between the image stabilization area and the third area, the to-be-stabilized image collected by the physical camera on the drone to the image stabilization area to obtain a stabilized image.
Correspondingly, an embodiment of the present invention also discloses an electronic image stabilization system for a drone. Referring to FIG. 4, the system includes:
an area acquisition module 11 configured to acquire an image stabilization area;
a first area determination module 12 configured to determine, from a pre-created virtual camera coordinate system, the area corresponding to the image stabilization area to obtain a first area, where the virtual camera coordinate system is a coordinate system created in a virtual camera whose attitude is stationary relative to the world coordinate system;
a second area determination module 13 configured to determine, from the physical camera coordinate system, the area corresponding to the first area to obtain a second area;
a third area determination module 14 configured to determine, from the image coordinate system, the area corresponding to the second area to obtain a third area;
an image mapping module 15 configured to map, according to the mapping relationship between the image stabilization area and the third area, the to-be-stabilized image collected by the physical camera on the drone to the image stabilization area to obtain a stabilized image.
Specifically, the second area determination module 13 may include a first determination unit and a second determination unit, where:
the first determination unit is configured to determine, by using a first rotation matrix, the area in the world coordinate system corresponding to the first area to obtain a transition area;
the second determination unit is configured to determine, by using a second rotation matrix, the area in the physical camera coordinate system corresponding to the transition area to obtain the second area;
the first rotation matrix is the rotation matrix between the world coordinate system and the virtual camera coordinate system, and the second rotation matrix is the rotation matrix between the physical camera coordinate system and the world coordinate system.
For more specific working processes of the above modules and units, reference may be made to the corresponding content disclosed in the foregoing embodiments, which is not repeated here.
It can be seen that this embodiment of the present invention pre-creates a virtual camera coordinate system that is stationary relative to the world coordinate system and then maps the image stabilization area into it. Because the virtual camera coordinate system is stationary relative to the world coordinate system, once the image stabilization area of a shaking image frame is mapped into that coordinate system, a first area that is continuously stable relative to the virtual camera coordinate system is obtained, which suppresses the jitter of the frame. The jitter-suppressed first area is then remapped to the image coordinate system to obtain a third area in the image coordinate system. Finally, according to the mapping relationship between the image stabilization area and the third area, the image collected by the drone can be mapped to the image stabilization area, achieving stable output of the image frame; that is, this embodiment achieves the purpose of performing image stabilization on the images collected by a drone.
Further, the present invention also discloses a drone including the electronic image stabilization system disclosed in the foregoing embodiments; for the specific structure of the system, reference may be made to the corresponding content disclosed in the foregoing embodiments, which is not repeated here.
Finally, it should also be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises the element.
The electronic image stabilization method and system and the drone provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. 无人机电子稳像方法,其特征在于,包括:
    获取稳像区域;
    从预先创建的虚拟相机坐标系中确定出与所述稳像区域相对应的区域,得到第一区域;其中,所述虚拟相机坐标系为在姿态相对世界坐标系静止的虚拟相机中创建的坐标系;
    从物理相机坐标系中确定出与所述第一区域对应的区域,得到第二区域;
    从图像坐标系中确定出与所述第二区域对应的区域,得到第三区域;
    根据所述稳像区域和所述第三区域之间的映射关系,将无人机上的物理相机所采集到的待稳像图像映射至所述稳像区域,得到稳像后图像。
  2. 根据权利要求1所述的无人机电子稳像方法,其特征在于,所述获取稳像区域的过程,包括:
    为用户提供区域选取通道;
    通过所述区域选取通道,获取用户在所述物理相机采集的图像画面中选取的区域,得到所述稳像区域。
  3. 根据权利要求1所述的无人机电子稳像方法,其特征在于,所述从预先创建的虚拟相机坐标系中确定出与所述稳像区域相对应的区域的过程,包括:
    利用所述虚拟相机的内参矩阵,确定出位于所述虚拟相机坐标系中的与所述稳像区域相对应的区域,得到所述第一区域。
  4. 根据权利要求1所述的无人机电子稳像方法,其特征在于,所述从图像坐标系中确定出与所述第二区域对应的区域的过程,包括:
    利用所述物理相机中的内参矩阵,确定出位于所述图像坐标系中的与所述第二区域相对应的区域,得到所述第三区域。
  5. 根据权利要求1至4任一项所述的无人机电子稳像方法,其特征在于,所述从物理相机坐标系中确定出与所述第一区域对应的区域的过程,包括:
    利用第一旋转矩阵,从所述世界坐标系中确定出与所述第一区域对应的区域,得到过渡区域;
    利用第二旋转矩阵,从所述物理相机坐标系中确定出与所述过渡区域对应的区域,得到所述第二区域;
    其中,所述第一旋转矩阵为所述世界坐标系与所述虚拟相机坐标系之间的旋转矩阵,所述第二旋转矩阵为所述物理相机坐标系与所述世界坐标系之间的旋转矩阵。
  6. 根据权利要求5所述的无人机电子稳像方法,其特征在于,
    在对所述待稳像图像进行稳像处理的过程中,相应的所述第一旋转矩阵的获取过程,包括:
    通过所述无人机中的IMU单元,获取所述物理相机在采集所述待稳像图像时所述无人机的飞机姿态;
    对所述飞机姿态中的旋转矩阵进行均值滤波,得到所述第一旋转矩阵;
    所述第二旋转矩阵的获取过程,包括:
    将所述飞机姿态中的旋转矩阵直接确定为所述第二旋转矩阵。
  7. 根据权利要求1至4任一项所述的无人机电子稳像方法,其特征在于,所述从物理相机坐标系中确定出与所述第一区域对应的区域的过程,包括:
    利用第三旋转矩阵,从所述物理相机坐标系中直接确定出与所述第一区域对应的区域,得到所述第二区域;
    其中,所述第三旋转矩阵为所述物理相机坐标系与所述虚拟相机坐标系之间的旋转矩阵。
  8. An electronic image stabilization system for a drone, comprising:
    an area acquisition module configured to acquire an image stabilization area;
    a first area determination module configured to determine, from a pre-created virtual camera coordinate system, an area corresponding to the image stabilization area to obtain a first area, wherein the virtual camera coordinate system is a coordinate system created in a virtual camera whose attitude is stationary relative to a world coordinate system;
    a second area determination module configured to determine, from a physical camera coordinate system, an area corresponding to the first area to obtain a second area;
    a third area determination module configured to determine, from an image coordinate system, an area corresponding to the second area to obtain a third area; and
    an image mapping module configured to map, according to a mapping relationship between the image stabilization area and the third area, a to-be-stabilized image captured by a physical camera on the drone to the image stabilization area to obtain a stabilized image.
  9. The electronic image stabilization system for a drone according to claim 8, wherein the second area determination module comprises:
    a first determination unit configured to determine, by using a first rotation matrix, an area in the world coordinate system corresponding to the first area to obtain a transition area; and
    a second determination unit configured to determine, by using a second rotation matrix, an area in the physical camera coordinate system corresponding to the transition area to obtain the second area;
    wherein the first rotation matrix is a rotation matrix between the world coordinate system and the virtual camera coordinate system, and the second rotation matrix is a rotation matrix between the physical camera coordinate system and the world coordinate system.
  10. A drone, comprising the electronic image stabilization system for a drone according to claim 8 or 9.
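Claims 5, 6, and 9 split the virtual-to-physical mapping into two rotations, with the first (between the world and virtual camera coordinate systems) obtained by mean-filtering the rotation matrices reported by the IMU. Below is a minimal sketch of such a filter; the SVD re-orthonormalization (needed because an element-wise mean of rotation matrices is generally not itself a rotation) and the fixed averaging window are our assumptions, not details given in the claims.

```python
import numpy as np

def mean_filtered_rotation(R_window):
    """Average a window of IMU rotation matrices and project the result
    back onto the rotation group, yielding a smoothed 'first rotation
    matrix' describing the virtual camera attitude.
    """
    M = np.mean(np.stack(R_window), axis=0)   # element-wise mean of the window
    U, _, Vt = np.linalg.svd(M)               # nearest orthogonal matrix
    R = U @ Vt
    if np.linalg.det(R) < 0:                  # keep a proper rotation, det = +1
        U[:, -1] *= -1
        R = U @ Vt
    return R
```

Per claim 6, the raw per-frame rotation from the IMU is used directly as the second rotation matrix, so only the virtual camera attitude is smoothed while the physical camera pose stays exact.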
PCT/CN2017/120343 2017-03-28 2017-12-29 Electronic image stabilization method and system, and drone WO2018176963A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710192681.X 2017-03-28
CN201710192681.XA CN106954024B (zh) 2017-03-28 2017-03-28 Drone and electronic image stabilization method and system thereof

Publications (1)

Publication Number Publication Date
WO2018176963A1 (zh) 2018-10-04

Family

ID=59473875

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120343 WO2018176963A1 (zh) 2017-03-28 2017-12-29 电子稳像方法、系统及无人机

Country Status (2)

Country Link
CN (1) CN106954024B (zh)
WO (1) WO2018176963A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109624854A (zh) * 2018-12-03 2019-04-16 浙江明航智能科技有限公司 360° panoramic auxiliary vision system for special vehicles
CN111540022A (zh) * 2020-05-14 2020-08-14 深圳市艾为智能有限公司 Virtual-camera-based image consistency method

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106954024B (zh) 2017-03-28 2020-11-06 成都通甲优博科技有限责任公司 Drone and electronic image stabilization method and system thereof
US10462370B2 (en) 2017-10-03 2019-10-29 Google Llc Video stabilization
CN108363946B (zh) * 2017-12-29 2022-05-03 成都通甲优博科技有限责任公司 Drone-based face tracking system and method
CN108600622B (zh) * 2018-04-12 2021-12-24 联想(北京)有限公司 Video anti-shake method and device
US10171738B1 (en) 2018-05-04 2019-01-01 Google Llc Stabilizing video to reduce camera and face movement
CN108989688B (zh) * 2018-09-14 2019-05-31 成都数字天空科技有限公司 Virtual camera anti-shake method and apparatus, electronic device, and readable storage medium
CN109579844B (зh) * 2018-12-04 2023-11-21 电子科技大学 Positioning method and system
CN110610465B (zh) * 2019-08-26 2022-05-17 Oppo广东移动通信有限公司 Image correction method and apparatus, electronic device, and computer-readable storage medium
CN110943796B (zh) * 2019-11-19 2022-06-17 深圳市道通智能航空技术股份有限公司 Timestamp alignment method and apparatus, storage medium, and device
CN113132612B (zh) * 2019-12-31 2022-08-09 华为技术有限公司 Image stabilization processing method, terminal shooting method, medium, and system
US11190689B1 (en) 2020-07-29 2021-11-30 Google Llc Multi-camera video stabilization
CN113050664A (zh) * 2021-03-24 2021-06-29 北京三快在线科技有限公司 Drone landing method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070132856A1 (en) * 2005-12-14 2007-06-14 Mitsuhiro Saito Image processing apparatus, image-pickup apparatus, and image processing method
CN103827921A (zh) * 2011-09-30 2014-05-28 西门子工业公司 Method and system for stabilizing live video in the presence of long-term image drift
CN104580899A (zh) * 2014-12-26 2015-04-29 魅族科技(中国)有限公司 Object imaging control method and imaging device
CN104933758A (zh) * 2015-05-20 2015-09-23 北京控制工程研究所 Space camera three-dimensional imaging simulation method based on the OSG 3D engine
CN106500669A (zh) * 2016-09-22 2017-03-15 浙江工业大学 Aerial image correction method based on quadrotor IMU parameters
CN106525001A (зh) * 2016-11-16 2017-03-22 上海卫星工程研究所 Method for calculating the spatial boresight pointing of a geostationary-orbit remote sensing satellite camera
CN106954024A (зh) * 2017-03-28 2017-07-14 成都通甲优博科技有限责任公司 Drone and electronic image stabilization method and system thereof


Also Published As

Publication number Publication date
CN106954024B (zh) 2020-11-06
CN106954024A (zh) 2017-07-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17903074

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17903074

Country of ref document: EP

Kind code of ref document: A1
