WO2020038386A1 - Determination of scale factor in monocular vision-based reconstruction


Info

Publication number
WO2020038386A1
Authority
WO
WIPO (PCT)
Prior art keywords
specified
monocular camera
moment
designated
pose
Application number
PCT/CN2019/101704
Other languages
English (en)
Chinese (zh)
Inventor
沈冰伟
朱建华
蒋腻聪
郭斌
Original Assignee
杭州萤石软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州萤石软件有限公司
Publication of WO2020038386A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle

Definitions

  • The present application relates to the field of mobile robot technology, and in particular to a method for determining the scale factor in monocular vision reconstruction and a mobile robot.
  • Simultaneous localization and mapping (SLAM) algorithms based on monocular vision have become a focus of current mobile robot research.
  • However, traditional monocular-vision-based simultaneous localization and mapping methods can only achieve 3D reconstruction up to a projective or affine scale; that is, there is an unknown scale factor between the reconstructed scene and the real-world scene.
  • The scale factor is the ratio of the real-world map scale to the constructed map scale. Therefore, if the scale factor can be determined when the mobile robot is initialized, the actual rotation and translation of the monocular camera in the real world can be calculated based on the projection model, and a map with the same scale as the real world can be constructed.
  • In view of this, the present application provides a method for determining the scale factor in monocular vision reconstruction and a mobile robot.
  • A first aspect of the present application provides a method for determining the scale factor in monocular vision reconstruction.
  • The method is applied to a mobile robot, and the method includes: acquiring, through a monocular camera, a first image of a designated object at a first moment and a second image of the designated object at a second moment; performing feature point extraction and matching on the first image and the second image, and calculating a normalized translation vector of the monocular camera from the first moment to the second moment according to the matched feature points; calculating a first pose of the designated object relative to the monocular camera at the first moment and a second pose of the designated object relative to the monocular camera at the second moment; calculating, according to the first pose and the second pose, an actual translation vector of the monocular camera in the real world from the first moment to the second moment; and determining the ratio of the modulus of the actual translation vector to the modulus of the normalized translation vector as the scale factor in the monocular vision reconstruction of the device.
  • A second aspect of the present application provides a mobile robot, which includes a monocular camera and a processor, wherein:
  • the monocular camera is configured to acquire a first image of a designated object at a first moment and a second image of the designated object at a second moment;
  • the processor is configured to: perform feature point extraction and matching on the first image and the second image, and calculate a normalized translation vector of the monocular camera from the first moment to the second moment according to the matched feature points; calculate a first pose of the designated object relative to the monocular camera at the first moment and a second pose of the designated object relative to the monocular camera at the second moment; calculate, according to the first pose and the second pose, an actual translation vector of the monocular camera in the real world from the first moment to the second moment; and determine the ratio of the modulus of the actual translation vector to the modulus of the normalized translation vector as the scale factor in the monocular vision reconstruction of the device.
  • A third aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the methods provided in the first aspect of the present application are implemented.
  • With the method for determining the scale factor in monocular vision reconstruction and the mobile robot provided in this application, because the position of the designated object is fixed, the actual translation vector of the monocular camera in the real world can still be obtained correctly even when the mobile robot slips or jams: it is calculated from the first pose of the designated object relative to the monocular camera at the first moment and the second pose of the designated object relative to the monocular camera at the second moment. Therefore, the method provided in this application avoids the problem that the determined scale factor is inaccurate due to slipping, jamming, and the like of the mobile robot.
  • FIG. 1 is a flowchart of Embodiment 1 of a method for determining a scale factor in monocular vision reconstruction provided by the present application.
  • FIG. 2 is a schematic diagram of a monocular camera acquiring an image of a specified object according to an exemplary embodiment of the present application.
  • FIG. 3 is a flowchart of calculating a pose of a specified object relative to a monocular camera according to an exemplary embodiment of the present application.
  • FIG. 4 is a hardware structural diagram of a first embodiment of a mobile robot provided in this application.
  • Although the terms first, second, third, etc. may be used in this application to describe various information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another.
  • For example, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • The word "if" as used herein can be interpreted as "upon", "when", or "in response to determining".
  • A related method for determining the scale factor in monocular vision reconstruction uses two adjacent frames of images collected by a monocular camera and applies epipolar geometry to calculate the normalized translation vector of the monocular camera between the two frames; it then uses code disk (wheel encoder) data and IMU (inertial measurement unit) data to calculate the actual translation vector of the monocular camera in the real world between the two frames, and obtains the scale factor in monocular vision reconstruction from the normalized translation vector and the actual translation vector.
  • However, when the mobile robot slips or jams, the code disk count is inconsistent with the actual motion, so the actual translation vector calculated in this case from the code disk data combined with the IMU data is inaccurate, and the scale factor calculated based on that actual translation vector is inaccurate as well.
  • The present application provides a method for determining the scale factor in monocular vision reconstruction and a mobile robot, so as to solve the problem in the existing method that the determined scale factor is inaccurate due to slipping, jamming, and the like of the mobile robot.
  • The method provided by this embodiment can be applied to a mobile robot.
  • For example, it can be applied to a cleaning robot.
  • FIG. 1 is a flowchart of Embodiment 1 of a method for determining a scale factor in monocular vision reconstruction provided by the present application.
  • The method provided in this embodiment may include the following steps:
  • S101: Acquire, through the monocular camera, a first image of a designated object at a first moment and a second image of the designated object at a second moment. The mobile robot is provided with a monocular camera, and images can be collected by the monocular camera.
  • The designated object may be a charging device for charging the mobile robot.
  • For example, suppose there are two neighboring sampling moments: a first time t1 and a second time t2. The mobile robot can obtain, through the monocular camera, a first image F1 of the specified object at the first time t1 and a second image F2 of the specified object at the second time t2.
  • It should be noted that the mobile robot is at different positions at the first time t1 and the second time t2; that is, the monocular camera is at different shooting positions at the two times.
  • FIG. 2 is a schematic diagram of a monocular camera acquiring an image of a specified object according to an exemplary embodiment of the present application. In the example shown in FIG. 2, the designated object is a charging device for charging the mobile robot, and the monocular camera 110 is at different shooting positions at the first time t1 and the second time t2.
  • For example, the mobile robot may turn toward the charging device after detecting that it has disconnected from the charging device 200, and then photograph the charging device 200 with the monocular camera at a position different from the previous one.
  • In this way, a first image of the charging device 200 at the first time t1 can be obtained, corresponding to the first shooting position, and a second image of the charging device 200 at the second time t2, corresponding to the second shooting position.
  • S102: Perform feature point extraction and matching on the first image and the second image, and calculate a normalized translation vector of the monocular camera from the first moment to the second moment according to the matched feature points.
  • Specifically, the pixel coordinates of the matched feature points in the first image and the second image may be used to calculate, based on the epipolar constraint, the normalized translation vector of the monocular camera between the first shooting position and the second shooting position, i.e., from the first moment to the second moment. For example, the calculation may use eight pairs of matched feature points.
  • The epipolar constraint can be expressed by the following formula:

  $p_2^T K^{-T} [t_{ep}]_\times R_{ep} K^{-1} p_1 = 0$

  where K is the internal parameter matrix of the monocular camera; p_1 and p_2 are the homogeneous pixel coordinates of a pair of matched feature points on the first image and the second image, respectively; R_ep is the rotation of the monocular camera from the first time t1 to the second time t2; t_ep is the normalized translation vector of the monocular camera from the first time t1 to the second time t2; and [t_ep]_× denotes the skew-symmetric matrix of t_ep. A minimal sketch of this computation follows.
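  • As an illustration of step S102, here is a minimal Python/OpenCV sketch, assuming pts1 and pts2 are N×2 arrays of matched pixel coordinates and K is the internal parameter matrix; the function name and RANSAC settings are illustrative, not from the patent:

```python
import cv2
import numpy as np

def normalized_translation(pts1, pts2, K):
    """Estimate R_ep and the unit-norm translation t_ep between two views."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    # Essential matrix from the epipolar constraint; RANSAC rejects outlier matches
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Decompose E into the rotation R_ep and the translation t_ep;
    # monocular geometry recovers t_ep only up to scale, so ||t_ep|| = 1
    _, R_ep, t_ep, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R_ep, t_ep
```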
  • FIG. 3 is a flowchart of calculating a pose of a specified object relative to a monocular camera according to an exemplary embodiment of the present application.
  • In step S103, the first pose of the specified object relative to the monocular camera at the first moment and the second pose at the second moment are calculated. For either image, calculating the pose of the specified object relative to the monocular camera may include:
  • Specifically, the specified object may be identified from the image based on the attribute information of the specified object, and then, based on the identified specified object, the pixel coordinates of the designated points on the specified object may be obtained from the image.
  • The attribute information of the designated object may include material attributes, color attributes, shape attributes, and the like, which is not limited in this embodiment.
  • For example, the designated object may be a charging device for charging the mobile robot, and the charging device is provided with a marker.
  • The marker may be composed of several marker blocks of a specific material, a specific color, a specific shape, a specified number, and/or specified content; for example, it may be a marker of a designated shape made of a specific material.
  • For example, when the monocular camera is an infrared camera, the marker may be composed of a specified number of marker blocks made of a highly reflective material; for another example, when the monocular camera is an RGB camera, the marker may be composed of a specified number of printed black-and-white checkerboard blocks.
  • The specific form of the marker is not limited in this embodiment.
  • The marker on the charging device reflects the attribute information of the charging device, so the charging device in the image can be identified based on its marker.
  • For the specific implementation principle and process of identifying the specified object in an image based on its attribute information, refer to the description in the related art; details are not repeated here.
  • The designated points on the designated object may be set according to actual needs; for example, a designated point may be a corner point or the center point of the marker. The specific positions of the designated points are not limited here. It should be noted that the number of designated points must be greater than or equal to four.
  • For example, in the example shown in FIG. 2, the marker 210 on the charging device 200 is composed of four marker blocks 1, 2, 3, and 4, and the designated points on the charging device 200 are the center points of the marker blocks. In this case, the four marker blocks 1, 2, 3, and 4 can be identified from the image based on attribute information such as their material, color, shape, and the distances between them, and the pixel coordinates of the center point of each marker block can then be obtained. In this way, the pixel coordinates of the designated points on the designated object are obtained; a sketch of this step is given below.
  • The center point of each marker block is denoted Bi, where i ranges from 1 to 4, and the pixel coordinates of the center point Bi of the i-th marker block are denoted (u_i, v_i).
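  • A minimal sketch of this identification step, assuming an infrared camera whose highly reflective marker blocks appear as bright blobs; the threshold and area bound are placeholder assumptions, not values from the patent:

```python
import cv2

def marker_centers(gray_image, expected_blocks=4):
    """Return the center points B_i of the marker blocks as (u_i, v_i)."""
    # Highly reflective blocks show up as near-saturated pixels
    _, binary = cv2.threshold(gray_image, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) < 20:   # reject small noise blobs
            continue
        m = cv2.moments(c)
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    if len(centers) != expected_blocks:
        raise ValueError("marker not fully visible")
    return sorted(centers)            # ordered by u coordinate
```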
  • Then, a distortion correction algorithm is used to calculate the first coordinates of each designated point after distortion correction. With the standard radial-tangential model implied by the parameters below, the corrected coordinates satisfy:

  $\begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K \begin{bmatrix} x_i (1 + k_1 r_i^2 + k_2 r_i^4 + k_3 r_i^6) + 2 p_1 x_i y_i + p_2 (r_i^2 + 2 x_i^2) \\ y_i (1 + k_1 r_i^2 + k_2 r_i^4 + k_3 r_i^6) + p_1 (r_i^2 + 2 y_i^2) + 2 p_2 x_i y_i \\ 1 \end{bmatrix}, \quad r_i^2 = x_i^2 + y_i^2$

  where K is the internal parameter matrix of the monocular camera; k_1, k_2, k_3, p_1, p_2 are the distortion parameters of the monocular camera; (u_i, v_i) are the pixel coordinates of the i-th designated point; and (x_i, y_i) are the first coordinates of the i-th designated point after distortion correction. A sketch of this correction is given below.
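  • Since the parameters k_1, k_2, k_3, p_1, p_2 match OpenCV's radial-tangential model, the correction can be sketched as follows (an assumption about the model, not the patent's own implementation):

```python
import cv2
import numpy as np

def undistort_points(pixel_pts, K, dist_coeffs):
    """Map pixel coordinates (u_i, v_i) to corrected coordinates (x_i, y_i).

    dist_coeffs is (k1, k2, p1, p2, k3) in OpenCV's ordering.
    """
    pts = np.asarray(pixel_pts, dtype=np.float64).reshape(-1, 1, 2)
    # With no P matrix given, the output is in normalized image coordinates
    return cv2.undistortPoints(pts, K, np.asarray(dist_coeffs)).reshape(-1, 2)
```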
  • The designated coordinate system is an absolute coordinate system.
  • For example, the designated coordinate system may be a coordinate system defined on the marker of the charging device. That is, in the example shown in FIG. 2, the origin of the designated coordinate system is the center point of the charging device, the X-axis points horizontally to the right, and the Y-axis points downward, perpendicular to the X-axis.
  • The specific implementation process of this step may include:
  • The first formula is:
  • Here the designated points include a reference designated point and target designated points; A is a matrix formed by the differences between the second coordinates of each target designated point in the designated coordinate system and the second coordinates of the reference designated point in the designated coordinate system; X is a vector formed by the differences between the X coordinate in the distortion-corrected first coordinates of each target designated point and the X coordinate in the distortion-corrected first coordinates of the reference designated point; and Y is a vector formed by the differences between the Y coordinate in the distortion-corrected first coordinates of each target designated point and the Y coordinate in the distortion-corrected first coordinates of the reference designated point.
  • The reference designated point may be any of the designated points; in the following, the first designated point is taken as the reference designated point for description.
  • The second coordinates of the i-th designated point in the designated coordinate system are denoted (a_i, b_i, 0).
  • Each of the first vector i, the second vector j′, the third vector k′, and the first coefficient z includes three elements.
  • The rotation matrix of the specified object relative to the monocular camera is denoted as R; then:
  • The second formula is:
  • In this way, the first rotation matrix R_t1 and the first translation vector t_t1 of the specified object relative to the monocular camera at the first time t1 can be calculated.
  • Similarly, the second rotation matrix R_t2 and the second translation vector t_t2 of the designated object relative to the monocular camera at the second time t2 can be calculated.
  • The first pose of the designated object relative to the monocular camera at the first time t1 is recorded as T_t1, and the second pose of the designated object relative to the monocular camera at the second time t2 is recorded as T_t2; a sketch of this pose computation is given below.
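  • The patent solves for R and t with the first and second formulas above; as a stand-in, the following sketch computes the same object-to-camera pose using OpenCV's planar PnP solver from the second coordinates (a_i, b_i, 0) and the corrected coordinates (x_i, y_i). This is a substitute technique with hypothetical helper names, not the patent's own derivation:

```python
import cv2
import numpy as np

def object_pose(object_pts, corrected_pts):
    """Return the 4x4 pose T of the planar object relative to the camera."""
    obj = np.asarray(object_pts, dtype=np.float64)       # rows (a_i, b_i, 0)
    img = np.asarray(corrected_pts, dtype=np.float64)    # rows (x_i, y_i)
    # Points are already in normalized coordinates, so use identity intrinsics
    # and no distortion; IPPE needs at least four coplanar points
    ok, rvec, tvec = cv2.solvePnP(obj, img, np.eye(3), None,
                                  flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                           # rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                                             # e.g. T_t1 or T_t2
```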
  • S104: Calculate an actual translation vector of the monocular camera in the real world from the first moment to the second moment according to the first pose and the second pose.
  • The specific implementation process of this step may include: calculating the pose change of the monocular camera from the first moment to the second moment according to the first pose and the second pose, and obtaining the actual translation vector from the pose change.
  • Since the position of the designated object is fixed, the pose change of the monocular camera from the first time t1 to the second time t2 can be calculated according to the following formula, under the convention that a pose T maps object coordinates to camera coordinates:

  $\Delta T = T_{t2} \, T_{t1}^{-1}$

  • The pose change of the monocular camera from the first time t1 to the second time t2 includes an actual rotation matrix R and an actual translation vector t, i.e. $\Delta T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$; from it, the actual rotation matrix and the actual translation vector of the monocular camera from the first time t1 to the second time t2 can be obtained.
  • The actual translation vector is the vector composed of the first three elements of the last column of the pose change matrix, as sketched below.
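  • A short sketch of step S104 under the same pose convention, using a hypothetical helper that operates on the 4×4 matrices T_t1 and T_t2 from above:

```python
import numpy as np

def actual_translation(T_t1, T_t2):
    """Camera pose change from t1 to t2 and its translation part."""
    # The object is fixed, so the camera motion is T_t2 · T_t1^{-1}
    delta = T_t2 @ np.linalg.inv(T_t1)
    # First three elements of the last column of the pose change
    return delta[:3, 3]
```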
  • S105: Determine the ratio of the modulus of the actual translation vector to the modulus of the normalized translation vector as the scale factor in the monocular vision reconstruction of the device.
  • The normalized translation vector t_ep of the monocular camera from the first time t1 to the second time t2 is calculated in step S102, and the actual translation vector t of the monocular camera in the real world over the same interval is calculated in step S104. The ratio of the modulus of the actual translation vector to the modulus of the normalized translation vector is then determined as the scale factor s in the monocular vision reconstruction of the device, that is: $s = \lVert t \rVert / \lVert t_{ep} \rVert$.
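  • Step S105 then reduces to a ratio of vector norms, for example:

```python
import numpy as np

def scale_factor(t_actual, t_normalized):
    """s = ||t|| / ||t_ep||; since ||t_ep|| = 1, this is just ||t||."""
    return np.linalg.norm(t_actual) / np.linalg.norm(t_normalized)
```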
  • After the scale factor is determined, the pose change of the monocular camera in the real world between the two moments and the map points corresponding to the feature points can be calculated. In subsequent simultaneous localization and mapping, existing vision-based algorithms can then calculate the real-world pose changes of the monocular camera and the positions of map points by minimizing reprojection errors, so that the robot can localize itself and construct a map at the real-world scale.
  • In the method provided by this embodiment, because the position of the designated object is fixed, the first pose of the designated object relative to the monocular camera at the first moment and the second pose of the designated object relative to the monocular camera at the second moment are calculated, and the actual translation of the monocular camera in the real world from the first moment to the second moment is calculated according to the first pose and the second pose. Therefore, the method provided by the present application can effectively avoid the problem that the determined scale factor is inaccurate due to slipping, jamming, and the like of the mobile robot.
  • FIG. 4 is a hardware structural diagram of a first embodiment of a mobile robot provided in this application.
  • The mobile robot 100 provided in this embodiment may include a monocular camera 410 and a processor 420, wherein:
  • the monocular camera 410 is configured to acquire a first image of a designated object at a first moment and a second image of the designated object at a second moment;
  • the processor 420 is configured to: perform feature point extraction and matching on the first image and the second image, and calculate a normalized translation vector of the monocular camera from the first moment to the second moment according to the matched feature points; calculate a first pose of the designated object relative to the monocular camera at the first moment and a second pose of the designated object relative to the monocular camera at the second moment; calculate, according to the first pose and the second pose, an actual translation vector of the monocular camera in the real world from the first moment to the second moment; and determine the ratio of the modulus of the actual translation vector to the modulus of the normalized translation vector as the scale factor in the monocular vision reconstruction of the device.
  • The mobile robot of this embodiment may be used to execute the technical solution of the method embodiment shown in FIG. 1; its implementation principles and technical effects are similar and are not described here again.
  • Optionally, the processor 420 is specifically configured such that a pose of the designated object with respect to the monocular camera is obtained.
  • Optionally, the processor 420 is specifically configured such that the actual translation vector is obtained from the pose change.
  • The processor 420 is configured to identify the specified object from each frame image based on the attribute information of the specified object, and to obtain the pixel coordinates of the designated points on the specified object based on the identified specified object.
  • Optionally, the processor 420 is specifically configured to:
  • the first formula is:
  • the second formula is:
  • The designated points include a reference designated point and target designated points, and A is a matrix formed by the differences between the second coordinates of each target designated point in the designated coordinate system and the second coordinates of the reference designated point in the designated coordinate system; X is a vector formed by the differences between the X coordinate in the distortion-corrected first coordinates of each target designated point and the X coordinate in the distortion-corrected first coordinates of the reference designated point; and Y is a vector formed by the differences between the Y coordinate in the distortion-corrected first coordinates of each target designated point and the Y coordinate in the distortion-corrected first coordinates of the reference designated point;
  • (a_1, b_1) are the second coordinates of the reference designated point in the designated coordinate system;
  • (x_1, y_1) are the first coordinates of the reference designated point after distortion correction;
  • i_1 and i_2 are the first and second elements of i, respectively;
  • j′_1 and j′_2 are the first and second elements of j′, respectively;
  • k′_1 and k′_2 are the first and second elements of k′, respectively; and
  • t is the translation vector of the specified object relative to the monocular camera.
  • Optionally, the designated object is a charging device for charging the mobile robot, and the processor 420 is configured to obtain, through the monocular camera, a first image of the designated object at a first moment and a second image of the designated object at a second moment after detecting that the mobile robot has disconnected from the designated object.
  • The present application also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the method according to any one of the first aspect of the application are implemented.
  • Computer-readable storage media suitable for storing computer program instructions include all forms of non-volatile memory, media, and memory devices, such as semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks.

Abstract

Provided is a method for determining a scale factor in monocular vision-based reconstruction, the method comprising the steps of: acquiring, by means of a monocular camera, a first image of a specified object at a first time point and a second image of the specified object at a second time point; performing feature point extraction and matching on the first image and the second image, and calculating, according to matched feature points, a normalized translation vector of the monocular camera from the first time point to the second time point; calculating a first pose of the specified object relative to the monocular camera at the first time point and a second pose of the specified object relative to the monocular camera at the second time point; calculating, according to the first pose and the second pose, an actual translation vector of the monocular camera in the physical world from the first time point to the second time point; and determining a ratio of a norm of the actual translation vector to a norm of the normalized translation vector as a scale factor in monocular vision-based reconstruction.
PCT/CN2019/101704 2018-08-22 2019-08-21 Determination of scale factor in monocular vision-based reconstruction WO2020038386A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810961434.6 2018-08-22
CN201810961434.6A CN110858403B (zh) 2018-08-22 2018-08-22 Method for determining scale factor in monocular vision reconstruction, and mobile robot

Publications (1)

Publication Number Publication Date
WO2020038386A1 (fr) 2020-02-27

Family

ID=69593088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101704 WO2020038386A1 (fr) 2018-08-22 2019-08-21 Determination of scale factor in monocular vision-based reconstruction

Country Status (2)

Country Link
CN (1) CN110858403B (fr)
WO (1) WO2020038386A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260538A (zh) * 2018-12-03 2020-06-09 北京初速度科技有限公司 Positioning based on a long-baseline binocular fisheye camera, and vehicle-mounted terminal
CN112102406A (zh) * 2020-09-09 2020-12-18 东软睿驰汽车技术(沈阳)有限公司 Monocular vision scale correction method and apparatus, and vehicle
CN112686950A (zh) * 2020-12-04 2021-04-20 深圳市优必选科技股份有限公司 Pose estimation method and apparatus, terminal device, and computer-readable storage medium
CN114406985A (zh) * 2021-10-18 2022-04-29 苏州迪凯尔医疗科技有限公司 Robotic arm method, system, device and storage medium for target tracking

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554703B (zh) * 2020-04-23 2024-03-01 北京京东乾石科技有限公司 Robot positioning method, apparatus and system, and computer-readable storage medium
CN111671360B (zh) * 2020-05-26 2021-11-16 深圳拓邦股份有限公司 Method and apparatus for calculating the position of a cleaning robot, and cleaning robot
CN112798812B (zh) * 2020-12-30 2023-09-26 中山联合汽车技术有限公司 Monocular-vision-based target speed measurement method
CN113126117B (zh) * 2021-04-15 2021-08-27 湖北亿咖通科技有限公司 Method for determining the absolute scale of an SfM map, and electronic device
CN116704047B (zh) * 2023-08-01 2023-10-27 安徽云森物联网科技有限公司 Method for calibrating the positions of surveillance camera devices based on pedestrian ReID

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706957A (zh) * 2009-10-30 2010-05-12 无锡景象数字技术有限公司 Self-calibration method for a binocular stereo vision apparatus
WO2017114507A1 (fr) * 2015-12-31 2017-07-06 清华大学 Method and device for image positioning based on 3D reconstruction of a ray model
CN108010125A (zh) * 2017-12-28 2018-05-08 中国科学院西安光学精密机械研究所 Real-scale 3D reconstruction system and method based on line-structured light and image information
CN108090435A (zh) * 2017-12-13 2018-05-29 深圳市航盛电子股份有限公司 Parkable area recognition method, system, and medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103234454B (zh) * 2013-04-23 2016-03-30 合肥米克光电技术有限公司 Self-calibration method for an image measuring instrument
CN103278138B (zh) * 2013-05-03 2015-05-06 中国科学院自动化研究所 Method for measuring the three-dimensional position and attitude of a thin component with a complex structure
CN104346829A (zh) * 2013-07-29 2015-02-11 中国农业机械化科学研究院 Color 3D reconstruction system and method based on a PMD camera and a video camera
CN104732518B (zh) * 2015-01-19 2017-09-01 北京工业大学 Improved PTAM method based on ground features for an intelligent robot
CN105118055B (zh) * 2015-08-11 2017-12-15 北京电影学院 Camera positioning correction and calibration method and system
CN105931222B (zh) * 2016-04-13 2018-11-02 成都信息工程大学 Method for achieving high-precision camera calibration with a low-precision two-dimensional planar target
CN106529538A (zh) * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 Aircraft positioning method and apparatus
CN106920259B (zh) * 2017-02-28 2019-12-06 武汉工程大学 Positioning method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706957A (zh) * 2009-10-30 2010-05-12 无锡景象数字技术有限公司 一种双目立体视觉装置的自标定方法
WO2017114507A1 (fr) * 2015-12-31 2017-07-06 清华大学 Procédé et dispositif permettant un positionnement d'image en se basant sur une reconstruction tridimensionnelle de modèle de rayon
CN108090435A (zh) * 2017-12-13 2018-05-29 深圳市航盛电子股份有限公司 一种可停车区域识别方法、系统及介质
CN108010125A (zh) * 2017-12-28 2018-05-08 中国科学院西安光学精密机械研究所 基于线结构光和图像信息的真实尺度三维重建系统及方法

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260538A (zh) * 2018-12-03 2020-06-09 北京初速度科技有限公司 Positioning based on a long-baseline binocular fisheye camera, and vehicle-mounted terminal
CN111260538B (zh) * 2018-12-03 2023-10-03 北京魔门塔科技有限公司 Positioning based on a long-baseline binocular fisheye camera, and vehicle-mounted terminal
CN112102406A (zh) * 2020-09-09 2020-12-18 东软睿驰汽车技术(沈阳)有限公司 Monocular vision scale correction method and apparatus, and vehicle
CN112686950A (zh) * 2020-12-04 2021-04-20 深圳市优必选科技股份有限公司 Pose estimation method and apparatus, terminal device, and computer-readable storage medium
CN112686950B (zh) * 2020-12-04 2023-12-15 深圳市优必选科技股份有限公司 Pose estimation method and apparatus, terminal device, and computer-readable storage medium
CN114406985A (zh) * 2021-10-18 2022-04-29 苏州迪凯尔医疗科技有限公司 Robotic arm method, system, device and storage medium for target tracking
CN114406985B (zh) * 2021-10-18 2024-04-12 苏州迪凯尔医疗科技有限公司 Robotic arm method, system, device and storage medium for target tracking

Also Published As

Publication number Publication date
CN110858403B (zh) 2022-09-27
CN110858403A (zh) 2020-03-03

Similar Documents

Publication Publication Date Title
WO2020038386A1 (fr) Determination of scale factor in monocular vision-based reconstruction
US9420265B2 (en) Tracking poses of 3D camera using points and planes
JP5832341B2 (ja) Moving image processing apparatus, moving image processing method, and program for moving image processing
US10636151B2 (en) Method for estimating the speed of movement of a camera
KR102367361B1 (ko) Method and apparatus for position measurement and simultaneous mapping
WO2018076154A1 (fr) Spatial positioning calibration of a panoramic video sequence generation method based on an ultra-wide-angle camera
CN112767542A (zh) Three-dimensional reconstruction method for a multi-camera system, VR camera, and panoramic camera
WO2021004416A1 (fr) Method and apparatus for establishing a beacon map based on visual beacons
US11082633B2 (en) Method of estimating the speed of displacement of a camera
CN110969662A (zh) Fisheye camera intrinsic parameter calibration method and apparatus, calibration device controller, and system
WO2017022033A1 (fr) Image processing device, image processing method, and image processing program
CN106530358A (zh) Method for calibrating a PTZ camera using only two scene images
CN110490943B (zh) Fast and accurate calibration method and system for a 4D holographic capture system, and storage medium
CN111062966B (zh) Method for optimizing camera tracking based on the L-M algorithm and polynomial interpolation
JP4109075B2 (ja) Method and apparatus for measuring the rotation and flight characteristics of a sphere
CN110567441A (zh) Particle-filter-based positioning method and positioning apparatus, and mapping and positioning method
CN113223078A (zh) Marker point matching method and apparatus, computer device, and storage medium
JP4109076B2 (ja) Method and apparatus for measuring the rotation amount and rotation-axis direction of a curved body
JP2018173882A (ja) Information processing apparatus, method, and program
CN113034347B (zh) Oblique photography image processing method and apparatus, processing device, and storage medium
JP2002109518A (ja) Three-dimensional shape restoration method and system
WO2014203743A1 (fr) Method for registering data using a set of primitives
JP4886661B2 (ja) Camera parameter estimation apparatus and camera parameter estimation program
CN116152121A (zh) Curved-screen generation method and correction method based on distortion parameters
WO2018072087A1 (fr) Method for achieving the effect of a photo taken by another person by means of a selfie, and photographing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19852178

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19852178

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.09.2021)
