TWM594152U - Planar dynamic detection system - Google Patents

Planar dynamic detection system

Info

Publication number
TWM594152U
TWM594152U (application TW108214345U)
Authority
TW
Taiwan
Prior art keywords
depth
plane
camera
continuously
computing device
Prior art date
Application number
TW108214345U
Other languages
Chinese (zh)
Inventor
蕭淳澤
Original Assignee
大陸商南京深視光點科技有限公司
Priority date
Filing date
Publication date
Application filed by 大陸商南京深視光點科技有限公司
Priority to TW108214345U
Publication of TWM594152U

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

A planar dynamic detection system in which an inertial sensor continuously acquires inertial data and a depth camera continuously captures a depth image of physical objects (such as a plane or the ground) within its viewing range. A computing device is configured to continuously judge whether the acceleration and angular velocity acquired by the inertial sensor exceed a threshold, thereby determining the motion state of the inertial sensor itself or of the device carrying it. From the acceleration, the depth-image coordinates, the depth values, and the intrinsic parameter matrix, the computing device initializes or continuously updates a plane equation of the physical object in the camera coordinate system while the inertial sensor is in a stable state; it can also obtain the pose of the depth camera through a VIO algorithm so as to continuously correct the plane equation while the inertial sensor is moving rapidly.

Description

Planar dynamic detection system

The present invention relates to computer vision technology, and more particularly to a planar dynamic detection system that refers to depth images, color images, and inertial data in order to detect a plane accurately and to dynamically update that plane's relative position in three-dimensional space.

To provide more realistic interaction in applications that need 3D information (for example AR/VR services), detecting real-world planes is critical. If the goal is to detect the plane that constitutes the ground, common approaches are: (a) assume the ground is the largest plane, and find the largest plane in three-dimensional space with the RANSAC (Random Sample Consensus) algorithm or the Hough Transform algorithm, defining that plane as the ground; or (b) assume the ground has the largest Z value on each scan line of the image and, after correcting the camera's roll rotation, define the ground as the set of pixels in the image with the largest Z values that fits a curve C.

In many situations, however, the largest plane assumed by method (a) is not the ground (the largest plane in the image may, for example, be a corridor wall), so the RANSAC or Hough Transform algorithm can reach a wrong conclusion. Moreover, the RANSAC algorithm requires that inliers make up at least 50% of the data, and the Hough Transform algorithm is quite time-consuming. Method (b) can likewise select a pixel set that has the largest Z values and fits curve C but is not actually the ground.
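For concreteness, here is a minimal sketch of method (a)'s RANSAC plane fitting (NumPy; the iteration count and inlier tolerance are illustrative assumptions, not values from this document). Declaring the dominant plane found this way to be the ground is precisely the assumption criticized above:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Fit the dominant plane n.x + d = 0 to an (N, 3) point cloud.

    Returns (n, d, inlier_mask). Method (a) then simply declares this
    dominant plane to be the ground.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_plane = None, None
    for _ in range(iters):
        # Sample 3 distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # Count points within `tol` of the candidate plane.
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers
```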

Furthermore, whatever method is used to detect planes in the image, once the depth sensor (such as a depth camera) has captured a depth image, the conventional practice of the Point Cloud Library (PCL) is to take every pixel acquired by the depth sensor and multiply it, in turn, by an inverse camera projection matrix and a depth value, converting it into one of many three-dimensional coordinates in the point cloud coordinate system, as in the relation

$P = Z \cdot K^{-1} \cdot p$

where $P = (X, Y, Z)^{T}$ is the three-dimensional coordinate in the point cloud coordinate system, $Z$ is the depth value, $K^{-1}$ is the inverse camera projection matrix ($K$ is normally an intrinsic parameter matrix, an inherent property of the depth sensor that chiefly encodes the transformation between camera coordinates and image coordinates), and $p = (u, v, 1)^{T}$ is the image coordinate of each pixel of the depth image (expressed in the image coordinate system). The set of feature points given by these three-dimensional coordinates is then presented as a point cloud, and a method such as (a) or (b) above is applied to detect planes in the point cloud image. Performing a matrix multiplication for every single pixel, however, involves an enormous amount of computation and gives poor computational performance.

In summary, conventional approaches to detecting planes in three-dimensional space must first make strong assumptions about each plane type (such as ground or wall planes) and may therefore misjudge the plane type, and they also suffer from poor computational performance. How to provide a plane detection system and detection method that detects planes more accurately while using fewer computing resources therefore remains a problem to be solved.

To achieve the above objective, the present invention provides a planar dynamic detection system comprising an inertial sensor, a depth camera, and a computing device. The inertial sensor includes an accelerometer and a gyroscope. The depth camera continuously captures a depth image, so as to continuously supply a depth-image coordinate and a depth value for one or more physical objects within a viewing range of the depth camera. The computing device is coupled to the inertial sensor and the depth camera, and has a motion-state judgment unit and a plane detection unit; the motion-state judgment unit continuously judges whether the acceleration information and angular-velocity information acquired by the inertial sensor exceed a threshold. If the threshold is not exceeded, the plane detection unit computes a normal vector and a distance constant from the acceleration information, the depth-image coordinate, the depth value, and an intrinsic parameter matrix, and uses the normal vector and distance constant to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a stable state. Conversely, if the threshold is exceeded, the plane detection unit uses a gravitational acceleration of the acceleration information to execute a visual-inertial odometry algorithm that obtains pose information of the depth camera, and uses a rotation matrix and translation information of that pose to continuously correct the plane equation while the inertial sensor is moving rapidly. The meaning of a plane equation here is that any point on a plane, together with the normal perpendicular to that plane, uniquely defines the plane in three-dimensional space.

So that the examiners may clearly understand the purpose, technical features, and post-implementation effects of the present invention, a description with accompanying drawings follows; please refer to it.

Please refer to Fig. 1, the system architecture diagram of the present invention. The planar dynamic detection system 1 proposed here mainly comprises an inertial sensor 10, a depth camera 20, and a computing device 30, in which:
(1) The inertial sensor (inertial measurement unit, IMU) 10 includes an accelerometer (G-sensor) 101 and a gyroscope 102, and continuously acquires acceleration information and angular-velocity information.
(2) The depth camera 20 continuously captures a depth image, so as to continuously supply a depth-image coordinate and a depth value for one or more physical objects within the viewing range of the depth camera 20. The depth camera 20 may be configured as a depth sensor that measures the depth of physical objects using a time-of-flight (ToF) scheme, a structured-light scheme, or a stereo-vision scheme. In the time-of-flight scheme, the depth camera 20 acts as a ToF camera: a light-emitting diode (LED) or laser diode (LD) emits infrared light and, since the speed of light is known, an infrared image sensor measures how long the light takes to reflect back from the object surface at different depths, from which the object's depth at different positions, and hence its depth image, can be computed. In the structured-light scheme, the depth camera 20 uses a laser diode (LD) or a digital light processor (DLP) to project light patterns that are diffracted through a specific grating onto the object surface, forming a speckle pattern; because the pattern reflected from positions at different depths is distorted, the object's three-dimensional structure and depth image can be recovered once the reflected light enters the infrared image sensor. In the stereo-vision scheme, the depth camera 20 acts as a stereo camera, capturing the physical object with at least two lenses and measuring the object's three-dimensional information (depth image) from the resulting disparity by the principle of triangulation.
(3) The computing device 30 is coupled to the inertial sensor 10 and the depth camera 20, and has a motion-state judgment unit 301 and a plane detection unit 302 that are communicatively connected. The motion-state judgment unit 301 is configured to continuously judge whether the acceleration information and angular-velocity information acquired by the inertial sensor 10 exceed a threshold, so as to determine the motion state of the inertial sensor 10 itself or of the device carrying it. Note that the computing device 30 has at least one processor (not shown; e.g., a CPU or MCU) that runs the computing device 30 and provides logic operations, temporary storage of results, storage of instruction positions, and so on. The motion-state judgment unit 301 and the plane detection unit 302 may run on the computing device 30 of a planar dynamic device (not shown; e.g., a head-mounted display such as a VR or MR headset), a host, a physical server, or a virtualized server (VM), though they are not limited to these.
(4) Continuing from the above: if the threshold is not currently exceeded, the plane detection unit 302 is configured to compute a normal vector and a distance constant (the d value) from the acceleration information, the depth-image coordinates (pixel domain), the depth values, and an intrinsic parameter matrix, and to use that normal vector and distance constant (expressed in the image coordinate system) to initialize or continuously update the 3D plane equation of the physical object in a camera coordinate system while the inertial sensor 10 is in a stable state. The meaning of a plane equation is that any point on a plane, together with the normal perpendicular to that plane, uniquely defines the plane in three-dimensional space.
(5) Conversely, if the threshold is currently exceeded, the plane detection unit 302 is configured to use a gravitational acceleration in the acceleration information to execute a filter-based or optimization-based visual-inertial odometry (VIO) algorithm that obtains pose information of the depth camera 20, and to use a rotation (orientation) matrix and translation information of that pose to continuously correct the plane equation while the inertial sensor 10 is moving rapidly.
(6) The image coordinates mentioned above are introduced to describe the projective relationship by which a physical object maps from the camera coordinate system to the image coordinate system during imaging; they are the coordinate system of the image actually read from the depth camera 20, with pixels as the unit. The camera coordinates are the coordinate system whose origin is the depth camera 20, defined so that object positions can be described from the viewpoint of the depth camera 20.
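A minimal sketch of the motion-state judgment of item (3); the threshold values are illustrative assumptions, since the document does not specify them:

```python
import numpy as np

GRAVITY = 9.8                 # static accelerometer magnitude (m/s^2)
ACC_THRESHOLD = 0.5           # assumed deviation threshold (m/s^2)
GYRO_THRESHOLD = 0.2          # assumed angular-rate threshold (rad/s)

def is_stable(accel, gyro):
    """Return True when the IMU (and the device carrying it) can be
    treated as stable: the accelerometer reads approximately gravity
    alone and the gyroscope reads approximately zero rotation."""
    acc_dev = abs(np.linalg.norm(accel) - GRAVITY)
    gyro_mag = np.linalg.norm(gyro)
    return acc_dev < ACC_THRESHOLD and gyro_mag < GYRO_THRESHOLD
```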

Please continue to refer to Fig. 1. In a preferred embodiment, the plane detection unit 302 of the computing device 30 may also perform an inner-product operation on the depth-image coordinates and depth values of the physical object, continuously generating the object's three-dimensional coordinates in an image coordinate system, and then compute the plane equation from those three-dimensional coordinates and the intrinsic parameter matrix.

Please continue to refer to Fig. 1. In a preferred embodiment, the plane detection unit 302 of the computing device 30 may also apply an iterative-optimization algorithm or a Gauss-Newton algorithm to the aforementioned normal vector to obtain an optimal normal vector and its corresponding distance constant (d value), and substitute the optimal normal vector for the original one to compute a more accurate plane equation.

Please refer to Fig. 2 and Fig. 3, flow charts (1) and (2) of the planar dynamic detection method of the present invention, together with Fig. 1. The planar dynamic detection method S may include the following steps (wired together in the sketch after this list):
(1) Capture image (step S10): a depth camera 20 continuously captures a depth image, so as to continuously supply a depth-image coordinate and a depth value for one or more physical objects within the viewing range of the depth camera 20.
(2) Detect inertial data (step S20): an inertial sensor 10 continuously acquires inertial data such as acceleration information and angular-velocity information.
(3) Judge motion state (step S30): a computing device 30 continuously judges whether the acceleration information and angular-velocity information acquired by the inertial sensor 10 exceed a threshold, so as to determine the motion state of the inertial sensor 10 itself or of the device carrying it.
(4) First plane-equation update (step S40): following step S30, if the threshold is not exceeded, the computing device 30 computes a normal vector and a distance constant (corresponding to the image coordinate system) from the acceleration information, the depth-image coordinates, the depth values, and an intrinsic parameter matrix, and uses them to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor 10 is in a stable state.
(5) Second plane-equation update (step S50): following step S30, if the threshold is exceeded, the computing device 30 uses a gravitational acceleration of the acceleration information to execute a visual-inertial odometry algorithm that obtains pose information of the depth camera 20, and uses a rotation matrix and translation information of that pose to continuously correct the plane equation while the inertial sensor 10 is moving rapidly.
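The per-frame flow of steps S10 through S50 might be wired together as below. This is a sketch only, reusing `is_stable` from the previous listing; `depth_camera`, `imu`, `vio`, `update_plane_from_imu`, and `update_plane_from_vio` are hypothetical stand-ins for the document's units, not APIs it defines:

```python
def detection_loop(depth_camera, imu, vio, K, plane=None):
    """Hypothetical per-frame driver for steps S10-S50."""
    while True:
        depth = depth_camera.read()            # S10: capture depth image
        accel, gyro = imu.read()               # S20: detect inertial data
        if is_stable(accel, gyro):             # S30: judge motion state
            # S40: initialize/update the plane from gravity + depth.
            plane = update_plane_from_imu(accel, depth, K, plane)
        else:
            # S50: correct the plane from the VIO-estimated pose.
            R, t = vio.relative_pose(depth, accel, gyro)
            plane = update_plane_from_vio(plane, R, t)
        yield plane                            # current plane equation
```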

Continuing, please refer to Fig. 2 and Fig. 3 together with Fig. 1. When step S40 executes, taking the ground as the plane type to be detected as an example, and the inertial data of the inertial sensor 10 do not exceed the threshold, i.e., the inertial sensor 10 itself or the device carrying it is in a stable state (for example, at rest), the inertial sensor 10 reads only the static acceleration value g (the direction of gravity), and its opposite direction is the normal vector n of the physical object's plane equation in camera coordinates. The relations are as follows:
(1) static acceleration value of the inertial sensor 10: $g = 9.8\,\mathrm{m/s^2}$ or $10\,\mathrm{m/s^2}$;
(2) normal vector of the plane equation in camera coordinates: $n = -g$, the negated accelerometer reading $-(a_x, a_y, a_z)^{T}$;
(3) accordingly, the normal vector $n'$ of the physical object (the ground) in the depth image, expressed in image coordinates, follows from $n$ and the intrinsic matrix ($n' = K^{-T} n$ up to scale, per the derivation given further below).
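A minimal sketch of this gravity-based initialization, under the simplifying assumption that the accelerometer reading is already expressed in the camera frame (a real system would first apply the IMU-to-camera extrinsic rotation, which this document does not detail):

```python
import numpy as np

def normal_from_gravity(accel):
    """When the device is at rest the accelerometer measures only
    gravity, so the plane (ground) normal in camera coordinates is
    the negated, normalized reading: n = -g / ||g||."""
    g = np.asarray(accel, dtype=float)
    return -g / np.linalg.norm(g)

def normal_to_image_coords(n_cam, K):
    """Transfer the plane normal into the image coordinate system
    used later in the text: n' proportional to K^-T n."""
    n_img = np.linalg.inv(K).T @ n_cam
    return n_img / np.linalg.norm(n_img)
```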

Continuing, please refer to Fig. 2 and Fig. 3 together with Fig. 1. When step S50 executes, again taking the ground as the plane type to be detected as an example: while the inertial sensor 10 is in violent or rapid motion, the readings of the accelerometer 101 can no longer be used to estimate the normal vector of the plane equation. Step S50 therefore updates the plane equation of the physical object (the ground) using, for example, a filter-based or optimization-based VIO algorithm. Suppose the relative pose of the depth camera 20 estimated by VIO is

$T = [R \mid t]$

(rotation $R$ and translation $t$ taking the previous camera frame to the current one), and the plane equation before the update is

$n^{T}X + d = 0.$

The plane equation is then updated by relations of the following form, though these are given only as an example and not as a limitation:

$n_{new} = R\,n$
$d_{new} = d - n_{new}^{T}\,t$
$n_{new}^{T}X + d_{new} = 0$
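Under the convention stated above (the relative pose maps the previous camera frame to the current one, $X_{new} = R X_{old} + t$; a given VIO implementation may report the inverse, which is an assumption to verify), the correction might look like:

```python
import numpy as np

def update_plane_with_pose(n, d, R, t):
    """Transfer the plane n.X + d = 0 from the previous camera frame
    into the current one, given the relative pose X_new = R X_old + t."""
    n_new = R @ n
    d_new = d - n_new @ t
    return n_new, d_new
```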

Also, please refer to Fig. 2 and Fig. 3 together with Fig. 1. In a preferred embodiment in which the system targets the ground as the plane type to be detected: when step S40 executes, even if the motion-state judgment unit 301 judges the inertial sensor 10 to be in a stable state, the inertial sensor 10 itself or the device carrying it may not be completely still, and the physical object (the ground itself) may be somewhat tilted. During step S40, the computing device 30 may therefore further apply an iterative-optimization algorithm or a Gauss-Newton algorithm (e.g., Gauss-Newton least squares) to the normal vector, obtaining an optimal normal vector $n^{*}$ and its corresponding distance constant ($d$ value), and substitute $n^{*}$ for the normal vector $n$ when computing the plane equation. More specifically, the plane detection unit 302 of the computing device 30 may compute the optimal normal vector $n^{*}$ by formulas of the following form, given only as an example and not as a limitation:
(1) First, exclude the pixels of the depth image whose depth value exceeds a certain bound $Z_{th}$; then, using the previously mentioned normal vector (temporarily written $n'$ here, corresponding to the image coordinate system) and the n depth-image coordinates $x_1, \dots, x_n$ that remain after the exclusion, compute the corresponding $d$ values, as in the relation
$d_i = -\,n'^{T} x_i, \qquad i = 1, \dots, n.$
(2) Next, assume that the $d$ value of the physical object (the ground) is the smallest among all physical objects (other planes) in the depth image whose normal vector is $n'$; since the ground should be the plane farthest from the depth camera 20, the $d$ value corresponding to the plane farthest from the depth camera 20 is computed by the relation
$d = \min_{i} d_i.$
(3) Thereafter, the plane detection unit 302 further applies an iterative-optimization algorithm or a Gauss-Newton algorithm to the normal vector to find the optimal normal vector $n^{*}$ that minimizes an error function (also called an evaluation function). This requires first defining an error function $E(n)$ and a threshold $\varepsilon$, for example
$E(n) = \sum_{i:\,|n^{T}x_i + d| < \varepsilon} \left(n^{T} x_i + d\right)^2, \qquad n^{*} = \arg\min_{n} E(n).$
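A minimal Gauss-Newton sketch of step (3), assuming the example error function above; keeping the normal at unit length by renormalizing after each step is one simple choice among several:

```python
import numpy as np

def refine_normal(points, n, eps=0.05, iters=10):
    """Gauss-Newton refinement of the plane normal.

    points: (N, 3) coordinates with far pixels already excluded,
    n:      initial unit normal (e.g. from the accelerometer),
    eps:    residual threshold selecting the points that count.
    Returns the refined unit normal and its d value.
    """
    n = np.asarray(n, dtype=float)
    d = None
    for _ in range(iters):
        d = np.min(-points @ n)              # step (2): farthest plane
        r = points @ n + d                   # residuals n.x + d
        mask = np.abs(r) < eps               # step (3): thresholded set
        J = points[mask]                     # Jacobian dr/dn = x_i
        if len(J) < 3:
            break
        # Gauss-Newton step: solve J @ delta = -r in least squares.
        delta, *_ = np.linalg.lstsq(J, -r[mask], rcond=None)
        n = n + delta
        n /= np.linalg.norm(n)               # keep unit length
    return n, d
```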

Please continue to refer to Fig. 2 and Fig. 3 together with Fig. 1. Taking the ground as the plane type to be detected as an example, the plane detection unit 302 of the computing device 30 may compute the aforementioned normal vector by the following formulas, which are given as an example rather than a limitation:
A. Assume there are N pixels in the depth image belonging to the ground portion, indexed $i = 1, \dots, N$.
B. Assume a pixel coordinate in the depth image is $(x_i, y_i)$ with depth $Z_i$; then its three-dimensional coordinate in the image coordinate system is $P'_i = (x_i Z_i,\; y_i Z_i,\; Z_i)^{T}$ and in the camera coordinate system it is $P_i = (X_i, Y_i, Z_i)^{T}$.
C. The i-th point has the same Z value in the three-dimensional coordinates of these two different coordinate systems, and the conversion between the two three-dimensional coordinates, from the camera coordinate system to the image coordinate system, is
$P'_i = K\,P_i.$
D. The three-dimensional coordinates of the camera coordinate system and the image coordinate system are thus related through the intrinsic parameter matrix K of the depth camera 20; expanding the above formula, the x and y values of the i-th point's depth-image coordinate in the image coordinate system are
$x_i = f_x \frac{X_i}{Z_i} + c_x, \qquad y_i = f_y \frac{Y_i}{Z_i} + c_y.$
E. By the definition of the plane equation, and assuming the aforementioned i-th point lies on the plane of the physical object, the plane equation computed with the camera-coordinate point $(X_i, Y_i, Z_i)$ is
$a X_i + b Y_i + c Z_i + d = 0.$
F. Continuing, the normal vector in the camera coordinate system is $n = (a, b, c)^{T}$.
G. By the definition of the plane equation, and assuming the aforementioned i-th point lies on the plane of the physical object, the plane equation computed with the image-coordinate point $(x_i Z_i, y_i Z_i, Z_i)$ is
$a' x_i Z_i + b' y_i Z_i + c' Z_i + d' = 0.$
H. Continuing, the normal vector in the image coordinate system is $n' = (a', b', c')^{T}$.
I. Next, compute the normal vector of the physical object (plane) in the camera coordinate system. If two points $P'_i$ and $P'_j$ both lie on the plane, they satisfy the plane equation of item G; substituting each gives
$a' x_i Z_i + b' y_i Z_i + c' Z_i + d' = 0$
$a' x_j Z_j + b' y_j Z_j + c' Z_j + d' = 0.$
J. Subtracting the two plane equations above gives
$a'(x_i Z_i - x_j Z_j) + b'(y_i Z_i - y_j Z_j) + c'(Z_i - Z_j) = 0.$
K. Next, substituting item D's x and y values of the i-th point's depth-image coordinate into the equation of item J gives
$a' f_x (X_i - X_j) + b' f_y (Y_i - Y_j) + (a' c_x + b' c_y + c')(Z_i - Z_j) = 0.$
L. Therefore, the normal vector of the plane equation of the physical object in the camera coordinate system is
$n = \left(a' f_x,\;\; b' f_y,\;\; a' c_x + b' c_y + c'\right)^{T} = K^{T} n'.$

Continuing, please refer to Fig. 2 and Fig. 3 together with Fig. 1. Once the computing device 30 has computed the normal vector n of the plane equation of the physical object in the camera coordinate system, the d value may be computed by the following formulas, which again are given as an example rather than a limitation:
M. First, let a constant
$c = a' c_x + b' c_y + c',$
which by item L is the third component of $n = K^{T} n'$.
N. Substituting a pixel point $(x_i, y_i)$ of the image coordinate system into the plane equation of item G gives
$a' x_i Z_i + b' y_i Z_i + c' Z_i + d' = 0.$
O. Substituting item D's x and y values of the i-th point's depth-image coordinate into the plane equation of item N gives
$a'(f_x X_i + c_x Z_i) + b'(f_y Y_i + c_y Z_i) + c' Z_i + d' = 0,$
that is,
$a' f_x X_i + b' f_y Y_i + c\,Z_i + d' = 0.$
P. Dividing both sides of the item-O plane equation by c gives
$\frac{a' f_x}{c} X_i + \frac{b' f_y}{c} Y_i + Z_i + \frac{d'}{c} = 0.$
Q. From this, the d value of the plane equation of the physical object in camera coordinates is
$d = \frac{d'}{c}.$
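The derivation of items A through Q collapses into a few lines of code; the scaling by the third component of $K^{T} n'$ follows the reconstruction above and should be read as one convention among several:

```python
import numpy as np

def plane_image_to_camera(n_img, d_img, K):
    """Items A-Q in code: the image-coordinate plane n'.P' + d' = 0
    (with P' = (x*Z, y*Z, Z) and P' = K P) becomes (K^T n').P + d' = 0
    in camera coordinates. Dividing by c, the third component of
    K^T n', puts the equation in the scaled form used for the d value
    above, d = d'/c (assumed scaling convention)."""
    n_cam = K.T @ n_img
    c = n_cam[2]
    return n_cam / c, d_img / c
```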

Also, please refer to Fig. 2 and Fig. 3 together with Fig. 1. In a preferred embodiment, before the aforementioned step S30 executes, a step of obtaining three-dimensional coordinates may be executed first (step S25): the computing device 30 performs an inner-product operation on the depth-image coordinates and depth values of the physical object, continuously generating the object's three-dimensional coordinates in an image coordinate system, so that when step S40 or step S50 executes, the aforementioned normal vector and distance constant, and in turn the plane equation of the physical object, are computed from those three-dimensional coordinates, the intrinsic parameter matrix, and the acceleration information. More specifically, the three-dimensional coordinates may be generated by the formula

$P' = (x Z,\; y Z,\; Z)^{T} = Z \cdot (x, y, 1)^{T},$

where $P'$ is the three-dimensional coordinate in the image coordinate system, $Z$ is the depth value, and $(x, y)$ is the depth-image coordinate (in the image coordinate system). Whereas the conventional Point Cloud Library (PCL) practice multiplies every pixel acquired by the depth camera 20 by a camera projection inverse matrix (namely the inverse of the aforementioned K) and a depth value to convert it into multiple three-dimensional coordinates in the point cloud coordinate system, this embodiment omits the per-pixel matrix multiplication of pixel, depth value, and camera projection inverse matrix, and detects the physical object (plane) directly with the three-dimensional coordinates above, which achieves the beneficial effect of saving computation and also eliminates the time needed to convert the depth image into a point cloud.
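A sketch of this step-S25 coordinate generation; unlike `depth_to_point_cloud` earlier, there is no 3x3 matrix product per pixel, only an elementwise scaling by the depth:

```python
import numpy as np

def depth_to_image_coords(depth):
    """Step S25 (sketch): generate image-coordinate-system 3-D points
    P' = Z * (x, y, 1) with a per-pixel elementwise scaling instead of
    a matrix multiply, deferring the intrinsics to the plane equation
    itself (items A-Q above)."""
    H, W = depth.shape
    x, y = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([x, y, np.ones_like(x)], axis=-1).astype(float)
    return pix * depth[..., None]            # (H, W, 3)
```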

Please refer to Fig. 4, the system architecture diagram of another preferred embodiment. This embodiment is similar to the technology disclosed in Fig. 1 to Fig. 3; the main difference is that the planar dynamic detection system 1 of this embodiment further includes a color camera 40 (for example an RGB camera), coupled to the depth camera 20 and the computing device 30, for continuously capturing a color image of the physical object, so that when step S10 (the image-capture step) executes, the computing device 30 can establish the correspondence between the object's depth-image coordinates and its color-image coordinates, improving the accuracy of plane detection. The color camera 40 of this embodiment may also be combined with the depth camera 20 into an RGB-D camera, as shown in the figure, and the depth camera 20 of this embodiment may be a stereo camera, though neither is a limitation.

In summary, once implemented, the present invention solves the problem that conventional detection of planes in three-dimensional space must make strong assumptions for different plane types and may therefore misjudge planes, and it also remedies the poor computational performance of conventional plane detection methods, thereby achieving the beneficial effects of more accurate plane detection and reduced use of computing resources.

The above describes only preferred embodiments of the present invention and is not intended to limit the scope of its implementation; all equivalent changes and modifications made by those skilled in the art without departing from the spirit and scope of the invention shall fall within the patent scope of this invention.

In summary, the present invention satisfies the patentability requirements of industrial applicability, novelty, and inventive step; the applicant therefore files this utility model application with the Office in accordance with the provisions of the Patent Act.

1: planar dynamic detection system
10: inertial sensor
101: accelerometer
102: gyroscope
20: depth camera
30: computing device
301: motion-state judgment unit
302: plane detection unit
40: color camera
S: planar dynamic detection method
S10: capture image step
S20: detect inertial data step
S25: obtain three-dimensional coordinates step
S30: judge motion state step
S40: first plane-equation update step
S50: second plane-equation update step

Fig. 1 is the system architecture diagram of the present invention.
Fig. 2 is flow chart (1) of the plane detection method of the present invention.
Fig. 3 is flow chart (2) of the plane detection method of the present invention.
Fig. 4 is the system architecture diagram of another preferred embodiment of the present invention.


Claims (7)

1. A planar dynamic detection system, comprising:
an inertial sensor, comprising an accelerometer and a gyroscope;
a depth camera, for continuously capturing a depth image so as to continuously supply a depth-image coordinate and a depth value for one or more physical objects within a viewing range of the depth camera; and
a computing device having at least one processor, the computing device being coupled to the inertial sensor and the depth camera and having a motion-state judgment unit and a plane detection unit communicatively connected with each other, the motion-state judgment unit continuously judging whether acceleration information and angular-velocity information acquired by the inertial sensor exceed a threshold;
wherein, if the threshold is not exceeded, the plane detection unit is configured to compute a normal vector and a distance constant from the acceleration information, the depth-image coordinate, the depth value, and an intrinsic parameter matrix, and to use the normal vector and the distance constant to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a stable state; and
wherein, if the threshold is exceeded, the plane detection unit is configured to execute a visual-inertial odometry algorithm based on a gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and to continuously correct the plane equation while the inertial sensor is moving rapidly, based on a rotation matrix and translation information of the pose information.

2. The planar dynamic detection system of claim 1, wherein the plane detection unit is further configured to perform an inner-product operation on the depth-image coordinate and the depth value, continuously generating a three-dimensional coordinate of the physical object in an image coordinate system, and to compute the plane equation from the three-dimensional coordinate, the intrinsic parameter matrix, and the acceleration information.

3. The planar dynamic detection system of claim 1 or claim 2, wherein the computing device further applies an iterative-optimization algorithm or a Gauss-Newton algorithm to the normal vector to obtain an optimal normal vector, and computes the plane equation with the optimal normal vector substituted for the normal vector.

4. The planar dynamic detection system of claim 1 or claim 2, further comprising a color camera, coupled to the depth camera and the computing device, for continuously capturing a color image of the physical object, so that the computing device can establish a correspondence between the depth-image coordinate and a color-image coordinate of the physical object.

5. The planar dynamic detection system of claim 1 or claim 2, wherein the motion-state judgment unit and the plane detection unit run on the computing device of a host, a physical server, a virtualized server, or a head-mounted display.

6. The planar dynamic detection system of claim 1 or claim 2, wherein the depth camera is configured as a depth sensor based on a time-of-flight scheme, a structured-light scheme, or a stereo-vision scheme.

7. The planar dynamic detection system of claim 4, wherein the color camera is an RGB camera, and the RGB camera and the depth camera together form an RGB-D camera.
TW108214345U, filed 2019-10-31: Planar dynamic detection system, published as TWM594152U (en)

