TWM594152U - Planar dynamic detection system - Google Patents
- Publication number: TWM594152U
- Application number: TW108214345U
- Authority
- TW
- Taiwan
- Prior art keywords
- depth
- plane
- camera
- continuously
- computing device
- Prior art date
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
A planar dynamic detection system in which an inertial sensor continuously acquires inertial data and a depth camera continuously captures a depth image of a physical object (for example, a plane or the ground) within its field of view. A computing device is configured to continuously determine whether the acceleration and angular velocity acquired by the inertial sensor exceed a threshold, thereby judging the motion state of the inertial sensor or the device carrying it. Based on the acceleration, the depth-image coordinates, the depth values, and the intrinsic parameter matrix, the computing device initializes or continuously updates a plane equation of the physical object in the camera coordinate system while the inertial sensor is in a stable state; it can also obtain the depth camera's pose through a visual-inertial odometry (VIO) algorithm, so as to continuously correct the plane equation while the inertial sensor moves rapidly.
Description
The present invention relates to computer-vision technology, and in particular to a planar dynamic detection system that refers to depth images, color images, and inertial data to accurately detect a plane and dynamically update the plane's relative position in three-dimensional space.
To provide more realistic interaction in applications that require 3D information (such as AR/VR services), detecting real-world planes is critical. If the goal is to detect the plane belonging to the ground, the ground may be detected by: (a) assuming the ground is the largest plane and using the RANSAC (Random Sample Consensus) algorithm, or the Hough Transform algorithm, to find the largest plane in three-dimensional space and defining it as the ground; or (b) assuming the ground has the largest Z value on each scan line of the image and, after correcting the camera's roll rotation, defining as the ground the set of pixels with the largest Z values that fits a curve C.
In many cases, however, the largest plane assumed by method (a) is not the ground (for example, the largest plane in the image may be a corridor wall), so the RANSAC or Hough Transform algorithm may judge incorrectly; moreover, the RANSAC algorithm requires that correct data (inliers) account for at least 50% of the samples, and the Hough Transform algorithm is quite time-consuming. Method (b) may likewise yield a pixel set with the largest Z values fitting curve C that is not actually the ground.
Furthermore, regardless of which method is used to detect planes in the image, after a depth sensor (such as a depth camera) captures a depth image, the conventional practice of the Point Cloud Library (PCL) multiplies every pixel, in turn, by an inverse camera projection matrix and a depth value, converting it into a three-dimensional coordinate in the point-cloud coordinate system, as in the relation P = Z·K⁻¹·p, where P is the three-dimensional coordinate in the point-cloud coordinate system, Z is the depth value, K⁻¹ is the inverse camera projection matrix (K usually being an intrinsic parameter matrix — an inherent property of the depth sensor that relates camera coordinates to image coordinates), and p is the image coordinate of each pixel of the depth image (in the image coordinate system). The resulting set of three-dimensional feature points is then presented as a point cloud, after which methods such as (a) or (b) above detect planes in the point-cloud image. Performing a matrix multiplication for every pixel, however, involves a very large amount of computation and yields poor computational performance.
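The conventional per-pixel back-projection P = Z·K⁻¹·p described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation, and the intrinsic values (fx, fy, cx, cy) are hypothetical:

```python
# Sketch of the conventional PCL-style back-projection: each pixel (u, v)
# with depth Z maps to a camera-frame 3-D point P = Z * K^-1 * (u, v, 1).
# For a pinhole intrinsic matrix K = [[fx,0,cx],[0,fy,cy],[0,0,1]],
# K^-1 has the closed form used below.

def backproject(u, v, Z, fx, fy, cx, cy):
    """Back-project one pixel into the camera coordinate system."""
    x = (u - cx) * Z / fx
    y = (v - cy) * Z / fy
    return (x, y, Z)

# Example: the principal point of a hypothetical 640x480 sensor.
print(backproject(320.0, 240.0, 2.0, 500.0, 500.0, 320.0, 240.0))  # -> (0.0, 0.0, 2.0)
```

Doing this for every pixel of every frame is exactly the per-pixel matrix work the specification later avoids.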
In summary, conventional methods for detecting planes in three-dimensional space must make strong prior assumptions about the plane type (for example, ground or wall), risking misclassification of the plane type, and also suffer from poor computational performance. How to provide a plane detection system and detection method that detects planes more accurately while using fewer computing resources therefore remains a problem to be solved.
To achieve the above objective, the present invention proposes a planar dynamic detection system comprising an inertial sensor, a depth camera, and a computing device. The inertial sensor includes an accelerometer and a gyroscope. The depth camera continuously captures a depth image, continuously supplying a depth-image coordinate and a depth value for one or more physical objects within its field of view. The computing device, coupled to the inertial sensor and the depth camera, has a motion-state judgment unit and a plane detection unit. The motion-state judgment unit continuously judges whether the acceleration information and angular-velocity information obtained by the inertial sensor exceed a threshold. If the threshold is not exceeded, the plane detection unit computes a normal vector and a distance constant from the acceleration information, the depth-image coordinates, the depth values, and an intrinsic parameter matrix, and uses the normal vector and distance constant to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor is stable. Conversely, if the threshold is exceeded, the plane detection unit executes a visual-inertial odometry algorithm based on the gravitational acceleration in the acceleration information to obtain the depth camera's pose, and, based on the pose's rotation matrix and translation, continuously corrects the plane equation while the inertial sensor moves rapidly. The meaning of a plane equation is that any point on a plane, together with the normal perpendicular to that plane, uniquely defines the plane in three-dimensional space.
In order that the examiners may clearly understand the purpose, technical features, and effects of the present invention, the following description is provided together with the drawings.
Please refer to Figure 1, the system architecture diagram of the present invention. A planar dynamic detection system 1 is proposed, mainly comprising an inertial sensor 10, a depth camera 20, and a computing device 30, wherein:
(1) the inertial sensor (inertial measurement unit, IMU) 10 includes an accelerometer (G-sensor) 101 and a gyroscope 102, and continuously obtains acceleration information and angular-velocity information;
(2) the depth camera 20 continuously captures a depth image, continuously supplying a depth-image coordinate and a depth value for one or more physical objects within its field of view. The depth camera 20 may be configured as a depth sensor that measures the depth of the physical objects using a time-of-flight (ToF) scheme, a structured-light scheme, or a stereo-vision scheme. In the time-of-flight scheme, the depth camera 20 acts as a ToF camera, emitting infrared light from a light-emitting diode (LED) or laser diode (LD); since the speed of light is known, an infrared image sensor can measure the time taken for light reflected from object surfaces at different depths to return, from which the depth of the physical object at different positions and its depth image can be computed. In the structured-light scheme, the depth camera 20 projects light patterns from a laser diode (LD) or digital light processor (DLP) through a specific grating onto the object surface, forming a speckle pattern; because the pattern reflected from positions at different depths is distorted, the object's three-dimensional structure and depth image can be inferred once the reflected light enters the infrared image sensor. In the stereo-vision scheme, the depth camera 20 acts as a stereo camera, using at least two lenses to capture the physical object; the disparity between the views yields the object's three-dimensional information (depth image) through triangulation;
(3) the computing device 30 is coupled to the inertial sensor 10 and the depth camera 20, and has a motion-state judgment unit 301 and a plane detection unit 302 that are communicatively connected. The motion-state judgment unit 301 is configured to continuously judge whether the acceleration information and angular-velocity information obtained by the inertial sensor 10 exceed a threshold, thereby judging the motion state of the inertial sensor 10 or the device carrying it. Notably, the computing device 30 may have at least one processor (not shown; e.g., a CPU or MCU) that runs the computing device 30 and provides logic operations, temporary storage of computation results, and storage of instruction positions. The motion-state judgment unit 301 and plane detection unit 302 may run on a computing device 30 of a planar dynamic device (not shown; e.g., a head-mounted display such as a VR or MR headset), a host, a physical server, or a virtualized server (VM), but are not limited thereto;
(4) continuing, if the threshold is not currently exceeded, the plane detection unit 302 is configured to compute a normal vector and a distance constant (d value) from the acceleration information, the depth-image coordinates (pixel domain), the depth values, and an intrinsic parameter matrix, and to use the normal vector and distance constant (which lie in the image coordinate system) to initialize or continuously update a 3D plane equation of the physical object in a camera coordinate system while the inertial sensor 10 is stable; the meaning of a plane equation is that any point on a plane, together with the normal perpendicular to that plane, uniquely defines the plane in three-dimensional space;
(5) conversely, if the threshold is currently exceeded, the plane detection unit 302 is configured to execute a filter-based or optimization-based visual-inertial odometry (VIO) algorithm based on the gravitational acceleration in the acceleration information, to obtain the pose of the depth camera 20, and, based on the pose's rotation matrix and translation, to continuously correct the plane equation while the inertial sensor 10 moves rapidly;
(6) in addition, the image coordinates mentioned above are introduced to describe the projective relation of a physical object from the camera coordinate system to the image coordinate system during imaging; they are the coordinate system, in pixel units, of the image actually read from the depth camera 20, while the camera coordinates are the coordinate system established with the depth camera 20 as origin, defined to describe object positions from the viewpoint of the depth camera 20.
Continuing with Figure 1, in a preferred embodiment the plane detection unit 302 of the computing device 30 may also perform an inner-product operation on the physical object's depth-image coordinates and depth values, to continuously generate three-dimensional coordinates of the physical object in an image coordinate system, and compute the plane equation from these three-dimensional coordinates and the intrinsic parameter matrix.
Continuing with Figure 1, in a preferred embodiment the plane detection unit 302 of the computing device 30 may also apply an iterative optimization algorithm or a Gauss-Newton algorithm to the aforementioned normal vector to obtain an optimal normal vector and its corresponding distance constant (d value), and compute a more precise plane equation with the optimal normal vector in place of the original normal vector.
Please refer to Figures 2 and 3, flowcharts (1) and (2) of the planar dynamic detection method of the present invention, together with Figure 1. A planar dynamic detection method S is proposed, which may include the following steps:
(1) capturing an image (step S10): a depth camera 20 continuously captures a depth image, continuously supplying a depth-image coordinate and a depth value for one or more physical objects within its field of view;
(2) detecting inertial data (step S20): an inertial sensor 10 continuously obtains inertial data including acceleration information and angular-velocity information;
(3) judging the motion state (step S30): a computing device 30 continuously judges whether the acceleration information and angular-velocity information obtained by the inertial sensor 10 exceed a threshold, to judge the motion state of the inertial sensor 10 or the device carrying it;
(4) first plane-equation update (step S40): following step S30, if the threshold is not exceeded, the computing device 30 computes a normal vector and a distance constant (corresponding to the image coordinate system) from the acceleration information, the depth-image coordinates, the depth values, and an intrinsic parameter matrix, and uses the normal vector and distance constant to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor 10 is stable;
(5) second plane-equation update (step S50): following step S30, if the threshold is exceeded, the computing device 30 executes a visual-inertial odometry algorithm based on the gravitational acceleration in the acceleration information to obtain the pose of the depth camera 20, and, based on the pose's rotation matrix and translation, continuously corrects the plane equation while the inertial sensor 10 moves rapidly.
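The dispatch between steps S40 and S50 can be sketched as a simple threshold check on the IMU readings. The threshold values, the gravity constant, and the crude magnitude test below are illustrative assumptions, not values taken from this specification:

```python
import math

# Minimal control-flow sketch of step S30's branch: compare the IMU
# readings against thresholds and choose which plane update to run.
def choose_update(accel, gyro, accel_thresh=0.5, gyro_thresh=0.2):
    """Return 'stable' (step S40) or 'fast' (step S50).

    accel: (ax, ay, az) in m/s^2; gyro: (wx, wy, wz) in rad/s.
    A real system would filter and bias-correct the readings first.
    """
    g = 9.8
    accel_dev = abs(math.sqrt(sum(a * a for a in accel)) - g)
    gyro_mag = math.sqrt(sum(w * w for w in gyro))
    if accel_dev > accel_thresh or gyro_mag > gyro_thresh:
        return "fast"    # step S50: VIO-based correction
    return "stable"      # step S40: accelerometer-based update

print(choose_update((0.0, 0.0, 9.8), (0.01, 0.0, 0.0)))  # -> stable
print(choose_update((3.0, 0.0, 9.8), (0.5, 0.4, 0.0)))   # -> fast
```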
Continuing with Figures 2 and 3, together with Figure 1: when step S40 executes, taking the ground as the plane type to be detected, if the inertial data of the inertial sensor 10 do not exceed the threshold — that is, the inertial sensor 10 or its carrying device is in a stable state (for example, at rest) — the inertial sensor 10 reads only the static acceleration g (the gravity force direction), whose opposite direction is the normal vector n of the physical object's plane equation in camera coordinates. The relations are as follows:
(1) static acceleration value of the inertial sensor 10: |g| = 9.8 m/s² (or approximately 10 m/s²);
(2) normal vector of the plane equation in camera coordinates: n = −g;
(3) accordingly, the normal vector ñ of the physical object (the ground) in the depth image, expressed in image coordinates, may be written as ñ = K⁻ᵀ·n, where K is the intrinsic parameter matrix.
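The idea that the stable-state ground normal is the direction opposite gravity can be illustrated with a minimal sketch; the accelerometer reading and the camera-frame gravity direction used here are hypothetical examples:

```python
import math

# In the stable state the accelerometer reads only gravity g, so the
# ground normal in camera coordinates is the unit vector opposite g.
def ground_normal_from_gravity(g_reading):
    """Normalize -g to obtain the ground plane's camera-frame normal."""
    norm = math.sqrt(sum(c * c for c in g_reading))
    return tuple(-c / norm for c in g_reading)

# Camera held upright, gravity along -Y of the camera frame (assumption):
n = ground_normal_from_gravity((0.0, -9.8, 0.0))
```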
Continuing with Figures 2 and 3, together with Figure 1: when step S50 executes, taking the ground as the plane type to be detected, the readings of the accelerometer 101 can no longer estimate the normal vector of the plane equation while the inertial sensor 10 is in violent or rapid motion. Step S50 may therefore use, for example, a filter-based or optimization-based VIO algorithm to update the plane equation of the physical object (the ground). Suppose the relative pose of the depth camera 20 estimated by VIO is [R | t], and the plane equation before the update is nᵀX + d = 0; the subsequent plane equation is then updated by the following relations, which are given as an example only and are not limiting: n′ = R·n and d′ = d − n′ᵀ·t.
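A hedged numeric sketch of this idea: if the VIO pose maps old camera coordinates X to new ones via X′ = R·X + t, a plane nᵀX + d = 0 transforms with n′ = R·n and d′ = d − n′ᵀ·t (standard rigid-motion geometry). The rotation and translation below are hypothetical:

```python
import math

def transform_plane(n, d, R, t):
    """Transform a plane n.X + d = 0 under the rigid motion X' = R X + t."""
    n_new = tuple(sum(R[i][j] * n[j] for j in range(3)) for i in range(3))
    d_new = d - sum(n_new[i] * t[i] for i in range(3))
    return n_new, d_new

# Illustrative pose: 90-degree rotation about Z plus a small translation.
theta = math.pi / 2
Rz = [[math.cos(theta), -math.sin(theta), 0.0],
      [math.sin(theta),  math.cos(theta), 0.0],
      [0.0, 0.0, 1.0]]
n_new, d_new = transform_plane((1.0, 0.0, 0.0), -2.0, Rz, (0.0, 0.5, 0.0))
```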
In addition, continuing with Figures 2 and 3 together with Figure 1: in a preferred embodiment, if the system targets the ground as the plane type to be detected, then even when the motion-state judgment unit 301 judges in step S40 that the inertial sensor 10 is stable, the inertial sensor 10 or its carrying device may not be completely static, and the physical object (the ground itself) may be somewhat tilted. Therefore, during step S40 the computing device 30 may further apply an iterative optimization algorithm or a Gauss-Newton algorithm (for example, Gauss-Newton least squares) to the normal vector, to obtain an optimal normal vector n* and its corresponding distance constant (d value), and compute the plane equation with n* in place of the original normal vector. More specifically, the plane detection unit 302 of the computing device 30 may compute n* as follows, which is given as an example only and is not limiting:
(1) First, pixels whose depth value exceeds a certain value Z_max are excluded; then, with the aforementioned normal vector (here provisionally called ñ, corresponding to the image coordinate system) and the n remaining depth-image coordinates p̃ᵢ, the corresponding d values are computed by the relation dᵢ = −ñᵀ·p̃ᵢ.
(2) Next, the d value of the physical object (the ground) is assumed to be the smallest among all physical objects (other planes) in the depth image whose normal vector is ñ, because the ground should be the plane farthest from the depth camera 20; the d value corresponding to the plane farthest from the depth camera 20 is therefore obtained as d = minᵢ dᵢ.
(3) Thereafter, the plane detection unit 302 further applies an iterative optimization algorithm or a Gauss-Newton algorithm to the normal vector, to find the optimal normal vector n* that minimizes an error function (also called an evaluation function). Beforehand, an error function E(n) and a threshold ε must be defined, for example E(n) = Σᵢ |nᵀ·p̃ᵢ + d|, keeping only points whose residual is below ε.
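The refinement step can be illustrated with a simple sketch that scores candidate normals by a sum-of-absolute-residuals error function. This coordinate-wise perturbation search is a stand-in for the Gauss-Newton iteration the text names, and the sample points, starting normal, and step size are all hypothetical:

```python
import math

def plane_error(n, d, points):
    """Sum of absolute point-to-plane residuals |n.P + d|."""
    return sum(abs(sum(ni * pi for ni, pi in zip(n, p)) + d) for p in points)

def best_d(n, points):
    """Least-squares d for a fixed normal n (mean of -n.P over the points)."""
    ds = [-sum(ni * pi for ni, pi in zip(n, p)) for p in points]
    return sum(ds) / len(ds)

def refine_normal(n0, points, step=0.05, iters=20):
    """Greedy perturbation search: try unit-normalized axis perturbations
    of the current normal and keep whichever minimizes the error."""
    best = n0
    for _ in range(iters):
        candidates = [best]
        for axis in range(3):
            for sign in (-1.0, 1.0):
                c = list(best)
                c[axis] += sign * step
                norm = math.sqrt(sum(x * x for x in c))
                candidates.append(tuple(x / norm for x in c))
        best = min(candidates, key=lambda n: plane_error(n, best_d(n, points), points))
    return best
```

Starting from a slightly wrong gravity-derived normal, the search converges toward the true plane normal of the sample points.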
Continuing with Figures 2 and 3, together with Figure 1: taking the ground as the plane type to be detected, the plane detection unit 302 of the computing device 30 may compute the aforementioned normal vector as follows, which is given as an example only and is not limiting:
A. Assume there are N pixels in the depth image belonging to the ground.
B. Assume a pixel in the depth image has coordinates (uᵢ, vᵢ) with depth value Zᵢ; its three-dimensional coordinate in the image coordinate system is then p̃ᵢ = (uᵢZᵢ, vᵢZᵢ, Zᵢ).
C. The i-th point has the same Z value in the three-dimensional coordinates of the two coordinate systems; the conversion between the camera coordinate system and the image coordinate system is Pᵢ = K⁻¹·p̃ᵢ, where Pᵢ = (xᵢ, yᵢ, Zᵢ).
D. The three-dimensional coordinates in the camera and image coordinate systems are thus related through the intrinsic parameter matrix K of the depth camera 20; expanding the above formula, the x and y values of the i-th point's depth-image coordinate in the image coordinate system satisfy uᵢZᵢ = fₓ·xᵢ + cₓ·Zᵢ and vᵢZᵢ = f_y·yᵢ + c_y·Zᵢ.
E. By the definition of a plane equation, assuming the i-th point lies on the physical object's plane, the plane equation computed with Pᵢ in the camera coordinate system is a·xᵢ + b·yᵢ + c·Zᵢ + d = 0.
F. Hence the normal vector in the camera coordinate system is n = (a, b, c).
G. By the definition of a plane equation, assuming the i-th point lies on the physical object's plane, the plane equation computed with p̃ᵢ in the image coordinate system is ã·uᵢZᵢ + b̃·vᵢZᵢ + c̃·Zᵢ + d̃ = 0.
H. Hence the normal vector in the image coordinate system is ñ = (ã, b̃, c̃).
I. Next, to compute the normal vector of the physical object (plane) in the camera coordinate system: if two points p̃ᵢ and p̃ⱼ both lie on the plane, they satisfy the plane equation of point G, giving ã·uᵢZᵢ + b̃·vᵢZᵢ + c̃·Zᵢ + d̃ = 0 and ã·uⱼZⱼ + b̃·vⱼZⱼ + c̃·Zⱼ + d̃ = 0.
J. Subtracting the two plane equations gives ã·(uᵢZᵢ − uⱼZⱼ) + b̃·(vᵢZᵢ − vⱼZⱼ) + c̃·(Zᵢ − Zⱼ) = 0.
K. Substituting the x, y relations of point D into the equation of point J gives ã·fₓ·(xᵢ − xⱼ) + b̃·f_y·(yᵢ − yⱼ) + (ã·cₓ + b̃·c_y + c̃)·(Zᵢ − Zⱼ) = 0.
L. Therefore, the normal vector of the plane equation of the physical object in the camera coordinate system is n = (ã·fₓ, b̃·f_y, ã·cₓ + b̃·c_y + c̃) = Kᵀ·ñ.
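Under the pinhole relation p̃ = K·P used in this description, a plane normal expressed over image coordinates maps to the camera frame as n_cam = Kᵀ·n_img, and the point-to-plane residual is preserved. The following sketch checks this numerically with hypothetical intrinsics and a hypothetical test point:

```python
# Hypothetical pinhole intrinsics for the check.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0

def normal_image_to_camera(n_img):
    """Apply K^T = [[fx,0,0],[0,fy,0],[cx,cy,1]] to an image-frame normal."""
    a, b, c = n_img
    return (fx * a, fy * b, cx * a + cy * b + c)

def residual(n, d, p):
    """Signed point-to-plane residual n.p + d."""
    return sum(ni * pi for ni, pi in zip(n, p)) + d

# One camera-frame point P and its image-frame counterpart p = K*P.
P = (1.0, 2.0, 5.0)
p_img = (fx * P[0] + cx * P[2], fy * P[1] + cy * P[2], P[2])
n_img = (1.0, -1.0, 3.0)
n_cam = normal_image_to_camera(n_img)
print(residual(n_cam, 0.5, P) == residual(n_img, 0.5, p_img))  # -> True
```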
Continuing with Figures 2 and 3, together with Figure 1: once the computing device 30 has computed the normal vector n of the plane equation of the physical object in the camera coordinate system, the d value may be computed as follows, which is given as an example only and is not limiting:
M. First, let a constant c = ã·cₓ + b̃·c_y + c̃ (the third component of the normal vector above).
N. Substituting a pixel point p̃ᵢ = (uᵢZᵢ, vᵢZᵢ, Zᵢ) in the image coordinate system into the plane equation of point G gives ã·uᵢZᵢ + b̃·vᵢZᵢ + c̃·Zᵢ + d̃ = 0.
O. Substituting the x, y relations of point D (uᵢZᵢ = fₓ·xᵢ + cₓ·Zᵢ, vᵢZᵢ = f_y·yᵢ + c_y·Zᵢ) into the plane equation of point N gives ã·fₓ·xᵢ + b̃·f_y·yᵢ + (ã·cₓ + b̃·c_y + c̃)·Zᵢ + d̃ = 0.
P. Dividing both sides of the plane equation of point O by c gives (ã·fₓ/c)·xᵢ + (b̃·f_y/c)·yᵢ + Zᵢ + d̃/c = 0.
Q. Hence the d value of the physical object's plane equation in camera coordinates is d = d̃/c.
In addition, continuing with Figures 2 and 3 together with Figure 1: in a preferred embodiment, before step S30 executes, a step of obtaining three-dimensional coordinates (step S25) may first be executed, in which the computing device 30 performs an inner-product operation on the physical object's depth-image coordinates and depth values, to continuously generate three-dimensional coordinates of the physical object in an image coordinate system. Accordingly, when step S40 or S50 executes, the aforementioned normal vector and distance constant, and in turn the plane equation of the physical object, are computed from these three-dimensional coordinates, the intrinsic parameter matrix, and the acceleration information. More specifically, the three-dimensional coordinates may be generated as p̃ = (u·Z, v·Z, Z), where p̃ is the three-dimensional coordinate in the image coordinate system, Z is the depth value, and (u, v) is the depth-image coordinate (in the image coordinate system). Compared with the conventional Point Cloud Library (PCL) practice of multiplying every pixel acquired by the depth camera 20, in turn, by an inverse camera projection matrix (the aforementioned K⁻¹) and a depth value to convert it into multiple three-dimensional coordinates in a point-cloud coordinate system, this embodiment omits the per-pixel matrix multiplication of pixels, depth values, and the inverse camera projection matrix, and detects the physical object (plane) directly from these three-dimensional coordinates, thereby saving computation and eliminating the conversion time from depth image to point cloud.
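The saving claimed here can be illustrated on one pixel: forming the image-frame point (u·Z, v·Z, Z) with elementwise products only, and testing it against a normal pre-mapped once by K⁻ᵀ, gives the same plane residual as per-pixel back-projection with K⁻¹. All numeric values are hypothetical:

```python
# Hypothetical pinhole intrinsics.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0

def point_image_frame(u, v, Z):
    # step S25's idea: elementwise products only, no per-pixel matrix math
    return (u * Z, v * Z, Z)

def point_camera_frame(u, v, Z):
    # conventional per-pixel back-projection P = Z * K^-1 * (u, v, 1)
    return ((u - cx) * Z / fx, (v - cy) * Z / fy, Z)

def n_cam_to_img(n):
    # K^-T maps a camera-frame normal to the image frame (done once, not per pixel)
    a, b, c = n
    return (a / fx, b / fy, -a * cx / fx - b * cy / fy + c)

n_cam = (1.0, 2.0, 3.0)
n_img = n_cam_to_img(n_cam)
u, v, Z = 400.0, 300.0, 2.0
s_cam = sum(a * b for a, b in zip(n_cam, point_camera_frame(u, v, Z)))
s_img = sum(a * b for a, b in zip(n_img, point_image_frame(u, v, Z)))
print(abs(s_cam - s_img) < 1e-9)  # -> True
```

The single K⁻ᵀ mapping of the normal replaces one K⁻¹ multiplication per pixel, which is where the claimed saving comes from.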
Please refer to Figure 4, the system architecture diagram of another preferred embodiment of the present invention. This embodiment is similar to the technology disclosed in Figures 1 to 3; the main difference is that the planar dynamic detection system 1 of this embodiment may further include a color camera 40 (for example, an RGB camera) coupled to the depth camera 20 and the computing device 30, for continuously capturing a color image of the physical object, so that when step S10 (capturing an image) executes, the computing device 30 can establish the correspondence between the physical object's depth-image coordinates and color-image coordinates, improving the accuracy of plane detection. The color camera 40 of this embodiment may also be combined with the depth camera 20 into an RGB-D camera, as shown in this figure, and the depth camera 20 of this embodiment may be a stereo camera, but neither is limiting.
In summary, after implementation the present invention solves the problem that conventional detection of planes in three-dimensional space must make strong assumptions for different plane types and may misjudge planes, while also improving on the poor computational performance of conventional plane detection methods, thereby achieving more accurate plane detection with fewer computing resources.
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit the scope of its implementation; all equivalent changes and modifications made by those skilled in the art without departing from the spirit and scope of the present invention shall fall within the patent scope of the present invention.
In summary, the present invention possesses the patentability requirements of industrial applicability, novelty, and inventive step; the applicant hereby files an application for a utility model patent with the Bureau in accordance with the Patent Act.
1: planar dynamic detection system; 10: inertial sensor; 101: accelerometer; 102: gyroscope; 20: depth camera; 30: computing device; 301: motion-state judgment unit; 302: plane detection unit; 40: color camera; S: planar dynamic detection method; S10: capturing an image; S20: detecting inertial data; S25: obtaining three-dimensional coordinates; S30: judging the motion state; S40: first plane-equation update; S50: second plane-equation update
Figure 1 is the system architecture diagram of the present invention. Figure 2 is flowchart (1) of the plane detection method of the present invention. Figure 3 is flowchart (2) of the plane detection method of the present invention. Figure 4 is the system architecture diagram of another preferred embodiment of the present invention.
1: planar dynamic detection system
10: inertial sensor
101: accelerometer
102: gyroscope
20: depth camera
30: computing device
301: motion-state judgment unit
302: plane detection unit
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW108214345U TWM594152U (en) | 2019-10-31 | 2019-10-31 | Planar dynamic detection system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW108214345U TWM594152U (en) | 2019-10-31 | 2019-10-31 | Planar dynamic detection system |
Publications (1)
Publication Number | Publication Date |
---|---|
TWM594152U true TWM594152U (en) | 2020-04-21 |
Family
ID=71133763
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW108214345U TWM594152U (en) | 2019-10-31 | 2019-10-31 | Planar dynamic detection system |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWM594152U (en) |
- 2019-10-31: TW application TW108214345U filed; patent TWM594152U/en; status unknown
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210190497A1 (en) | Simultaneous location and mapping (slam) using dual event cameras | |
JP6198230B2 (en) | Head posture tracking using depth camera | |
Orghidan et al. | Camera calibration using two or three vanishing points | |
Takimoto et al. | 3D reconstruction and multiple point cloud registration using a low precision RGB-D sensor | |
CN111951326B (en) | Target object skeleton key point positioning method and device based on multiple camera devices | |
US20140300736A1 (en) | Multi-sensor camera recalibration | |
EP2671384A2 (en) | Mobile camera localization using depth maps | |
JP2013232195A5 (en) | ||
Hansen et al. | Online continuous stereo extrinsic parameter estimation | |
JP2023502192A (en) | Visual positioning method and related apparatus, equipment and computer readable storage medium | |
US20130147785A1 (en) | Three-dimensional texture reprojection | |
Mühlenbrock et al. | Fast, accurate and robust registration of multiple depth sensors without need for RGB and IR images | |
TWI730482B (en) | Plane dynamic detection system and detection method | |
Thomas et al. | A monocular SLAM method for satellite proximity operations | |
KR20240015464A (en) | Line-feature-based SLAM system using vanishing points | |
TWM594152U (en) | Planar dynamic detection system | |
CN112750205B (en) | Plane dynamic detection system and detection method | |
Lee et al. | Gyroscope-aided relative pose estimation for rolling shutter cameras | |
JP5464671B2 (en) | Image processing apparatus, image processing method, and image processing program | |
US20240338879A1 (en) | Methods, storage media, and systems for selecting a pair of consistent real-world camera poses | |
Boas et al. | Relative Pose Improvement of Sphere based RGB-D Calibration. | |
JP7255709B2 (en) | Estimation method, estimation device and program | |
CN117315018B (en) | User plane pose detection method, equipment and medium based on improved PnP | |
US20230326074A1 (en) | Using cloud computing to improve accuracy of pose tracking | |
WO2023141491A1 (en) | Sensor calibration system |