TWI730482B - Plane dynamic detection system and detection method - Google Patents


Info

Publication number
TWI730482B
TWI730482B TW108139370A
Authority
TW
Taiwan
Prior art keywords
plane
depth
continuously
inertial sensor
camera
Prior art date
Application number
TW108139370A
Other languages
Chinese (zh)
Other versions
TW202119359A (en)
Inventor
蕭淳澤
Original Assignee
大陸商南京深視光點科技有限公司
Priority date
Filing date
Publication date
Application filed by 大陸商南京深視光點科技有限公司
Priority to TW108139370A
Publication of TW202119359A
Application granted
Publication of TWI730482B

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention discloses a plane dynamic detection system and detection method. An inertial sensor continuously acquires inertial data, and a depth camera continuously acquires a depth image of a physical object (for example, a plane or the ground) within its viewing range. A computing device is configured to continuously determine whether the acceleration and angular velocity acquired by the inertial sensor exceed a threshold, so as to determine the motion state of the inertial sensor itself or of the device carrying it. From the acceleration, the depth image coordinates, the depth values, and the intrinsic parameter matrix, the computing device initializes or continuously updates a plane equation of the physical object in the camera coordinate system while the inertial sensor is in a stable state; it can also obtain the pose of the depth camera through a VIO algorithm in order to continuously correct the plane equation while the inertial sensor is moving rapidly.

Description

Plane dynamic detection system and detection method

The present invention relates to computer vision technology, and in particular to a plane dynamic detection system and detection method that refers to depth images, color images, and inertial data in order to accurately detect a plane and dynamically update the relative position of the plane in three-dimensional space.

To provide more realistic interaction in applications that require 3D information (for example, AR/VR services), detecting real-world planes is essential. If the goal is to detect the plane belonging to the ground, the ground may be detected as follows: (a) assume the ground is the largest plane, and use the RANSAC (Random Sample Consensus) algorithm or the Hough Transform algorithm to find the largest plane in three-dimensional space and define it as the ground; or (b) assume the ground has the largest Z value on each scan line of the image and, after correcting the camera posture (roll rotation), define the ground as the set of pixels whose Z values are largest in the image and which fit a curve C.

In many cases, however, the largest plane assumed by method (a) is not the ground (for example, the largest plane in the image may be a corridor wall), so the RANSAC or Hough Transform algorithm may return a wrong plane. Moreover, the RANSAC algorithm carries the limitation that inliers must account for at least 50% of the data, and the Hough Transform algorithm is quite time-consuming. Method (b) can likewise select a pixel set with the largest Z values fitting curve C that is not actually the ground.
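As an illustration of approach (a), a minimal RANSAC plane fit over a 3D point set can be sketched as follows. This is a generic sketch, not the patented method; the point data, iteration count, and inlier tolerance are hypothetical:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Fit a plane n.p + d = 0 to 3D points with RANSAC.

    Returns (n, d, inlier_mask); n is a unit normal.
    """
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p1
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Synthetic test: 100 points near the plane z = 0 plus 10 far outliers
np.random.seed(1)
pts = np.concatenate([
    np.column_stack([np.random.rand(100, 2), 0.001 * np.random.randn(100)]),
    np.random.rand(10, 3) + 2.0,
])
n, d, mask = ransac_plane(pts)
```

Note that if the outlier structure (for example, a large wall) dominated the scene, the consensus step would lock onto it instead of the ground, which is exactly the failure mode the passage above describes.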

Furthermore, regardless of the method used to detect planes in an image, after a depth sensor (such as a depth camera) captures a depth image, the conventional practice of the Point Cloud Library (PCL) is to multiply every pixel, in turn, by an inverse camera projection matrix and a depth value, converting it into a three-dimensional coordinate in the point cloud coordinate system, as shown in the following relation:

(X, Y, Z)^T = z * K^(-1) * (u, v, 1)^T

where (X, Y, Z) is the three-dimensional coordinate in the point cloud coordinate system, z is the depth value, K^(-1) is the inverse camera projection matrix, K is usually an intrinsic parameter matrix (an intrinsic parameter is an inherent property of the depth sensor, mainly describing the conversion between camera coordinates and image coordinates), and (u, v) is the image coordinate of each pixel of the depth image (in the image coordinate system). The resulting set of three-dimensional feature points is then presented as a point cloud, and a method such as (a) or (b) above is applied to detect planes in the point cloud image. However, performing a matrix multiplication for every pixel involves a very large amount of computation and yields poor computational performance.
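The per-pixel back-projection described above can be sketched in vectorized form as follows. This is a generic sketch; the intrinsic parameter values below are hypothetical:

```python
import numpy as np

# Hypothetical pinhole intrinsics: fx, fy focal lengths; (cx, cy) principal point
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def depth_to_point_cloud(depth, K):
    """Back-project a depth image (in meters) into camera-frame 3D points.

    Implements P = z * K^(-1) (u, v, 1)^T for every pixel at once.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # (u, v, 1)
    rays = pix @ np.linalg.inv(K).T          # K^(-1) (u, v, 1)^T per pixel
    return rays * depth.reshape(-1, 1)       # scale each ray by its depth z

depth = np.full((480, 640), 2.0)             # a flat wall 2 m in front of the camera
cloud = depth_to_point_cloud(depth, K)
```

Even vectorized, this touches every pixel, which illustrates the computational cost the passage above objects to.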

In summary, conventional methods for detecting planes in three-dimensional space require strong prior assumptions for each plane type (for example, ground or wall), which can lead to misclassifying the plane type, and they also suffer from poor computational performance. Accordingly, how to propose a plane detection system and detection method that detects planes more accurately while consuming fewer computing resources is a problem awaiting solution.

To achieve the above objective, the present invention proposes a plane dynamic detection system comprising an inertial sensor, a depth camera, and a computing device. The inertial sensor includes an accelerometer and a gyroscope. The depth camera continuously captures a depth image, thereby continuously providing a depth image coordinate and a depth value for one or more physical objects within its viewing range. The computing device is coupled to the inertial sensor and to the depth camera, and has a motion state determination unit and a plane detection unit. The motion state determination unit continuously determines whether the acceleration information and the angular velocity information acquired by the inertial sensor exceed a threshold. If the threshold is not exceeded, the plane detection unit calculates a normal vector and a distance constant from the acceleration information, the depth image coordinates, the depth values, and an intrinsic parameter matrix, and uses the normal vector and distance constant to initialize or continuously update a plane equation of the physical object in the camera coordinate system while the inertial sensor is in a stable state. Conversely, if the threshold is exceeded, the plane detection unit executes a visual-inertial odometry algorithm based on the gravitational acceleration contained in the acceleration information to obtain the pose of the depth camera, and uses the rotation matrix and the translation of that pose to continuously correct the plane equation while the inertial sensor is moving rapidly. The meaning of the plane equation is that any point on a plane, together with the normal perpendicular to that plane, uniquely defines the plane in three-dimensional space.

To achieve the above objective, the present invention also proposes a plane dynamic detection method comprising: (1) a step of detecting inertial data: an inertial sensor continuously acquires inertial data such as acceleration information and angular velocity information; (2) a step of determining the motion state: a computing device continuously determines whether the acceleration information and the angular velocity information acquired by the inertial sensor exceed a threshold, so as to determine the motion state of the inertial sensor; (3) a first plane equation updating step: if the threshold is not exceeded, the computing device calculates a normal vector and a distance constant from the acceleration information, the depth image coordinates, the depth values, and an intrinsic parameter matrix, and uses the normal vector and distance constant to initialize or continuously update a plane equation of the physical object in the camera coordinate system while the inertial sensor is in a stable state; and (4) a second plane equation updating step: if the threshold is exceeded, the computing device executes a visual-inertial odometry algorithm based on the gravitational acceleration contained in the acceleration information to obtain the pose of the depth camera, and uses the rotation matrix and the translation of that pose to continuously correct the plane equation while the inertial sensor is moving rapidly.
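The branching logic of steps (2) through (4) can be sketched as follows. This is a minimal sketch: the threshold values are hypothetical, and in the stable branch the distance constant is simply carried over from the previous estimate rather than recomputed from depth data as the full method does:

```python
import numpy as np

ACC_THRESHOLD = 0.5    # m/s^2 deviation from gravity; hypothetical value
GYRO_THRESHOLD = 0.2   # rad/s; hypothetical value
G = 9.8                # static gravitational acceleration

def is_stable(acc, gyro):
    """Step (2): compare the IMU readings against the thresholds."""
    return (abs(np.linalg.norm(acc) - G) < ACC_THRESHOLD and
            np.linalg.norm(gyro) < GYRO_THRESHOLD)

def update_plane(plane, acc, gyro, pose=None):
    """Dispatch to the first (stable) or second (fast-moving) update step."""
    if is_stable(acc, gyro):
        # Step (3): the normal opposes the measured gravity in camera coordinates
        n = -acc / np.linalg.norm(acc)
        return (n, plane[1])               # keep the previous distance constant
    # Step (4): transform the plane by the VIO relative pose (R, t)
    R, t = pose
    n = R @ plane[0]
    d = plane[1] - n @ t
    return (n, d)

plane = (np.array([0.0, -1.0, 0.0]), 1.5)   # (normal n, distance constant d)
plane = update_plane(plane, np.array([0.0, 9.8, 0.0]), np.zeros(3))
```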

In order that the examiner may clearly understand the purpose, technical features, and post-implementation effects of the present invention, the following description is provided together with the drawings; please refer to them.

Please refer to Fig. 1, the system architecture diagram of the present invention. The present invention proposes a plane dynamic detection system 1, mainly comprising an inertial sensor 10, a depth camera 20, and a computing device 30, in which: (1) the inertial measurement unit (IMU) 10 includes an accelerometer (G-sensor) 101 and a gyroscope 102, and continuously acquires acceleration information and angular velocity information; (2) the depth camera 20 continuously captures a depth image, thereby continuously providing a depth image coordinate and a depth value for one or more physical objects within its viewing range, and may be configured as a depth sensor that measures the depth of the physical objects using a time-of-flight (ToF) scheme, a structured light scheme, or a stereo vision scheme. In the time-of-flight scheme, the depth camera 20 acts as a ToF camera that emits infrared light from a light-emitting diode (LED) or laser diode (LD); because the speed of light is known, once the light striking the object surface is reflected back, an infrared image sensor can measure the time taken for the light to return from positions at different depths, from which the depth of the physical object at different positions, and hence its depth image, can be computed. In the structured light scheme, the depth camera 20 uses a laser diode (LD) or digital light processor (DLP) to project light patterns that are diffracted through a specific grating onto the object surface, forming a speckle pattern; because the pattern reflected from positions at different depths is distorted, the three-dimensional structure of the physical object and its depth image can be inferred once the reflected light enters the infrared image sensor. In the stereo vision scheme, the depth camera 20 acts as a stereo camera that photographs the physical object with at least two camera lenses and measures the object's three-dimensional information (depth image) from the resulting disparity through the principle of triangulation. (3) The computing device 30 is coupled to the inertial sensor 10 and the depth camera 20, and has a motion state determination unit 301 and a plane detection unit 302 in communication with each other. The motion state determination unit 301 is configured to continuously determine whether the acceleration information and angular velocity information acquired by the inertial sensor 10 exceed a threshold, so as to determine the motion state of the inertial sensor 10 itself or of the device carrying it. Notably, the computing device 30 has at least one processor (not shown; for example, a CPU or MCU) that runs the computing device 30 and provides logical operations, temporary storage of results, and storage of instruction positions; moreover, the motion state determination unit 301 and the plane detection unit 302 may run on a computing device 30 that is a plane dynamic device (not shown; for example, a head-mounted display, where the head-mounted display may be a VR headset, an MR headset, or the like), a host, a physical server, or a virtualized server (VM), but are not limited thereto. (4) If the threshold is not currently exceeded, the plane detection unit 302 is configured to calculate a normal vector and a distance constant (d value) from the acceleration information, the depth image coordinates (pixel domain), the depth values, and an intrinsic parameter matrix, and to use the normal vector and distance constant (which lie in the image coordinate system) to initialize or continuously update a 3D plane equation of the physical object in the camera coordinate system while the inertial sensor 10 is stable; the meaning of the plane equation is that any point on a plane, together with the normal perpendicular to that plane, uniquely defines the plane in three-dimensional space. (5) Conversely, if the threshold is currently exceeded, the plane detection unit 302 is configured to execute a filter-based or optimization-based visual-inertial odometry (VIO) algorithm based on the gravitational acceleration in the acceleration information to obtain the pose of the depth camera 20, and to use the rotation (orientation) matrix and translation of that pose to continuously correct the plane equation while the inertial sensor 10 is moving rapidly. (6) The image coordinates mentioned above are introduced to describe the projection of the physical object from the camera coordinate system to the image coordinate system during imaging; they form the coordinate system, with pixels as its unit, of the image actually read from the depth camera 20. The camera coordinates mentioned above form the coordinate system whose origin is the depth camera 20, defined to describe object positions from the camera's point of view.

Please continue to refer to Fig. 1. In a preferred embodiment of the present invention, the plane detection unit 302 of the computing device 30 may also perform an inner product operation on the depth image coordinates and depth values of the physical object to continuously generate three-dimensional coordinates of the physical object in an image coordinate system, and calculate the plane equation from those three-dimensional coordinates and the intrinsic parameter matrix.

Please continue to refer to Fig. 1. In a preferred embodiment of the present invention, the plane detection unit 302 of the computing device 30 may also apply an iterative optimization algorithm or a Gauss-Newton algorithm to the aforementioned normal vector to obtain an optimal normal vector and its corresponding distance constant (d value), and replace the aforementioned normal vector with the optimal normal vector to calculate a more accurate plane equation.

Please refer to Fig. 2 and Fig. 3, which are flowcharts (1) and (2) of the plane dynamic detection method of the present invention, together with Fig. 1. The present invention proposes a plane dynamic detection method S that may include the following steps: (1) an image capturing step (step S10): a depth camera 20 continuously captures a depth image, thereby continuously providing a depth image coordinate and a depth value for one or more physical objects within its viewing range; (2) a step of detecting inertial data (step S20): an inertial sensor 10 continuously acquires inertial data such as acceleration information and angular velocity information; (3) a step of determining the motion state (step S30): a computing device 30 continuously determines whether the acceleration information and angular velocity information acquired by the inertial sensor 10 exceed a threshold, so as to determine the motion state of the inertial sensor 10 itself or of the device carrying it; (4) a first plane equation updating step (step S40): following step S30, if the threshold is not exceeded, the computing device 30 calculates a normal vector and a distance constant (corresponding to the image coordinate system) from the acceleration information, the depth image coordinates, the depth values, and an intrinsic parameter matrix, and uses the normal vector and distance constant to initialize or continuously update a plane equation of the physical object in the camera coordinate system while the inertial sensor 10 is stable; (5) a second plane equation updating step (step S50): following step S30, if the threshold is exceeded, the computing device 30 executes a visual-inertial odometry algorithm based on the gravitational acceleration in the acceleration information to obtain the pose of the depth camera 20, and uses the rotation matrix and translation of that pose to continuously correct the plane equation while the inertial sensor 10 is moving rapidly.

Continuing, please refer to Fig. 2 and Fig. 3 together with Fig. 1. When step S40 is executed, taking the ground as the plane type to be detected, and the inertial data of the inertial sensor 10 do not exceed the threshold, that is, the inertial sensor 10 itself or the device carrying it is in a stable state (for example, at rest), the inertial sensor 10 reads only the static acceleration g (the gravity force direction), and the opposite of that direction is the normal vector n of the plane equation of the physical object in camera coordinates. The relations may be expressed as follows: (1) static acceleration of the inertial sensor 10: ||g|| = 9.8 m/s^2 (or approximately 10 m/s^2); (2) normal vector of the plane equation in camera coordinates: n = -g / ||g||; (3) accordingly, the normal vector ñ of the physical object (the ground) in the depth image, expressed in image coordinates, is obtained from n through the intrinsic parameter matrix K as ñ = K^(-T) n, following the conversion between the camera and image coordinate systems derived below.

Continuing, please refer to Fig. 2 and Fig. 3 together with Fig. 1. When step S50 is executed, taking the ground as the plane type to be detected: when the inertial sensor 10 is moving violently or rapidly, the normal vector of the plane equation can no longer be estimated from the readings of the accelerometer 101, so step S50 may use, for example, a filter-based or optimization-based VIO algorithm to update the plane equation of the physical object (the ground). Suppose the relative pose motion of the depth camera 20 estimated by VIO is (R, t), where R is the rotation matrix and t the translation, and suppose the plane equation before the update is n^T p + d = 0. The plane equation may then be updated by the following relations, which are given only as an example and are not limiting:

n' = R n
d' = d - n'^T t
n'^T p' + d' = 0, for points p' expressed in the new camera frame.
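The pose-based update above can be sketched numerically as follows. This is a minimal sketch of the standard rigid-transform rule for planes; the pose and plane values are hypothetical:

```python
import numpy as np

def update_plane_with_pose(n, d, R, t):
    """Transform the plane n.p + d = 0 into the new camera frame,
    given the relative pose p' = R p + t estimated by VIO."""
    n_new = R @ n
    d_new = d - n_new @ t
    return n_new, d_new

# Hypothetical pose: 90-degree roll about the camera's z axis, small translation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.0, 0.1, 0.0])

n, d = update_plane_with_pose(np.array([0.0, -1.0, 0.0]), 1.5, R, t)

# A point on the old plane stays on the updated plane after the transform
p_old = np.array([0.3, 1.5, 2.0])        # satisfies n.p + d = 0 in the old frame
p_new = R @ p_old + t
```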

Please continue to refer to Fig. 2 and Fig. 3 together with Fig. 1. In a preferred embodiment of the present invention, when the system targets the ground as the plane type to be detected: even if, during step S40, the motion state determination unit 301 determines that the inertial sensor 10 is in a stable state, the inertial sensor 10 itself or the device carrying it may not be completely static, and the physical object (the ground itself) may also be somewhat inclined. Therefore, during step S40, the computing device 30 may further apply an iterative optimization algorithm or a Gauss-Newton algorithm (for example, Gauss-Newton least squares) to the normal vector to obtain an optimal normal vector n* and its corresponding distance constant d*, and replace the normal vector ñ with n* when calculating the plane equation. More specifically, the plane detection unit 302 of the computing device 30 may calculate the optimal normal vector n* as follows, which is given only as an example and is not limiting:

(1) First, pixels whose depth values exceed a certain threshold are excluded. Then, using the aforementioned normal vector ñ (which corresponds to the image coordinate system) and the n depth image coordinates that remain after the exclusion, the corresponding d values are calculated, one per remaining image-coordinate point p_i: d_i = -ñ^T p_i.

(2) Next, the d value of the physical object (the ground) is assumed to be the smallest among all physical objects (other planes) in the depth image whose normal vector is ñ, because the ground should be the plane farthest from the depth camera 20; the d value corresponding to the plane farthest from the depth camera 20 is therefore selected from the d_i values computed above.

(3) Thereafter, the plane detection unit 302 applies an iterative optimization algorithm or a Gauss-Newton algorithm to the normal vector to obtain the optimal normal vector n* that minimizes an error function (also called an evaluation function). Beforehand, an error function E(n) and a threshold ε must be defined; for example, E(n) may sum the squared point-to-plane residuals of the remaining points, E(n) = Σ_i (n^T p_i + d)^2, with the iteration terminating once E falls below ε.
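A least-squares refinement of this kind can be sketched as follows. This is a generic sketch of plane refinement by Gauss-Newton on squared point-to-plane residuals, not the exact patented procedure; the point data (a slightly tilted synthetic ground) are hypothetical:

```python
import numpy as np

def refine_plane(points, n0, d0, iters=10):
    """Refine (n, d) by Gauss-Newton on the residuals r_i = n.p_i + d.

    n is renormalized after every step so it stays a unit normal.
    """
    n, d = n0 / np.linalg.norm(n0), d0
    for _ in range(iters):
        r = points @ n + d                          # point-to-plane residuals
        J = np.hstack([points, np.ones((len(points), 1))])  # d r / d (n, d)
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        n = n + delta[:3]
        d = d + delta[3]
        scale = np.linalg.norm(n)
        n, d = n / scale, d / scale                 # keep n.p + d = 0 normalized
    return n, d

# Hypothetical slightly tilted ground: z = 0.05*x + 1.0
xy = np.random.rand(200, 2) * 4 - 2
pts = np.column_stack([xy[:, 0], xy[:, 1], 0.05 * xy[:, 0] + 1.0])
n, d = refine_plane(pts, np.array([0.0, 0.0, -1.0]), 1.0)
```

Because the residual here is linear in (n, d), the Gauss-Newton step converges immediately; with the robust exclusions and ground-selection rule described above, the same iteration refines the gravity-derived initial normal.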

請繼續參閱「第2圖」至「第3圖」,並請搭配參閱「第1圖」,若以欲偵測的平面類型為地面為例,則運算裝置30之平面偵測單元302計算前述法向量的演算公式可參照如下,但並不以此為限,特先陳明: A.假設於深度影像中屬於地面部分的像素有N個;

Figure 02_image057
B.假設於深度影像中的一像素點座標為(
Figure 02_image059
,則:
Figure 02_image061
Figure 02_image063
C.第i個點於前述兩個不同座標系之三維座標的Z值相同,前述兩個三維座標於相機座標系與影像座標系的轉換關係如下:
Figure 02_image065
D.所以相機座標系與影像座標系的三維影像座標,係可透過深度相機20之內部參數矩陣K相關聯,而展開上述公式可得出,第i個點於影像座標系中的深度影像座標的x、y值分別為:
Figure 02_image067
Figure 02_image069
E.依據平面方程式的定義,並假設實體物件所處的平面上有前述的第i個點,可知以處於相機座標系的
Figure 02_image071
Please continue to refer to "Figure 2" to "Figure 3", together with "Figure 1". Taking the ground as the example of the plane type to be detected, the plane detection unit 302 of the computing device 30 may compute the aforementioned normal vector as follows, although the computation is not limited to this:

A. Assume the depth image contains N pixels belonging to the ground.

B. Assume the i-th such pixel has depth image coordinate (u_i, v_i) and depth value Z_i; then its three-dimensional coordinate in the camera coordinate system is P_i^c = (X_i, Y_i, Z_i), and its three-dimensional coordinate in the image coordinate system is P_i^uv = (u_i·Z_i, v_i·Z_i, Z_i).

C. The Z value of the i-th point is the same in the two coordinate systems, and the conversion between the two three-dimensional coordinates of the camera coordinate system and the image coordinate system is

P_i^uv = K·P_i^c, where K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]].

D. Therefore, the camera coordinate system and the image coordinate system are related through the internal parameter matrix K of the depth camera 20; expanding the above formula, the x and y values of the depth image coordinate of the i-th point in the image coordinate system are

u_i = f_x·X_i/Z_i + c_x,  v_i = f_y·Y_i/Z_i + c_y.

E. By the definition of a plane equation, and assuming the i-th point lies on the plane of the physical object, the plane equation computed with the point P_i^c = (X_i, Y_i, Z_i) of the camera coordinate system is

a·X_i + b·Y_i + c·Z_i + d = 0.

F. Continuing, the normal vector in the camera coordinate system is n_c = (a, b, c).

G. Likewise, by the definition of a plane equation, the plane equation computed with the point P_i^uv = (u_i·Z_i, v_i·Z_i, Z_i) of the image coordinate system is

a'·u_i·Z_i + b'·v_i·Z_i + c'·Z_i + d' = 0.

H. Continuing, the normal vector in the image coordinate system is n_uv = (a', b', c').

I. Next, compute the normal vector of the physical object (plane) in the camera coordinate system. Assume two points P_1^uv = (u_1·Z_1, v_1·Z_1, Z_1) and P_2^uv = (u_2·Z_2, v_2·Z_2, Z_2) both lie on the plane; each then satisfies the plane equation of step G, and the substituted plane equations are

a'·u_1·Z_1 + b'·v_1·Z_1 + c'·Z_1 + d' = 0
a'·u_2·Z_2 + b'·v_2·Z_2 + c'·Z_2 + d' = 0

J. Subtracting the two plane equations yields

a'·(u_1·Z_1 − u_2·Z_2) + b'·(v_1·Z_1 − v_2·Z_2) + c'·(Z_1 − Z_2) = 0.

K. Substituting the x and y values of the depth image coordinates from step D (that is, u_i·Z_i = f_x·X_i + c_x·Z_i and v_i·Z_i = f_y·Y_i + c_y·Z_i) into the equation of step J yields

f_x·a'·(X_1 − X_2) + f_y·b'·(Y_1 − Y_2) + (c_x·a' + c_y·b' + c')·(Z_1 − Z_2) = 0.

L. Therefore, the normal vector of the plane equation of the physical object in the camera coordinate system is

n_c = K^T·n_uv = (f_x·a', f_y·b', c_x·a' + c_y·b' + c').
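The relation derived in steps A–L can be checked numerically as follows. This sketch is illustrative only and not part of the patent; the intrinsic parameters (f_x, f_y, c_x, c_y) are assumed values:

```python
import numpy as np

# Assumed depth-camera intrinsics (illustrative values only).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def normal_image_to_camera(n_uv, K):
    """Step L: map a plane normal n_uv = (a', b', c') expressed in the
    image coordinate system (u*Z, v*Z, Z) to the camera coordinate
    system via n_c = K^T n_uv."""
    return K.T @ n_uv

# Consistency check of steps E and G: a point satisfying the
# camera-frame plane equation n_c . P_c + d = 0 also satisfies the
# image-frame equation n_uv . P_uv + d = 0, where P_uv = K P_c.
n_uv = np.array([0.001, -0.004, 0.9])
d = 0.2
n_c = normal_image_to_camera(n_uv, K)

X, Y = 1.0, 1.0
Z = -(d + n_c[0] * X + n_c[1] * Y) / n_c[2]   # solve the plane for Z
P_c = np.array([X, Y, Z])
P_uv = K @ P_c                                 # (u*Z, v*Z, Z)
residual = n_uv @ P_uv + d                     # should vanish
```

Since n_uv·(K·P_c) = (K^T·n_uv)·P_c, the residual is zero up to floating-point error; the same identity underlies the subtraction argument of steps I–K.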

Continuing, please refer to "Figure 2" to "Figure 3", together with "Figure 1". After the computing device 30 has computed the normal vector n of the plane equation of the physical object in the camera coordinate system, the subsequent computation of the d value may proceed as follows, although it is not limited to this:

M. First, let a constant c = c_x·a' + c_y·b' + c', i.e. the third component of K^T·n_uv.

N. Substituting the pixel point P_i^uv = (u_i·Z_i, v_i·Z_i, Z_i) of the image coordinate system into the plane equation of step G gives

a'·u_i·Z_i + b'·v_i·Z_i + c'·Z_i + d' = 0.

O. Substituting the x and y values of the depth image coordinate of the i-th point from step D into the plane equation of step N gives

a'·(f_x·X_i + c_x·Z_i) + b'·(f_y·Y_i + c_y·Z_i) + c'·Z_i + d' = 0
f_x·a'·X_i + f_y·b'·Y_i + (c_x·a' + c_y·b' + c')·Z_i + d' = 0.

P. Dividing both sides of the plane equation of step O by c:

(f_x·a'/c)·X_i + (f_y·b'/c)·Y_i + Z_i + d'/c = 0.

Q. Hence, the d value of the plane equation of the physical object in the camera coordinates (normalised so that its Z coefficient is 1) is d = d'/c.
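Steps M–Q can likewise be sketched in code. The intrinsics and the sample plane below are assumptions for illustration, and averaging d' over the N ground pixels of step A is one possible aggregation (the patent text does not fix it); the function recovers the d value from ground pixels lying exactly on a synthetic plane:

```python
import numpy as np

# Assumed depth-camera intrinsics (illustrative values only).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def plane_d_from_ground_pixels(n_uv, pixels, K):
    """Steps M-Q: estimate the d value of the camera-frame plane
    equation (normalised so its Z coefficient is 1) from N ground
    samples.  pixels is an (N, 3) array of (u, v, Z) values."""
    a, b, cp = n_uv
    u, v, Z = pixels.T
    # Step N: a'*u*Z + b'*v*Z + c'*Z + d' = 0  =>  d' per sample,
    # averaged over the N ground pixels to suppress depth noise.
    d_prime = -(a * u * Z + b * v * Z + cp * Z).mean()
    # Step M: the constant c is the third component of K^T n_uv.
    c = (K.T @ n_uv)[2]
    # Step Q: d = d' / c.
    return d_prime / c

# Synthetic ground patch lying exactly on the plane
# a'*u*Z + b'*v*Z + c'*Z + d' = 0 with d' = -1.
n_uv = np.array([0.001, 0.002, 0.5])
u, v = np.meshgrid(np.arange(100.0, 200.0, 10.0), np.arange(100.0, 200.0, 10.0))
u, v = u.ravel(), v.ravel()
Z = 1.0 / (n_uv[0] * u + n_uv[1] * v + n_uv[2])
pixels = np.stack([u, v, Z], axis=1)
d = plane_d_from_ground_pixels(n_uv, pixels, K)
```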

In addition, please continue to refer to "Figure 2" to "Figure 3", together with "Figure 1". In a preferred embodiment of the present invention, before the aforementioned step S30 is executed, a step of obtaining three-dimensional coordinates (step S25) may first be executed: the computing device 30 performs an inner product operation on the depth image coordinates and the depth values of the physical object to continuously generate three-dimensional coordinates of the physical object in an image coordinate system. Accordingly, when step S40 or step S50 is executed, the aforementioned normal vector and distance constant are calculated from these three-dimensional coordinates, the internal parameter matrix and the acceleration information, and the plane equation of the physical object is then obtained. More specifically, the three-dimensional coordinate is generated as

P^uv = (u·Z, v·Z, Z),

where P^uv is the three-dimensional coordinate in the image coordinate system, Z is the depth value, and (u, v) is the depth image coordinate (in the image coordinate system). Compared with the conventional point cloud library (PCL), in which every pixel obtained by the depth camera 20 must be matrix-multiplied with the inverse of the camera projection matrix (i.e. the inverse of the aforementioned K) and a depth value to be converted into the three-dimensional coordinates of a point cloud coordinate system, this embodiment omits that matrix operation among pixels, depth values and the inverse projection matrix and detects the physical object (plane) directly with the aforementioned three-dimensional coordinates, thereby achieving the beneficial effect of saving computation while also eliminating the conversion time from the depth image to a point cloud.
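A minimal sketch of step S25, assuming a dense depth array indexed by pixel row and column; it produces the (u·Z, v·Z, Z) coordinates directly, without the per-pixel multiplication by the inverse projection matrix used in the usual point-cloud conversion:

```python
import numpy as np

def image_coords_3d(depth):
    """Step S25 sketch: the 'inner product' of depth image coordinates
    and depth values -- each pixel (u, v) with depth Z becomes the
    image-coordinate-system point (u*Z, v*Z, Z)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    return np.stack([u * depth, v * depth, depth], axis=-1)

# A camera-frame point cloud, if ever needed, is then just one batched
# multiplication by K^-1 -- but plane detection can skip it entirely.
depth = np.full((2, 3), 2.0)
pts = image_coords_3d(depth)
```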

Please refer to "Figure 4", which is a system architecture diagram of another preferred embodiment of the present invention. This embodiment is similar to the technology disclosed in "Figure 1" to "Figure 3"; the main difference is that the plane dynamic detection system 1 of this embodiment may further include a color camera 40 (for example, an RGB camera), coupled to the depth camera 20 and the computing device 30, respectively, for continuously capturing a color image of the physical object. When the computing device 30 executes step S10 (the image capturing step), it can thereby establish the correspondence between the depth image coordinates and the color image coordinates of the physical object, improving the accuracy of plane detection. In addition, the color camera 40 of this embodiment may form an RGB-D camera together with the depth camera 20, as shown in this figure, and the depth camera 20 of this embodiment may be a binocular camera, but neither is limited thereto.
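The depth-to-color correspondence of step S10 can be sketched as follows; the extrinsics (R, t) between the two sensors and both intrinsic matrices are assumptions for illustration, not values from the patent:

```python
import numpy as np

# Assumed intrinsics; for simplicity both cameras share the same K.
K_depth = np.array([[525.0,   0.0, 319.5],
                    [  0.0, 525.0, 239.5],
                    [  0.0,   0.0,   1.0]])
K_color = K_depth.copy()

def depth_to_color_pixel(u, v, Z, R=np.eye(3), t=np.zeros(3)):
    """Map a depth pixel (u, v) with depth Z to the corresponding
    color-image coordinate, given assumed depth->color extrinsics (R, t)."""
    P_depth = np.linalg.inv(K_depth) @ np.array([u * Z, v * Z, Z])
    P_color = R @ P_depth + t        # point in the color-camera frame
    p = K_color @ P_color            # perspective projection
    return p[:2] / p[2]

# With identity extrinsics the mapping is the identity, as expected; a
# real RGB-D rig would use its calibrated baseline, e.g. t = (0.025, 0, 0).
uc, vc = depth_to_color_pixel(160.0, 120.0, 1.5)
```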

In summary, once implemented, the present invention solves the problem of conventional plane detection in three-dimensional space, in which strong assumptions must be made for different plane types and planes may therefore be misjudged, and it also remedies the poor computational efficiency of conventional plane detection methods, thereby achieving the beneficial effects of more accurate plane detection with less computing resources.

The above are only preferred embodiments of the present invention and are not intended to limit the scope of implementation of the present invention; any equivalent changes and modifications made by those skilled in the art without departing from the spirit and scope of the present invention shall be covered by the patent scope of the present invention.

To sum up, the present invention satisfies the patentability requirements of "industrial applicability", "novelty" and "inventive step"; the applicant hereby files an application for an invention patent in accordance with the provisions of the Patent Act.

1 Plane dynamic detection system
10 Inertial sensor
101 Accelerometer
102 Gyroscope
20 Depth camera
30 Computing device
301 Motion state determination unit
302 Plane detection unit
40 Color camera
S Plane dynamic detection method
S10 Image capturing step
S20 Inertial data detection step
S25 Three-dimensional coordinate obtaining step
S30 Motion state determination step
S40 First plane equation updating step
S50 Second plane equation updating step

Figure 1 is a system architecture diagram of the present invention.
Figure 2 is a flowchart (1) of the plane detection method of the present invention.
Figure 3 is a flowchart (2) of the plane detection method of the present invention.
Figure 4 is a system architecture diagram of another preferred embodiment of the present invention.

1 Plane dynamic detection system
10 Inertial sensor
101 Accelerometer
102 Gyroscope
20 Depth camera
30 Computing device
301 Motion state determination unit
302 Plane detection unit

Claims (5)

1. A plane dynamic detection system, comprising: an inertial sensor comprising an accelerometer and a gyroscope; a depth camera for continuously capturing a depth image, so as to continuously input a depth image coordinate and a depth value of one or more physical objects within a viewing range of the depth camera; and a computing device coupled to the inertial sensor and the depth camera, respectively, the computing device having a motion state determination unit and a plane detection unit communicatively connected with each other, the motion state determination unit continuously determining whether acceleration information and angular velocity information obtained by the inertial sensor exceed a threshold; if the threshold is not exceeded, the plane detection unit is configured to calculate a normal vector and a distance constant from the acceleration information, the depth image coordinate, the depth value and an internal parameter matrix, and to use the normal vector and the distance constant to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a steady state; and if the threshold is exceeded, the plane detection unit is configured to execute a visual-inertial odometry algorithm based on a gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and to continuously correct the plane equation while the inertial sensor moves rapidly, based on a rotation matrix and displacement information of the pose information.

2. A plane dynamic detection method, comprising: an image capturing step: a depth camera continuously captures a depth image, so as to continuously input a depth image coordinate and a depth value of one or more physical objects within a viewing range of the depth camera; an inertial data detection step: an inertial sensor continuously obtains acceleration information and angular velocity information; a motion state determination step: a computing device continuously determines whether the acceleration information and the angular velocity information obtained by the inertial sensor exceed a threshold, so as to determine the motion state of the inertial sensor; a first plane equation updating step: if the threshold is not exceeded, the computing device calculates a normal vector and a distance constant from the acceleration information, the depth image coordinate, the depth value and an internal parameter matrix, and uses the normal vector and the distance constant to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a steady state; and a second plane equation updating step: if the threshold is exceeded, the computing device executes a visual-inertial odometry algorithm based on a gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and continuously corrects the plane equation while the inertial sensor moves rapidly, based on a rotation matrix and displacement information of the pose information.

3. The plane dynamic detection method of claim 2, further comprising a three-dimensional coordinate obtaining step executed before the motion state determination step: the computing device performs an inner product operation on the depth image coordinate and the depth value of the physical object to continuously generate a three-dimensional coordinate of the physical object in an image coordinate system, so that when the first plane equation updating step or the second plane equation updating step is executed, the normal vector and the distance constant are calculated from the three-dimensional coordinate, the internal parameter matrix and the acceleration information.

4. The plane dynamic detection method of claim 2 or claim 3, wherein, when the first plane equation updating step is executed, the computing device further applies an iterative optimization algorithm or a Gauss-Newton algorithm to the normal vector to obtain an optimal normal vector, and calculates the plane equation with the optimal normal vector in place of the normal vector.

5. The plane dynamic detection method of claim 2 or claim 3, wherein the image capturing step further comprises a color camera continuously capturing a color image of the physical object, so as to continuously input a color image coordinate of the color camera for the physical object.
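The two update paths recited in claims 1 and 2 can be sketched as the following dispatch; the threshold values, the gravity constant, and the use of the gravity direction as the steady-state ground normal are assumptions for illustration:

```python
import numpy as np

G = 9.81              # assumed gravity magnitude (m/s^2)
ACC_THRESHOLD = 0.3   # assumed acceleration threshold (m/s^2)
GYRO_THRESHOLD = 0.2  # assumed angular-velocity threshold (rad/s)

def update_plane(n, d, accel, gyro, pose=None):
    """Claims 1-2 dispatch: steady state -> re-derive the normal from
    the accelerometer (gravity) and keep d; fast motion -> propagate
    the previous plane with the VIO pose (R, t)."""
    moving = (abs(np.linalg.norm(accel) - G) > ACC_THRESHOLD
              or np.linalg.norm(gyro) > GYRO_THRESHOLD)
    if not moving:
        # First update path: gravity direction as the plane normal
        # (d would be re-estimated from the depth image as in steps M-Q).
        n_new = np.asarray(accel) / np.linalg.norm(accel)
        return n_new, d
    # Second update path: a plane n.P + d = 0 observed from a camera
    # moved by P' = R P + t becomes (R n).P' + (d - (R n).t) = 0.
    R, t = pose
    n_new = R @ n
    return n_new, d - n_new @ t

n0, d0 = np.array([0.0, 1.0, 0.0]), -1.2
n1, d1 = update_plane(n0, d0, accel=[0.0, 9.81, 0.0], gyro=[0.0, 0.0, 0.0])
n2, d2 = update_plane(n0, d0, accel=[0.0, 9.81, 0.0], gyro=[0.0, 0.0, 1.0],
                      pose=(np.eye(3), np.zeros(3)))
```

With a quiet IMU the normal snaps to the measured gravity direction; with a fast rotation and an identity VIO pose the plane is (correctly) left unchanged.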
TW108139370A 2019-10-31 2019-10-31 Plane dynamic detection system and detection method TWI730482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108139370A TWI730482B (en) 2019-10-31 2019-10-31 Plane dynamic detection system and detection method


Publications (2)

Publication Number Publication Date
TW202119359A TW202119359A (en) 2021-05-16
TWI730482B true TWI730482B (en) 2021-06-11

Family

ID=77020967

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108139370A TWI730482B (en) 2019-10-31 2019-10-31 Plane dynamic detection system and detection method

Country Status (1)

Country Link
TW (1) TWI730482B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015138822A1 (en) * 2014-03-14 2015-09-17 Qualcomm Incorporated Sensor-based camera motion detection for unconstrained slam
TW201915445A (en) * 2017-10-13 2019-04-16 緯創資通股份有限公司 Locating method, locator, and locating system for head-mounted display
US20190187783A1 (en) * 2017-12-18 2019-06-20 Alt Llc Method and system for optical-inertial tracking of a moving object
CN110246177A (en) * 2019-06-25 2019-09-17 上海大学 Automatic wave measuring method based on vision


