TWI795306B - Localization failure detection system for autonomous mobile robots using deep learning based 6d pose regression - Google Patents
- Publication number: TWI795306B
- Application number: TW111121968A
- Authority: TW (Taiwan)
- Prior art keywords: positioning, information, failure detection, autonomous mobile, mobile robot
- Prior art date: 2022-06-14
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Manipulator (AREA)
Description
The present invention relates to an autonomous mobile robot, and more particularly to a localization failure detection system for an autonomous mobile robot based on deep-learning 6D pose regression.
Amid the pandemic, demand for automated meal and goods delivery has grown as a way to reduce person-to-person contact and gatherings, and introducing mobile robots is one solution. Mobile robots fall roughly into two types: the Automated Guided Vehicle (AGV) and the Autonomous Mobile Robot (AMR). The difference is that the former requires construction in the operating site, adding magnetic strips or tracks for the AGV to run on, whereas the latter only needs its sensors to build a map of the site in advance, after which it can navigate and avoid obstacles within that map. Since most delivery destinations are ordinary homes and outdoor areas where prior construction is impossible, the flexibility of the AMR has made it flourish in recent years.
In an autonomous mobile robot system, the localization system is especially important. A localization failure means that the localization system cannot find the AMR's position on the map, or that the error of the localization result exceeds the error the system defines as acceptable. When the localization system fails, the path planning system often stops working properly, leaving the AMR stuck in place or even driving it forward into obstacles.
Accordingly, the object of the present invention is to provide a localization failure detection system for an autonomous mobile robot based on deep-learning 6D pose regression, so as to solve the localization failure problem of autonomous mobile robots.
The technical means adopted by the present invention to solve the problems of the prior art is a localization failure detection system for an autonomous mobile robot based on deep-learning 6D pose regression, used to detect localization failure of a primary localization device of the autonomous mobile robot. The localization failure detection system comprises: a visual image input device, mounted on the autonomous mobile robot and configured to acquire current visual image information of a predetermined localization environment; a map coordinate regression device, connected to the visual image input device, which uses a trained map coordinate regression neural network to perform feature extraction on the current visual image information and, based on the resulting feature map information, performs pose estimation to obtain the robot's current 6D pose estimate. The trained map coordinate regression neural network is obtained by training it simultaneously with a visual odometry neural network while the current-image input modules of the two networks share weights. The trained network includes a feature extractor that performs the feature extraction and a pose regressor that performs the pose estimation; the feature extractor includes a geometric constraint module configured to additionally feed the feature map information produced at the previous moment back into the feature extractor while it performs the feature extraction for the current moment. The system further comprises a localization failure detection device, connected to the map coordinate regression device and the primary localization device, which obtains the 6D pose estimate from the map coordinate regression device and the 6D pose fix produced by the primary localization device when localizing the autonomous mobile robot. The localization failure detection device is configured to compute a deviation-variation function value from how the deviation between the 6D pose estimate and the 6D pose fix changes over time, and to judge that the primary localization device of the autonomous mobile robot has currently failed when that value exceeds a preset failure judgment threshold.
In one embodiment of the present invention, a localization failure detection system is provided wherein the visual image input device is a camera and the current visual image information is a single RGB image.
In one embodiment of the present invention, a localization failure detection system is provided wherein the feature extractor is a depthwise separable convolutional neural network.
In one embodiment of the present invention, a localization failure detection system is provided wherein the feature extractor includes a MobileNetV2 module.
In one embodiment of the present invention, a localization failure detection system is provided wherein the feature extractor includes a global average pooling layer.
In one embodiment of the present invention, a localization failure detection system is provided wherein the pose regressor includes a fully connected layer.
In one embodiment of the present invention, a localization failure detection system is provided wherein the localization failure detection device includes a long-term variation calculation unit and a short-term variation calculation unit. The long-term variation calculation unit is configured to compute a long-term variation function value of the deviation between the 6D pose estimate and the 6D pose fix over a long time window, and the short-term variation calculation unit is configured to compute a short-term variation function value of the same deviation over a short time window. The localization failure detection device is configured to judge that the primary localization device of the autonomous mobile robot has currently failed when the difference between the long-term and short-term variation function values exceeds the failure judgment threshold.
In one embodiment of the present invention, a localization failure detection system is provided wherein the localization failure detection device includes a localization result output unit connected to a control system of the autonomous mobile robot. When the primary localization device is judged to have currently failed, the localization result output unit is configured to substitute the 6D pose estimate for the 6D pose fix and output it to the control system as localization result information.
In one embodiment of the present invention, a localization failure detection system is provided wherein the localization failure detection device includes an initial localization provision unit connected to the primary localization device of the autonomous mobile robot. When the primary localization device is judged to have currently failed, the initial localization provision unit is configured to give the 6D pose estimate to the primary localization device as initial pose information, so that the primary localization device can subsequently use that initial pose to localize the autonomous mobile robot and obtain the 6D pose fix.
Through the technical means adopted by the present invention, the localization failure detection system can effectively detect localization failure of the primary localization device of an autonomous mobile robot. Furthermore, in preferred embodiments, when the robot's localization becomes abnormal, the system can output the 6D pose estimate obtained by pose estimation to the robot's control system as the localization result, maintaining correct operation of the autonomous mobile robot. It can also give the 6D pose estimate to the primary localization device as initial pose information so that the device can relocalize. In addition, in preferred embodiments, the system infers whether a localization failure is occurring by observing the long-term and short-term changes of the deviation, effectively reducing the chance of misjudgment.
Embodiments of the present invention are described below with reference to FIGS. 1 to 6. The description is one example of the invention and is not intended to limit its implementation.
As shown in FIG. 1, a localization failure detection system 100 for an autonomous mobile robot based on deep-learning 6D pose regression according to an embodiment of the present invention is used to detect localization failure of a primary localization device A of the autonomous mobile robot (not shown). The localization failure detection system 100 comprises a visual image input device 1, a map coordinate regression device 2, and a localization failure detection device 3.
As shown in FIG. 1, in the localization failure detection system 100 of the present invention, the visual image input device 1 is mounted on the autonomous mobile robot and is configured to acquire current visual image information I1 of a predetermined localization environment.
Specifically, since the visual image input device 1 is mounted on the autonomous mobile robot, the acquired current visual image information I1 is image information of the localization environment as seen from the robot's viewpoint; the robot itself therefore need not appear in I1. In this embodiment, the visual image input device 1 is a camera, in particular a monocular camera, and the current visual image information I1 is a single RGB image, i.e., a single RGB bitmap.
As shown in FIGS. 1 and 2, in the localization failure detection system 100 of this embodiment, the map coordinate regression device 2 is connected to the visual image input device 1. The map coordinate regression device 2 uses a trained map coordinate regression neural network 20 to perform feature extraction on the current visual image information I1 and, based on the resulting feature map information M1, performs pose estimation to obtain the robot's current 6D pose estimate I2.
In detail, the map coordinate regression approach of the map coordinate regression device 2 learns the relationship between the scene and world coordinates, so that the camera's 6D pose in world coordinates can be output from the single RGB image of the current visual image information I1 alone. The trained map coordinate regression neural network 20 used by the device consists of two main parts: a feature extractor 21 that performs the feature extraction, and a pose regressor 22 that performs the pose estimation. The feature extraction performed by the feature extractor 21 is mainly responsible for extracting features from the image (the current visual image information I1); for example, when the localization environment is the interior of a building, the extracted features may be wall corners, corridor vanishing lines, doors, windows, and so on. The distribution of these features in the image is used in the subsequent pose estimation stage to judge the pose. The pose estimation performed by the pose regressor 22 mainly converts the extracted features into values in world coordinates and finally regresses a 6D pose, yielding the 6D pose estimate I2.
The feature extractor 21 can generally be a convolutional neural network (CNN) or a recurrent neural network (RNN), and the pose regressor 22 is typically a fully connected neural network (FNN), although the invention is not limited to these. In this embodiment, the feature extractor 21 is a depthwise separable convolutional neural network and the pose regressor 22 is a network of several fully connected layers. The feature extractor 21 includes MobileNetV2 modules 211, 212 and 213, a global average pooling layer 214, and notably a geometric constraint module 23. MobileNetV2 is the second-generation mobile computer-vision network introduced by Google; it compresses the model through depthwise separable convolutions to reduce parameters and speed up computation, and additionally features linear bottlenecks between layers and shortcut connections between bottlenecks. The architecture and technical details of the MobileNetV2 modules 211, 212 and 213 are readily known to those of ordinary skill in the art and are not elaborated further here. The global average pooling layer 214 serves as the last layer of the feature extractor 21, preventing the feature maps produced by the MobileNetV2 modules 211, 212 and 213 from being so numerous that they cause problems in the subsequent pose estimation.
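The data flow described above (backbone features, global average pooling, then a fully connected pose regressor) can be sketched as follows. This is a minimal NumPy illustration, not the patented network: the `backbone` stand-in, layer sizes, and random weights are all assumptions made purely to show the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(image):
    # Stand-in for the MobileNetV2 feature-extraction stages:
    # maps an H x W x 3 image to a (C, h, w) feature map.
    C, h, w = 32, 7, 7
    return rng.standard_normal((C, h, w))

def global_average_pool(fmap):
    # Collapse each channel to its spatial mean, avoiding an
    # oversized feature vector for the pose regressor.
    return fmap.mean(axis=(1, 2))

def pose_regressor(feat, W, b):
    # A single fully connected layer regressing a 6D pose
    # (x, y, z, roll, pitch, yaw); a real regressor stacks several layers.
    return W @ feat + b

image = rng.standard_normal((224, 224, 3))
feat = global_average_pool(backbone(image))       # shape (32,)
W, b = rng.standard_normal((6, 32)), np.zeros(6)
pose = pose_regressor(feat, W, b)                 # shape (6,)
print(pose.shape)
```

The point of the global average pooling stage is visible in the shapes: however large the spatial extent of the final feature map, the regressor always receives one value per channel.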
The geometric constraint module (GCM) 23 is configured so that, while the feature extractor 21 performs the feature extraction for the current moment, the feature map information M0 produced by the feature extraction of the previous moment is additionally fed into the feature extractor 21. The reason for the geometric constraint module 23 is that the way the pose of a moving autonomous mobile robot can change is limited by its mechanism, which constrains the correspondence between image positions at the current and previous moments; if the map coordinate regression neural network 20 cannot learn this geometric constraint effectively during training, the resulting 6D pose estimate I2 will lack accuracy. Using two images could also capture the robot's pose change, but would run into the problems of being unable to estimate the scene's scale correctly and of requiring an initial position before localization is possible. The present invention therefore uses the geometric constraint module 23 to connect the previous feature map information M0 with the current feature map information M1 through a pointwise convolution, followed by a per-pixel nonlinear activation function that filters out unneeded information. As shown in FIG. 2, the feature map information M1 produced by the MobileNetV2 modules 211 and 212, together with the feature map information M0 produced by the same modules at the previous moment, enters a convolution layer 231 with a 1×1 kernel, then passes through a rectified linear unit (ReLU) 232 before entering the next feature extraction stage, the MobileNetV2 module 213.
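The GCM operation just described — channel-concatenate M0 and M1, mix with a 1×1 (pointwise) convolution, then apply ReLU per pixel — can be sketched as below. The channel counts and random kernel weights are assumptions for illustration only.

```python
import numpy as np

def pointwise_conv(x, W):
    # A 1x1 convolution is a per-pixel linear map over channels:
    # x: (C_in, H, W), W: (C_out, C_in) -> (C_out, H, W).
    return np.einsum('oc,chw->ohw', W, x)

def geometric_constraint_module(fmap_prev, fmap_curr, W):
    # Concatenate the previous-moment and current feature maps along
    # the channel axis, mix them with the pointwise convolution, then
    # apply ReLU per pixel to filter out unneeded information.
    stacked = np.concatenate([fmap_prev, fmap_curr], axis=0)
    return np.maximum(pointwise_conv(stacked, W), 0.0)

rng = np.random.default_rng(1)
prev = rng.standard_normal((16, 14, 14))   # M0: previous feature map
curr = rng.standard_normal((16, 14, 14))   # M1: current feature map
W = rng.standard_normal((16, 32))          # hypothetical 1x1 kernel weights
out = geometric_constraint_module(prev, curr, W)
print(out.shape)   # fed on to the next MobileNetV2 stage
```

Because the kernel is 1×1, the module fuses temporal information channel-wise without changing the spatial resolution, which is why its output can be passed straight to the next MobileNetV2 stage.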
As shown in FIG. 3, in the map coordinate regression device 2, the trained map coordinate regression neural network 20 is obtained by training the map coordinate regression neural network 20 and a visual odometry neural network VO simultaneously, with the current-image input modules of the two networks sharing weights. The visual odometry neural network VO is a neural network based on visual odometry, a method of localizing a robot through image analysis; its architecture and technical details are readily known to those of ordinary skill in the art and are not elaborated further here. The present invention trains the map coordinate regression neural network 20 by auxiliary learning: training the visual odometry neural network VO and the map coordinate regression neural network 20 together lets the feature extractors of the two learn each other's tendencies in extracting features. The trained map coordinate regression neural network 20 can thereby learn from the VO network the geometric constraints between successive images, increasing prediction accuracy. After auxiliary learning, the trained network 20 is separated from the VO network at inference time (as shown in FIG. 2), so better results can be trained without increasing the parameter count or changing the model architecture. In the embodiment of FIG. 3, the visual odometry neural network VO has a Siamese-like architecture that extracts the current image information T1 and the previous image information T0 separately. The current-image input module of the map coordinate regression neural network 20 (i.e., the MobileNetV2 module 211) and the current-image input module of the VO network (i.e., the module that takes the current image information T1) share weights (shown connected by a dashed line in FIG. 3). Hence, whenever the parameters are updated, the shared weights undergo gradient descent that accounts for the errors of both the visual odometry part and the map coordinate regression part, and so learn to extract the features both sides need. It should be noted that the invention is not limited to this; the map coordinate regression neural network 20 and the visual odometry neural network VO may have architectures other than that of the embodiment in FIG. 3.
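The key mechanism here — one shared parameter set receiving gradients from both the map-regression loss and the visual-odometry loss — can be shown with a deliberately tiny linear model. Everything below (dimensions, targets, learning rate) is a toy assumption; the sketch only demonstrates that a single weight matrix, read by both branches, is updated by the sum of both gradients.

```python
import numpy as np

rng = np.random.default_rng(2)

# One matrix plays the role of the shared current-image input module;
# both branches read it, so both losses update it.
W = rng.standard_normal((4, 8)) * 0.1
img_t, img_prev = rng.standard_normal(8), rng.standard_normal(8)
pose_target = rng.standard_normal(4)    # map-coordinate regression target
delta_target = rng.standard_normal(4)   # visual-odometry target (pose change)

def joint_loss(W):
    map_err = W @ img_t - pose_target                    # map branch
    vo_err = (W @ img_t - W @ img_prev) - delta_target   # VO branch
    return np.sum(map_err ** 2) + np.sum(vo_err ** 2)

def joint_grad(W):
    map_err = W @ img_t - pose_target
    vo_err = (W @ img_t - W @ img_prev) - delta_target
    # Both branch errors contribute to the gradient of the shared weights.
    return 2 * np.outer(map_err, img_t) + 2 * np.outer(vo_err, img_t - img_prev)

before = joint_loss(W)
for _ in range(200):
    W -= 0.005 * joint_grad(W)   # joint gradient descent on shared weights
print(joint_loss(W) < before)
```

After training, only the map branch would be kept for inference, mirroring how the patent separates the trained network 20 from the VO network.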
As shown in FIGS. 1 to 6, the localization failure detection device 3 is connected to the map coordinate regression device 2 and the primary localization device A, so as to obtain the 6D pose estimate I2 produced by the map coordinate regression device 2 and the 6D pose fix IA produced by the primary localization device A when localizing the autonomous mobile robot. The localization failure detection device 3 is configured to compute a deviation-variation function value FB from how the deviation B between the 6D pose estimate I2 and the 6D pose fix IA changes over time, and to judge that the primary localization device A of the autonomous mobile robot has currently failed when FB exceeds a preset failure judgment threshold Th. In this embodiment, the primary localization device A is a map-based localization device that obtains the 6D pose fix IA from a LiDAR sensing the environment and a pre-built point-cloud map. Of course, the primary localization device A may also localize by other means, as long as the 6D pose fix IA it produces corresponds to the 6D pose estimate I2 from the map coordinate regression device 2, so that the localization failure detection device 3 can perform the computations and judgments described above.
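The deviation B between the 6D pose estimate I2 and the 6D pose fix IA can be reduced to a scalar in several ways. The following is one hypothetical choice, not specified by the patent: Euclidean distance on the translation part plus a weighted norm of the angle differences, with the Euler-angle representation and the weight `w_rot` both being assumptions.

```python
import numpy as np

def pose_deviation(pose_est, pose_fix, w_rot=1.0):
    # pose = (x, y, z, roll, pitch, yaw); w_rot is a hypothetical
    # weight trading off metres against radians.
    d_trans = np.linalg.norm(pose_est[:3] - pose_fix[:3])
    d_ang = pose_est[3:] - pose_fix[3:]
    d_ang = (d_ang + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return d_trans + w_rot * np.linalg.norm(d_ang)

i2 = np.array([1.0, 2.0, 0.0, 0.0, 0.0, 0.1])    # estimate from the network
ia = np.array([1.0, 2.5, 0.0, 0.0, 0.0, -0.1])   # fix from primary device A
print(pose_deviation(i2, ia))
```

The angle wrap matters: a yaw of +179° and −179° are 2° apart, not 358°, and a naive subtraction would inflate the deviation.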
Preferably, as shown in FIG. 4, in this embodiment the localization failure detection device 3 includes a long-term variation calculation unit 31 and a short-term variation calculation unit 32. The long-term variation calculation unit 31 is configured to compute a long-term variation function value F1 of the deviation B between the 6D pose estimate I2 and the 6D pose fix IA over a long time window, and the short-term variation calculation unit 32 is configured to compute a short-term variation function value F2 of the same deviation over a short time window. A judgment unit 33 of the localization failure detection device 3 is configured to judge that the primary localization device A of the autonomous mobile robot has currently failed when the difference between F1 and F2 (i.e., the deviation-variation function value FB) exceeds the failure judgment threshold Th.
If the deviation B between the 6D pose estimate I2 and the 6D pose fix IA were used directly — declaring failure whenever it exceeds a fixed threshold — every estimate produced by the map coordinate regression device 2 would have to be extremely accurate, or the false-positive rate would be very high. In this embodiment, the system therefore focuses on the long-term and short-term changes of the deviation: it accumulates the deviation and compares the long-term and short-term effects of that accumulation to judge whether a localization failure is occurring. Specifically, both the long-term variation function value F1 and the short-term variation function value F2 represent a cost computed in a prescribed way; in this embodiment, the current cost is obtained by computing the difference between the current deviation B and the previous cost and accumulating it. FIGS. 5 and 6 show how F1 and F2 vary with the deviation B. When B changes little, the cost oscillates mildly within a band, but as B grows, the cost rises with it. This prevents a single excessive deviation B from causing a misjudgment. When events such as robot kidnapping or an optimization failure of the primary localization device A occur, the deviation B rises and the cost gradually rises with it. The short-term variation function value F2 is a cost sensitive to short-term changes in B and reacts to them quickly, whereas the long-term variation function value F1 needs more time to accumulate cost, reducing the chance of misjudgment. When the difference between F1 and F2 (the deviation-variation function value FB) exceeds the failure judgment threshold Th, a localization failure is declared. As FIG. 6 shows, even though the deviation B suddenly increases at the fifth data point on the horizontal axis, FB does not increase excessively, avoiding a misjudgment caused by a single estimation error of the map coordinate regression device 2. When the localization anomaly begins at the eleventh data point, FB rises steadily, and the failure can be detected via the threshold Th.
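One plausible realization of the long-term/short-term cost accumulation described above is a pair of exponential accumulators with different rates. The patent does not fix particular constants, so the rates `alpha_long`, `alpha_short` and the threshold below are illustrative assumptions; the sketch only demonstrates the qualitative behavior claimed: a single outlier does not trigger failure, while a sustained rise does.

```python
class LocalizationFailureDetector:
    """Accumulates the pose deviation B into a slow long-term cost F1
    and a fast short-term cost F2; failure is declared when the gap
    FB = F2 - F1 exceeds the threshold Th (all constants assumed)."""

    def __init__(self, alpha_long=0.02, alpha_short=0.3, threshold=2.0):
        self.alpha_long = alpha_long
        self.alpha_short = alpha_short
        self.threshold = threshold
        self.f1 = 0.0   # long-term variation function value F1
        self.f2 = 0.0   # short-term variation function value F2

    def update(self, deviation):
        # Each step accumulates the difference between the current
        # deviation and the previous cost, so a single outlier moves
        # the costs only a little while a sustained rise drives them up.
        self.f1 += self.alpha_long * (deviation - self.f1)
        self.f2 += self.alpha_short * (deviation - self.f2)
        return (self.f2 - self.f1) > self.threshold

det = LocalizationFailureDetector()
normal = [det.update(0.1) for _ in range(20)]      # healthy localization
spike = det.update(5.0)                            # one bad estimate
sustained = [det.update(5.0) for _ in range(15)]   # e.g. robot kidnapping
print(any(normal), spike, any(sustained))
```

Because F2 tracks the deviation quickly while F1 lags, the gap F2 − F1 stays small under noise and one-off estimation errors but opens up under a persistent anomaly, matching the behavior shown in FIGS. 5 and 6.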
Preferably, as shown in FIG. 4, in this embodiment the localization failure detection device 3 includes a localization result output unit 34 connected to a control system C of the autonomous mobile robot. When the primary localization device A is judged to have currently failed, the localization result output unit 34 is configured to substitute the 6D pose estimate I2 for the 6D pose fix IA and output it to the control system C as localization result information I3, so as to keep the autonomous mobile robot operating correctly.
Preferably, as shown in FIG. 4, in this embodiment the localization failure detection device 3 includes an initial localization provision unit 35 connected to the primary localization device A of the autonomous mobile robot. When the primary localization device A is judged to have currently failed, the initial localization provision unit 35 is configured to give the 6D pose estimate I2 to the primary localization device A as initial pose information I0, so that the primary localization device A can subsequently use the initial pose information I0 to localize the autonomous mobile robot and obtain the 6D pose fix IA.
The foregoing is merely a description of preferred embodiments of the present invention. Those of ordinary skill in the art may make other modifications in accordance with the scope of the claims defined below and the description above, yet such modifications remain within the inventive spirit of the present invention and the scope of its rights.
Reference numerals: 100: localization failure detection system; 1: visual image input device; 2: map coordinate regression device; 20: map coordinate regression neural network; 21: feature extractor; 211, 212, 213: MobileNetV2 modules; 214: global average pooling layer; 22: pose regressor; 23: geometric constraint module; 231: convolution layer; 232: rectified linear unit (ReLU); 3: localization failure detection device; 31: long-term variation calculation unit; 32: short-term variation calculation unit; 33: judgment unit; 34: localization result output unit; 35: initial localization provision unit; A: primary localization device; B: deviation; C: control system; F1: long-term variation function value; F2: short-term variation function value; FB: deviation-variation function value; I0: initial pose information; I1: current visual image information; I2: 6D pose estimate; I3: localization result information; IA: 6D pose fix; M0: feature map information (previous moment); M1: feature map information (current moment); VO: visual odometry neural network
[FIG. 1] is a block diagram of a localization failure detection system for an autonomous mobile robot based on deep-learning 6D pose regression according to an embodiment of the present invention. [FIG. 2] is a schematic diagram of the trained map coordinate regression neural network of the map coordinate regression device in the localization failure detection system according to the embodiment. [FIG. 3] is a schematic diagram of the map coordinate regression neural network and the visual odometry neural network according to the embodiment. [FIG. 4] is a block diagram of the localization failure detection device in the localization failure detection system according to the embodiment. [FIG. 5] is a schematic diagram of the deviation between the 6D pose estimate and the 6D pose fix obtained by the localization failure detection device, together with the long-term and short-term variation function values. [FIG. 6] is a schematic diagram of that deviation, the difference between the long-term and short-term variation function values, and the failure judgment threshold.
Claims (9)
Priority Applications (1)
- TW111121968A (TWI795306B) — priority/filing date 2022-06-14 — Localization failure detection system for autonomous mobile robots using deep learning based 6d pose regression
Publications (2)
- TWI795306B — published 2023-03-01
- TW202348371A — published 2023-12-16
Family ID: 86692370
Citations (4)
- KR20210003937A — priority 2018-05-23, published 2021-01-12 — Movidius Limited — Deep learning system
- CN112380923A — priority 2020-10-26, published 2021-02-19 — Intelligent autonomous visual navigation and target detection method based on multiple tasks
- CN112560571A — priority 2020-10-09, published 2021-03-26 — Intelligent autonomous visual navigation method based on convolutional neural network
- TWI764542B — priority 2021-01-27, published 2022-05-11 — Autonomous intelligent vehicle real-time visual localization and uncertainty estimation system