TWI795306B - Localization failure detection system for autonomous mobile robots using deep learning based 6d pose regression - Google Patents

Localization failure detection system for autonomous mobile robots using deep learning based 6d pose regression

Info

Publication number
TWI795306B
TWI795306B (application TW111121968A)
Authority
TW
Taiwan
Prior art keywords
positioning
information
failure detection
autonomous mobile
mobile robot
Prior art date
2022-06-14
Application number
TW111121968A
Other languages
Chinese (zh)
Other versions
TW202348371A (en)
Inventor
陳俊儒
Original Assignee
艾歐圖科技股份有限公司
Priority date
2022-06-14
Filing date
2022-06-14
Publication date
2023-03-01
Application filed by 艾歐圖科技股份有限公司
Priority to TW111121968A
Application granted
Publication of TWI795306B
Publication of TW202348371A

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

Disclosed is a localization failure detection system for autonomous mobile robots using deep-learning-based 6D pose regression, comprising a visual image input device, a map coordinate regression device, and a localization failure detection device, wherein the localization failure detection system detects a localization failure of a primary localization device of the autonomous mobile robot based on the deviation over time between a localization result of the map coordinate regression device and a localization result of the primary localization device.

Description

Localization Failure Detection System for Autonomous Mobile Robots Based on Deep-Learning 6D Pose Regression

The present invention relates to an autonomous mobile robot, and in particular to a localization failure detection system for autonomous mobile robots based on deep-learning 6D pose regression.

Amid the current epidemic, demand for automated food and goods delivery has grown as a way to reduce person-to-person contact and crowding, and deploying mobile robots is one of the solutions. Mobile robots fall roughly into two types: the automated guided vehicle (AGV) and the autonomous mobile robot (AMR). The difference between them is that the former requires construction in the operating site, such as installing magnetic strips or tracks for the AGV to follow, whereas the latter only needs its sensors to build a map of the site in advance, after which it can navigate and avoid obstacles within that map. Since most delivery destinations are ordinary homes or outdoor areas where prior construction is impossible, the flexibility of the AMR has made it develop rapidly in recent years.

In an autonomous mobile robot system, the localization system is especially important. A localization failure means that the localization system cannot find the AMR's position on the map, or that the localization error exceeds the error bound defined by the system. When the localization system fails, the path planning system often stops working properly, leaving the AMR stuck in place or even driving it forward into obstacles.

Therefore, the object of the present invention is to provide a localization failure detection system for autonomous mobile robots based on deep-learning 6D pose regression, so as to solve the localization failure problem of autonomous mobile robots.

To solve the problems of the prior art, the present invention provides a localization failure detection system for an autonomous mobile robot based on deep-learning 6D pose regression, used to detect a localization failure of a primary localization device of the autonomous mobile robot. The localization failure detection system comprises: a visual image input device, mounted on the autonomous mobile robot and configured to acquire current visual image information of the autonomous mobile robot for a predetermined localization environment; a map coordinate regression device, connected to the visual image input device, which uses a trained map coordinate regression neural network to perform feature extraction on the current visual image information and then performs pose estimation on the resulting feature map information to obtain current 6D pose estimation information for the autonomous mobile robot, wherein the trained map coordinate regression neural network is obtained by training the map coordinate regression neural network and a visual odometry neural network simultaneously while the current-visual-image input modules of the two networks share weights, the trained map coordinate regression neural network comprises a feature extractor for performing the feature extraction and a pose regressor for performing the pose estimation, and the feature extractor includes a geometric constrained module configured to additionally feed the feature map information obtained by the previous feature extraction into the feature extractor during the current feature extraction; and a localization failure detection device, connected to the map coordinate regression device and the primary localization device to obtain the 6D pose estimation information estimated by the map coordinate regression device and the 6D pose localization information obtained by the primary localization device when localizing the autonomous mobile robot, the localization failure detection device being configured to compute a deviation-variation function value from the change over time of the deviation between the 6D pose estimation information and the 6D pose localization information, and to judge that the primary localization device of the autonomous mobile robot currently suffers a localization failure when the deviation-variation function value exceeds a preset failure judgment threshold.

In one embodiment of the present invention, a localization failure detection system is provided wherein the visual image input device is a camera and the current visual image information is a single RGB image.

In one embodiment of the present invention, a localization failure detection system is provided wherein the feature extractor is a depthwise separable convolutional neural network.

In one embodiment of the present invention, a localization failure detection system is provided wherein the feature extractor includes a MobileNetV2 module.

In one embodiment of the present invention, a localization failure detection system is provided wherein the feature extractor includes a global average pooling layer.

In one embodiment of the present invention, a localization failure detection system is provided wherein the pose regressor includes a fully connected layer.

In one embodiment of the present invention, a localization failure detection system is provided wherein the localization failure detection device includes a long-term variation calculation unit and a short-term variation calculation unit. The long-term variation calculation unit is configured to calculate a long-term variation function value of the deviation between the 6D pose estimation information and the 6D pose localization information over a long time window, and the short-term variation calculation unit is configured to calculate a short-term variation function value of the same deviation over a short time window. The localization failure detection device is configured to judge that the primary localization device of the autonomous mobile robot currently suffers a localization failure when the difference between the long-term variation function value and the short-term variation function value exceeds the failure judgment threshold.

In one embodiment of the present invention, a localization failure detection system is provided wherein the localization failure detection device includes a localization result output unit connected to a control system of the autonomous mobile robot. The localization result output unit is configured so that, when the primary localization device is judged to be in localization failure, the 6D pose estimation information replaces the 6D pose localization information and is output to the control system as localization result information.

In one embodiment of the present invention, a localization failure detection system is provided wherein the localization failure detection device includes an initial localization providing unit connected to the primary localization device of the autonomous mobile robot. The initial localization providing unit is configured so that, when the primary localization device is judged to be in localization failure, the 6D pose estimation information is given to the primary localization device as initial pose localization information, from which the primary localization device subsequently localizes the autonomous mobile robot to obtain the 6D pose localization information.

Through the technical means adopted by the present invention, the localization failure detection system can effectively detect a localization failure of the primary localization device of the autonomous mobile robot. Furthermore, in preferred embodiments, when the robot's localization becomes abnormal, the localization failure detection system of the present invention can output the 6D pose estimation information obtained by pose estimation to the robot's control system as the localization result information, so that the autonomous mobile robot keeps operating correctly. The 6D pose estimation information can also be given to the primary localization device as initial pose localization information, allowing the primary localization device to re-localize. In addition, in preferred embodiments, the localization failure detection system of the present invention infers whether a localization failure is currently occurring by observing the long-term and short-term changes of the deviation, which effectively reduces the possibility of false positives.

Embodiments of the present invention are described below with reference to FIG. 1 to FIG. 6. The description is not intended to limit the ways the invention may be implemented; it presents one example of the invention.

As shown in FIG. 1, a localization failure detection system 100 for an autonomous mobile robot based on deep-learning 6D pose regression according to an embodiment of the present invention is used to detect a localization failure of a primary localization device A of the autonomous mobile robot (not shown). The localization failure detection system 100 comprises a visual image input device 1, a map coordinate regression device 2, and a localization failure detection device 3.

As shown in FIG. 1, in the localization failure detection system 100 of the present invention, the visual image input device 1 is mounted on the autonomous mobile robot and is configured to acquire current visual image information I1 of the autonomous mobile robot for a predetermined localization environment.

Specifically, since the visual image input device 1 is mounted on the autonomous mobile robot, the acquired current visual image information I1 is image information of the localization environment as seen from the robot's viewpoint; the robot itself therefore need not appear in the current visual image information I1. In this embodiment, the visual image input device 1 is a camera, in particular a monocular camera, and the current visual image information I1 is a single RGB image, i.e., a single RGB bitmap.

As shown in FIG. 1 and FIG. 2, in the localization failure detection system 100 of this embodiment, the map coordinate regression device 2 is connected to the visual image input device 1. The map coordinate regression device 2 uses a trained map coordinate regression neural network 20 to perform feature extraction on the current visual image information I1, and performs pose estimation on the resulting feature map information M1 to obtain current 6D pose estimation information I2 for the autonomous mobile robot.

In detail, the map coordinate regression approach of the map coordinate regression device 2 learns the relationship between the scene and world coordinates, so that the 6D pose of the camera in world coordinates can be output from nothing more than the current visual image information I1 of a single RGB image. The trained map coordinate regression neural network 20 used by the device mainly comprises two parts: a feature extractor 21 that performs the feature extraction, and a pose regressor 22 that performs the pose estimation. The feature extraction performed by the feature extractor 21 is mainly responsible for extracting features from the image (the current visual image information I1); for example, when the localization environment is the interior of a building, the extracted features may be wall corners, corridor vanishing lines, doors, windows, and so on, and the distribution of these features in the image can later be used to judge the pose in the subsequent pose estimation stage. The pose estimation performed by the pose regressor 22 mainly converts the features extracted by the feature extraction into values in world coordinates, and finally regresses the six-dimensional pose (6D pose), yielding the 6D pose estimation information I2.

The feature extractor 21 can generally be a convolutional neural network (CNN) or a recurrent neural network (RNN), and the pose regressor 22 can generally be a fully connected neural network (FNN), but the present invention is not limited to these. In this embodiment, the feature extractor 21 is a depthwise separable convolutional neural network, and the pose regressor 22 is a network composed of several fully connected layers. The feature extractor 21 includes MobileNetV2 modules 211, 212, 213 and a global average pooling layer 214, and in particular includes a geometric constrained module 23. MobileNetV2, used in modules 211, 212, 213, is the second-generation mobile computer vision network released by Google; it compresses the model by means of depthwise separable convolutions to reduce parameters and increase computation speed, and additionally features linear bottlenecks between layers and shortcut connections between bottlenecks. The architecture and technical details of the MobileNetV2 modules 211, 212, 213 can be readily obtained by a person of ordinary skill in the art, so they are not elaborated further here. The global average pooling layer 214 serves as the last layer of the feature extractor 21, so that the feature maps produced by the MobileNetV2 modules 211, 212, 213 do not cause problems in the subsequent pose estimation because of their excessive size.
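
To make the architecture concrete, the following is a minimal PyTorch sketch of such a network, assuming torchvision's stock MobileNetV2 feature stack as a stand-in for modules 211, 212, 213 and a plain 6-vector pose output (3D translation plus 3D rotation); it illustrates the described structure under those assumptions rather than the patented implementation.

```python
# Illustrative sketch only, under the assumptions stated above.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class MapCoordinateRegressionNet(nn.Module):
    def __init__(self, pose_dim: int = 6):
        super().__init__()
        # Depthwise-separable convolutional backbone (stand-in for modules 211-213).
        self.extractor = mobilenet_v2(weights=None).features
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling layer 214
        # Pose regressor 22: a stack of fully connected layers.
        self.regressor = nn.Sequential(
            nn.Linear(1280, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, pose_dim),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        feat = self.extractor(rgb)        # feature map, shape (B, 1280, H/32, W/32)
        feat = self.gap(feat).flatten(1)  # pooled descriptor, shape (B, 1280)
        return self.regressor(feat)       # 6D pose estimate, shape (B, 6)
```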

The geometric constrained module (GCM) 23 is configured so that, while the feature extractor 21 performs the current feature extraction, the feature map information M0 obtained by the previous feature extraction is additionally fed into the feature extractor 21. The reason for providing the geometric constrained module 23 is that the way the pose of a moving autonomous mobile robot can change is limited by its mechanism, which constrains the correspondence between image positions at the current and previous moments; if the map coordinate regression neural network 20 cannot effectively learn this geometric constraint during training, the resulting 6D pose estimation information I2 will suffer from insufficient accuracy. Although the robot's pose change could also be learned from a pair of images, that approach runs into the problems of being unable to correctly estimate the scale of the scene and of requiring an initial position before localization can proceed. Therefore, in the present invention the geometric constrained module 23 concatenates the previous feature map information M0 with the current feature map information M1, applies a point-wise convolution, and then applies a nonlinear activation function to each pixel to filter out unneeded information. As shown in FIG. 2, the feature map information M1 produced by the MobileNetV2 modules 211, 212 and the feature map information M0 produced by the same modules at the previous moment enter a convolution layer 231 with a 1×1 kernel, pass through a rectified linear unit (ReLU) 232, and then proceed to feature extraction in the next-stage MobileNetV2 module 213.
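
A minimal sketch of that module follows: the previous feature map M0 is concatenated with the current one M1 along the channel axis, passed through a 1×1 point-wise convolution, then a per-pixel ReLU. The channel width used here is an assumption.

```python
# Illustrative sketch of a geometric constrained module; channel count assumed.
import torch
import torch.nn as nn

class GeometricConstrainedModule(nn.Module):
    def __init__(self, channels: int = 96):
        super().__init__()
        # 1x1 point-wise convolution fusing the two stacked feature maps.
        self.pointwise = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)  # nonlinear activation per pixel

    def forward(self, feat_prev: torch.Tensor, feat_now: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_prev, feat_now], dim=1)  # concatenate M0 and M1
        return self.relu(self.pointwise(x))          # filter out unneeded information
```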

As shown in FIG. 3, in the map coordinate regression device 2, the trained map coordinate regression neural network 20 is obtained by training the map coordinate regression neural network 20 and a visual odometry neural network VO simultaneously while the current-visual-image input modules of the two networks share weights. The visual odometry neural network VO is a neural network based on visual odometry, a way of localizing a robot through image analysis; its architecture and technical details can be readily obtained by a person of ordinary skill in the art and are therefore not elaborated further here. The present invention trains the map coordinate regression neural network 20 by auxiliary learning: the visual odometry neural network VO and the map coordinate regression neural network 20 are trained at the same time, so that the feature extractors of the two networks can learn each other's tendencies in extracting features. The trained map coordinate regression neural network 20 can thereby learn, via the visual odometry neural network VO, the geometric constraints between consecutive images, increasing prediction accuracy. After this auxiliary learning, the trained map coordinate regression neural network 20 is separated from the visual odometry neural network VO at deployment time (as shown in FIG. 2), so better results can be trained without increasing the parameter count or changing the model architecture. In the embodiment of FIG. 3, the visual odometry neural network VO has an architecture similar to a Siamese neural network, extracting the current image information T1 and the previous image information T0 respectively. The current-visual-image input module of the map coordinate regression neural network 20 (i.e., the MobileNetV2 module 211) and the current-visual-image input module of the visual odometry neural network VO (i.e., the module receiving the current image information T1) share weights (indicated by the dashed link in FIG. 3). In this way, whenever the parameters are updated, the shared weights undergo gradient descent that takes into account the errors of both the visual odometry part and the map coordinate regression part, so they learn to extract the features needed by both sides. It should be noted that the present invention is not limited to this; the map coordinate regression neural network 20 and the visual odometry neural network VO may have architectures other than the embodiment of FIG. 3.
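
One way to realize this weight sharing in code is simply to reuse a single input-module object, so that gradients from both the map coordinate regression loss and the visual odometry loss update the same parameters. The sketch below assumes this; the split point, the hypothetical `map_head` and `vo_head` modules standing in for the remaining layers of each branch, the reuse of the shared module for the previous frame in twin fashion, and the MSE losses are all assumptions.

```python
# Hedged training sketch: weight sharing via module reuse. Split point, heads,
# and losses are assumptions, not taken from the patent.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class JointTrainer(nn.Module):
    def __init__(self, map_head: nn.Module, vo_head: nn.Module):
        super().__init__()
        self.shared_input = mobilenet_v2(weights=None).features[:7]  # shared input module
        self.map_head = map_head  # rest of the map coordinate regression branch
        self.vo_head = vo_head    # rest of the visual odometry branch

    def forward(self, img_prev: torch.Tensor, img_now: torch.Tensor):
        f_now = self.shared_input(img_now)    # the same weights feed both branches
        f_prev = self.shared_input(img_prev)  # twin-style reuse on the previous frame
        pose_abs = self.map_head(f_now)          # absolute 6D pose (map regression)
        pose_rel = self.vo_head(f_prev, f_now)   # relative pose (visual odometry)
        return pose_abs, pose_rel

def train_step(model, optimizer, img_prev, img_now, gt_abs, gt_rel,
               loss_fn=nn.MSELoss()):
    # Both losses back-propagate into shared_input, so its weights descend
    # along the combined gradient of the two tasks, as described above.
    pose_abs, pose_rel = model(img_prev, img_now)
    loss = loss_fn(pose_abs, gt_abs) + loss_fn(pose_rel, gt_rel)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```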

As shown in FIG. 1 to FIG. 6, the localization failure detection device 3 is connected to the map coordinate regression device 2 and the primary localization device A to obtain the 6D pose estimation information I2 estimated by the map coordinate regression device 2 and the 6D pose localization information I_A obtained by the primary localization device A when localizing the autonomous mobile robot. The localization failure detection device 3 is configured to compute a deviation-variation function value F_B from the change over time of the deviation B between the 6D pose estimation information I2 and the 6D pose localization information I_A, and to judge that the primary localization device A of the autonomous mobile robot currently suffers a localization failure when the deviation-variation function value F_B exceeds a preset failure judgment threshold Th. In this embodiment, the primary localization device A is a map-based localization device that obtains the 6D pose localization information I_A by sensing the environment with a lidar and matching against a pre-built point-cloud map. Of course, the primary localization device A can also localize in other ways, as long as the 6D pose localization information I_A it produces corresponds to the 6D pose estimation information I2 estimated by the map coordinate regression device 2, so that the localization failure detection device 3 can perform the calculations and judgments about localization failure described above.
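
The deviation B itself is not given a formula in the text. One plausible realization, shown below purely as an assumption, combines the Euclidean translation error with a weighted rotation error between the two 6D poses.

```python
# Assumed deviation metric between the estimate I2 and the primary pose I_A.
import numpy as np

def pose_deviation(pose_est: np.ndarray, pose_loc: np.ndarray,
                   rot_weight: float = 0.5) -> float:
    """Both poses are 6-vectors (x, y, z, roll, pitch, yaw); the weight is assumed."""
    t_err = np.linalg.norm(pose_est[:3] - pose_loc[:3])
    # Wrap angle differences into [-pi, pi] before taking the norm.
    d_ang = (pose_est[3:] - pose_loc[3:] + np.pi) % (2 * np.pi) - np.pi
    return float(t_err + rot_weight * np.linalg.norm(d_ang))
```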

Preferably, as shown in FIG. 4, in this embodiment the localization failure detection device 3 includes a long-term variation calculation unit 31 and a short-term variation calculation unit 32. The long-term variation calculation unit 31 is configured to calculate a long-term variation function value F1 of the deviation B between the 6D pose estimation information I2 and the 6D pose localization information I_A over a long time window, and the short-term variation calculation unit 32 is configured to calculate a short-term variation function value F2 of the same deviation B over a short time window. A judgment unit 33 of the localization failure detection device 3 is configured to judge that the primary localization device A of the autonomous mobile robot currently suffers a localization failure when the difference between the long-term variation function value F1 and the short-term variation function value F2 (i.e., the deviation-variation function value F_B) exceeds the failure judgment threshold Th.
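
In symbols, the judgment unit's rule can be stated as follows; writing the difference as an absolute value is an assumption, since the text speaks only of the difference between the two function values.

$$
F_B(t) = \bigl|\, F_1(t) - F_2(t) \,\bigr|, \qquad
\text{localization failure at } t \iff F_B(t) > Th .
$$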

If the deviation B between the 6D pose estimation information I2 and the 6D pose localization information I_A were used directly, with a failure declared whenever it exceeds a specific threshold, then every estimate produced by the map coordinate regression device 2 (the 6D pose estimation information I2) would have to be extremely accurate, or the false positive rate would be very high. Therefore, this embodiment focuses on the long-term and short-term changes of the deviation: by accumulating the deviation and comparing the long-term and short-term effects of that accumulation, it judges whether a localization failure is currently occurring. Specifically, both the long-term variation function value F1 and the short-term variation function value F2 represent a cost computed in a prescribed way; in this embodiment, the current cost is obtained by taking the difference between the current deviation B and the previous cost and accumulating it. FIG. 5 and FIG. 6 show how the long-term variation function value F1 and the short-term variation function value F2 change with the deviation B. As shown, when the deviation B changes little, the cost oscillates mildly within a band, but as the deviation B grows larger, the cost rises with it. This prevents a single excessive deviation B from causing a false positive. When an event such as robot kidnapping or an optimization failure of the primary localization device A occurs, the deviation B increases and the cost rises progressively with it. The short-term variation function value F2 behaves as a cost sensitive to short-term changes of the deviation B and therefore reacts to changes more quickly; the long-term variation function value F1 takes longer than F2 to accumulate cost, reducing the chance of a false positive. When the difference between the long-term variation function value F1 and the short-term variation function value F2 (the deviation-variation function value F_B) exceeds the failure judgment threshold Th, a localization failure is declared. As can be seen in FIG. 6, even though the deviation B suddenly increases at the 5th data point on the horizontal axis, the deviation-variation function value F_B does not increase excessively, avoiding a false positive caused by an estimation error of the map coordinate regression device 2. When the localization anomaly occurs at the 11th data point on the horizontal axis, the deviation-variation function value F_B rises progressively, so the localization failure can be detected by means of the failure judgment threshold Th.
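
The description of taking the difference between the current deviation and the previous cost and accumulating it suggests an exponential-moving-average-style update. The sketch below assumes that form, with a fast rate for the short-term cost and a slow rate for the long-term cost; the rates and the threshold are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the long/short-term cost comparison described above.
class LocalizationFailureDetector:
    def __init__(self, alpha_short: float = 0.5, alpha_long: float = 0.05,
                 threshold: float = 1.0):
        self.f_short = 0.0          # short-term variation function value F2
        self.f_long = 0.0           # long-term variation function value F1
        self.a_short = alpha_short  # fast accumulation rate (assumed)
        self.a_long = alpha_long    # slow accumulation rate (assumed)
        self.threshold = threshold  # failure judgment threshold Th (assumed)

    def update(self, deviation: float) -> bool:
        # Each cost accumulates the difference between the current deviation B
        # and its own previous value; F2 reacts faster than F1.
        self.f_short += self.a_short * (deviation - self.f_short)
        self.f_long += self.a_long * (deviation - self.f_long)
        f_b = abs(self.f_short - self.f_long)  # deviation-variation value F_B
        return f_b > self.threshold            # True means localization failure
```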

Preferably, as shown in FIG. 4, in this embodiment the localization failure detection device 3 includes a localization result output unit 34 connected to a control system C of the autonomous mobile robot. The localization result output unit 34 is configured so that, when the primary localization device A is judged to be in localization failure, the 6D pose estimation information I2 replaces the 6D pose localization information I_A and is output to the control system C as localization result information I3, so as to keep the autonomous mobile robot operating correctly.

Preferably, as shown in FIG. 4, in this embodiment the localization failure detection device 3 includes an initial localization providing unit 35 connected to the primary localization device A of the autonomous mobile robot. The initial localization providing unit 35 is configured so that, when the primary localization device A is judged to be in localization failure, the 6D pose estimation information I2 is given to the primary localization device A as initial pose localization information I0, from which the primary localization device A subsequently localizes the autonomous mobile robot to obtain the 6D pose localization information I_A.
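
A sketch tying these two units together is shown below, reusing `pose_deviation` and `LocalizationFailureDetector` from the earlier sketches; the primary-device calls `localize` and `set_initial_pose` are hypothetical API names introduced only for illustration.

```python
# Hedged per-cycle recovery logic combining units 34 and 35.
def localization_step(detector, primary, pose_est):
    """pose_est: 6D pose estimate I2 from the map coordinate regression device."""
    pose_loc = primary.localize()  # 6D pose localization information I_A
    if detector.update(pose_deviation(pose_est, pose_loc)):
        primary.set_initial_pose(pose_est)  # unit 35: hand I2 over as initial pose I0
        return pose_est                     # unit 34: output I2 as the result I3
    return pose_loc                         # normal case: I3 = I_A
```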

The above description concerns only preferred embodiments of the present invention. A person of ordinary skill in the art may make other modifications in accordance with the scope of the claims defined below and the description above, but such modifications still fall within the spirit of the invention and the scope of the rights of the present invention.

100: localization failure detection system
1: visual image input device
2: map coordinate regression device
20: map coordinate regression neural network
21: feature extractor
211: MobileNetV2 module
212: MobileNetV2 module
213: MobileNetV2 module
214: global average pooling layer
22: pose regressor
23: geometric constrained module
231: convolution layer
232: rectified linear unit
3: localization failure detection device
31: long-term variation calculation unit
32: short-term variation calculation unit
33: judgment unit
34: localization result output unit
35: initial localization providing unit
A: primary localization device
B: deviation
C: control system
F1: long-term variation function value
F2: short-term variation function value
F_B: deviation-variation function value
I0: initial pose localization information
I1: current visual image information
I2: 6D pose estimation information
I3: localization result information
I_A: 6D pose localization information
M0: feature map information
M1: feature map information
VO: visual odometry neural network

[FIG. 1] is a block diagram showing a localization failure detection system for autonomous mobile robots based on deep-learning 6D pose regression according to an embodiment of the present invention.
[FIG. 2] is a schematic diagram showing the trained map coordinate regression neural network of the map coordinate regression device in the localization failure detection system according to the embodiment of the present invention.
[FIG. 3] is a schematic diagram showing the map coordinate regression neural network and the visual odometry neural network according to the embodiment of the present invention.
[FIG. 4] is a block diagram showing the localization failure detection device in the localization failure detection system according to the embodiment of the present invention.
[FIG. 5] is a schematic diagram showing the deviation between the 6D pose estimation information and the 6D pose localization information obtained by the localization failure detection device of the embodiment, together with the long-term variation function value and the short-term variation function value.
[FIG. 6] is a schematic diagram showing the deviation between the 6D pose estimation information and the 6D pose localization information obtained by the localization failure detection device of the embodiment, the difference between the long-term variation function value and the short-term variation function value, and the failure judgment threshold.


Claims (9)

1. A localization failure detection system for an autonomous mobile robot based on deep-learning 6D pose regression, used to detect a localization failure of a primary localization device of the autonomous mobile robot, the localization failure detection system comprising:
a visual image input device, mounted on the autonomous mobile robot and configured to acquire current visual image information of the autonomous mobile robot for a predetermined localization environment;
a map coordinate regression device, connected to the visual image input device, which uses a trained map coordinate regression neural network to perform feature extraction on the current visual image information and performs pose estimation on the feature map information obtained by the feature extraction to obtain current 6D pose estimation information for the autonomous mobile robot, wherein the trained map coordinate regression neural network is obtained by training the map coordinate regression neural network and a visual odometry neural network simultaneously while the current-visual-image input modules of the two networks share weights, the trained map coordinate regression neural network includes a feature extractor for performing the feature extraction and a pose regressor for performing the pose estimation, and the feature extractor includes a geometric constrained module configured to additionally feed the feature map information obtained by the previous feature extraction into the feature extractor during the current feature extraction; and
a localization failure detection device, connected to the map coordinate regression device and the primary localization device to obtain the 6D pose estimation information estimated by the map coordinate regression device and the 6D pose localization information obtained by the primary localization device when localizing the autonomous mobile robot, the localization failure detection device being configured to compute a deviation-variation function value from the change over time of the deviation between the 6D pose estimation information and the 6D pose localization information, and to judge that the primary localization device of the autonomous mobile robot currently suffers a localization failure when the deviation-variation function value exceeds a preset failure judgment threshold.
2. The localization failure detection system of claim 1, wherein the visual image input device is a camera and the current visual image information is a single RGB image.

3. The localization failure detection system of claim 1, wherein the feature extractor is a depthwise separable convolutional neural network.

4. The localization failure detection system of claim 3, wherein the feature extractor includes a MobileNetV2 module.

5. The localization failure detection system of claim 3, wherein the feature extractor includes a global average pooling layer.

6. The localization failure detection system of claim 1, wherein the pose regressor includes a fully connected layer.

7. The localization failure detection system of claim 1, wherein the localization failure detection device includes a long-term variation calculation unit and a short-term variation calculation unit, the long-term variation calculation unit being configured to calculate a long-term variation function value of the deviation between the 6D pose estimation information and the 6D pose localization information over a long time window, the short-term variation calculation unit being configured to calculate a short-term variation function value of the same deviation over a short time window, and the localization failure detection device being configured to judge that the primary localization device of the autonomous mobile robot currently suffers a localization failure when the difference between the long-term variation function value and the short-term variation function value exceeds the failure judgment threshold.

8. The localization failure detection system of claim 1 or 7, wherein the localization failure detection device includes a localization result output unit connected to a control system of the autonomous mobile robot, the localization result output unit being configured so that, when the primary localization device is judged to be in localization failure, the 6D pose estimation information replaces the 6D pose localization information and is output to the control system as localization result information.
9. The localization failure detection system of claim 1 or 7, wherein the localization failure detection device includes an initial localization providing unit connected to the primary localization device of the autonomous mobile robot, the initial localization providing unit being configured so that, when the primary localization device is judged to be in localization failure, the 6D pose estimation information is given to the primary localization device as initial pose localization information, from which the primary localization device subsequently localizes the autonomous mobile robot to obtain the 6D pose localization information.

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111121968A 2022-06-14 2022-06-14 Localization failure detection system for autonomous mobile robots using deep learning based 6d pose regression TWI795306B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW111121968A 2022-06-14 2022-06-14 Localization failure detection system for autonomous mobile robots using deep learning based 6d pose regression TWI795306B (en)

Publications (2)

Publication Number Publication Date
TWI795306B true TWI795306B (en) 2023-03-01
TW202348371A TW202348371A (en) 2023-12-16

Family

ID=86692370

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111121968A Localization failure detection system for autonomous mobile robots using deep learning based 6d pose regression TWI795306B (en) 2022-06-14 2022-06-14

Country Status (1)

Country Link
TW (1) TWI795306B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210003937A (en) * 2018-05-23 2021-01-12 모비디어스 리미티드 Deep learning system
CN112380923A (en) * 2020-10-26 2021-02-19 天津大学 Intelligent autonomous visual navigation and target detection method based on multiple tasks
CN112560571A (en) * 2020-10-09 2021-03-26 天津大学 Intelligent autonomous visual navigation method based on convolutional neural network
TWI764542B (en) * 2021-01-27 2022-05-11 國立臺灣大學 Autonomous intelligent vehicle real-time visual localization and uncertainty estimation system


Also Published As

Publication number Publication date
TW202348371A (en) 2023-12-16

Similar Documents

Publication Publication Date Title
CN110796063B (en) Method, device, equipment, storage medium and vehicle for detecting parking space
CN110546459B (en) Robot tracking navigation with data fusion
WO2021017212A1 (en) Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal
Christensen et al. Fault detection in autonomous robots based on fault injection and learning
WO2022170847A1 (en) Online calibration method based on laser and visual fusion
CN108981693B (en) VIO rapid joint initialization method based on monocular camera
Chambers et al. Robust multi-sensor fusion for micro aerial vehicle navigation in GPS-degraded/denied environments
US11636612B2 (en) Automated guided vehicle navigation device and method thereof
CN111337018A (en) Positioning method and device, intelligent robot and computer readable storage medium
Motlagh et al. Position Estimation for Drones based on Visual SLAM and IMU in GPS-denied Environment
US20200401152A1 (en) Self-location estimation method
CN111930094A (en) Unmanned aerial vehicle actuator fault diagnosis method based on extended Kalman filtering
CN109739232A (en) Barrier method for tracing, device, car-mounted terminal and storage medium
TWI795306B (en) Localization failure detection system for autonomous mobile robots using deep learning based 6d pose regression
KR20200063538A (en) Method for self-diagnosing localization status and autonomous mobile robot carrying out the same
Silva et al. Saliency-based cooperative landing of a multirotor aerial vehicle on an autonomous surface vehicle
Zamanakos et al. Energy-aware design of vision-based autonomous tracking and landing of a UAV
US10572802B1 (en) Learning state-dependent sensor measurement models for localization
Xu et al. Indoor multi-sensory self-supervised autonomous mobile robotic navigation
CN114282776A (en) Method, device, equipment and medium for cooperatively evaluating automatic driving safety of vehicle and road
Tian et al. Kidnapping detection and recognition in previous unknown environment
CN114037759A (en) Dynamic characteristic point filtering and repositioning method in indoor environment
Mahmoud et al. Dynamic Environments and Robust SLAM: Optimizing Sensor Fusion and Semantics for Wheeled Robots
Korodi et al. Correcting odometry errors for mobile robots using image processing
Kummer et al. Autonomous UAV landing via eye-in-hand visual servoing