TW202124990A - State estimation and sensor fusion methods for autonomous vehicles - Google Patents

State estimation and sensor fusion methods for autonomous vehicles

Info

Publication number
TW202124990A
TW202124990A TW108146328A
Authority
TW
Taiwan
Prior art keywords: state, mobile vehicle, transportation, item, patent application
Prior art date
Application number
TW108146328A
Other languages
Chinese (zh)
Other versions
TWI715358B (en)
Inventor
廖歆蘭
林昆賢
張立光
吳韋良
陳一元
Original Assignee
財團法人工業技術研究院
Priority date
Filing date
Publication date
Application filed by 財團法人工業技術研究院 (Industrial Technology Research Institute)
Priority to TW108146328A
Priority to CN202010086218.9A
Application granted
Publication of TWI715358B
Publication of TW202124990A

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0255Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
    • G05D1/0259Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An autonomous vehicle and state estimation and sensor fusion switching methods therefor are provided. The autonomous vehicle includes at least one sensor, at least one motor, and a processor, and is configured to transfer and transport an object. In the method, a task instruction for moving an object and the data required to execute the task instruction are received. The task instruction is divided into a plurality of work stages according to respective mapped locations, and each of the work stages is mapped to one of a transport state and an execution state, so as to establish a semantic hierarchy. A current location of the autonomous vehicle is detected by using the sensor and mapped to one of the work stages in the semantic hierarchy, so as to estimate a current state of the autonomous vehicle.

Description

Mobile vehicle and state estimation and sensor fusion switching method thereof

The present invention relates to a method for estimating the state of a device, and more particularly to a mobile vehicle and a state estimation and sensor fusion switching method thereof.

An automated guided vehicle (AGV) is a mobile robot that can transport goods within factories and warehouses using technologies such as floor-embedded guide wires, machine vision, or laser navigation. Because an AGV can automatically load, unload, and transport goods, it reduces the labor of loading and unloading, and loading/unloading locations and transport routes can be flexibly allocated to improve delivery efficiency and solve problems such as lane occupation.

AGVs rely on technologies such as positioning and object recognition to carry out cargo handling. In recent years, a variety of positioning technologies have emerged, such as Bluetooth, WiFi, Ultra-Wideband (UWB), visible light positioning systems, and Radio Frequency Identification (RFID). Depending on deployment cost, accuracy, and technical characteristics, each of these positioning technologies has the fields to which it is best suited. Because of this diversity, seamless indoor-outdoor positioning is difficult to achieve simply by switching between two systems.

An object of the present invention is to provide a mobile vehicle and a state estimation and sensor fusion switching method thereof, capable of seamless switching among multiple positioning systems.

The invention provides a state estimation and sensor fusion switching method for a mobile vehicle. The mobile vehicle includes at least one sensor, at least one actuator, and a processor, and is configured to transfer and transport objects. The method includes the following steps: receiving a task instruction for moving an object and the data required to execute the task instruction; dividing the task instruction into a plurality of work stages according to mapped locations, and mapping each work stage to one of a transport state and an execution state to establish a semantic hierarchy; estimating a current location of the mobile vehicle by using the sensor; and mapping the current location to one of the work stages in the semantic hierarchy to estimate a current state of the mobile vehicle.
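The staged method above can be sketched as a small data structure: each work stage from the task instruction maps to one of two states, and a sensed location maps back through the hierarchy to a stage and then a state. This is only an illustrative sketch, not the patented implementation; all stage labels and location names are hypothetical.

```python
from enum import Enum

class State(Enum):
    TRANSPORT = "transport"
    EXECUTION = "execution"

def build_semantic_hierarchy(stages):
    """stages: ordered (work_stage, State) pairs derived from a task instruction."""
    return dict(stages)

def estimate_state(hierarchy, location_to_stage, current_location):
    """Map a sensed location to a work stage, then to that stage's state."""
    stage = location_to_stage[current_location]
    return stage, hierarchy[stage]

# Hypothetical task: load in the warehouse, transport indoors then outdoors, unload.
hierarchy = build_semantic_hierarchy([
    ("load", State.EXECUTION),
    ("transport_indoor", State.TRANSPORT),
    ("transport_outdoor", State.TRANSPORT),
    ("unload", State.EXECUTION),
])
location_to_stage = {"coordinate_3": "load", "map_tile_2": "transport_indoor"}
stage, state = estimate_state(hierarchy, location_to_stage, "coordinate_3")
print(stage, state)  # load State.EXECUTION
```

A sensed coordinate thus resolves to a stage and a state in two dictionary lookups, which is what makes the later state-transition check cheap to run continuously.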

The invention also provides a mobile vehicle, which includes a data acquisition device, at least one sensor, at least one actuator, a storage device, and a processor. The sensor is used to estimate the current location of the mobile vehicle. The actuator is used to transfer and transport objects. The storage device stores the data acquired by the data acquisition device and a plurality of computer instructions or programs. The processor is coupled to the data acquisition device, the sensor, the actuator, and the storage device, and is configured to execute the computer instructions or programs to: receive, through the data acquisition device, a task instruction for moving an object and the data required to execute the task instruction; divide the task instruction into a plurality of work stages according to mapped locations, and map each work stage to one of a transport state and an execution state to establish a semantic hierarchy; and map the current location estimated by the sensor to one of the work stages in the semantic hierarchy to estimate a current state of the mobile vehicle.

The mobile vehicle and its state estimation and sensor fusion switching method of the invention establish a semantic hierarchy by dividing a task instruction into a plurality of work stages and mapping them to different states. While executing a task of transferring and transporting objects, the mobile vehicle can map its estimated location to a current state and determine whether a state transition has occurred; when one occurs, it can quickly switch to a sensing combination suited to the current state and continue executing the task instruction. State estimation and sensor fusion switching can thereby be performed efficiently, realizing seamless switching among positioning systems.

To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

An embodiment of the invention provides a common architecture for automated guided vehicles (AGVs), in which a received task instruction is divided into a plurality of work stages according to their mapped locations to establish a semantic hierarchy, and each work stage is then mapped through the semantic hierarchy to a state layer according to its sequence and connection relationships to establish a state transition model. In real-time operation, the AGV can estimate its current location and map that location to the semantic hierarchy to estimate its current state. In addition, the AGV can compare the current state with the previous state to determine whether a state transition has occurred, and re-prioritize its sensors when one does, so as to efficiently switch to the control thread suited to the current state and continue the handling task.

FIG. 1 is a block diagram of a mobile vehicle according to an embodiment of the invention. Referring to FIG. 1, the mobile vehicle 10 of this embodiment is, for example, an electronic device such as an automated guided vehicle or a handling robot used to transfer and transport objects. The mobile vehicle 10 includes a data acquisition device 12, at least one sensor 14, at least one actuator 16, a storage device 18, and a processor 20, whose functions are described below.

The data acquisition device 12 is, for example, an interface device such as a universal serial bus (USB) interface, a Firewire interface, a Thunderbolt interface, or a card reader, which can connect external devices such as flash drives, portable hard drives, and memory cards to acquire data. In another embodiment, the data acquisition device 12 is an input tool such as a keyboard, mouse, touchpad, or touch screen, which detects a user's input operations to acquire input data. In yet another embodiment, the data acquisition device 12 is, for example, a network card supporting wired connections such as Ethernet, or a wireless network card supporting wireless communication standards such as Institute of Electrical and Electronics Engineers (IEEE) 802.11n/b/g, which can connect to external devices over a wired or wireless network to acquire data.

The sensor 14 is, for example, a wireless communication subsystem, a global positioning system (GPS) receiver, a Bluetooth Low Energy (BLE) module, an inertial measurement unit (IMU), a rotary encoder, a camera, a photodetector, a laser, or a combination thereof. It can sense environmental information such as electromagnetic waves, images, and sound waves around the mobile vehicle 10, as well as the inertia and displacement of the mobile vehicle 10 itself, and provides the detected information to the processor 20 to estimate the current location and/or state of the mobile vehicle 10. In one embodiment, the sensor 14 can be used together with systems such as a laser mapper and odometry to improve the precision of the location estimate.

The actuator 16 is, for example, a fork, an arm, a roller, a motor, or a combination thereof, which can form a fork-arm handling system capable of loading, unloading, and transporting objects according to control commands or signals issued by the processor 20.

The storage device 18 can be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, a similar element, or a combination of the above elements. In this embodiment, the storage device 18 stores the data acquired by the data acquisition device 12 and the computer instructions or programs that the processor 20 can access and execute. The acquired data includes task instructions and the map data, identification information, and other data required to execute them; the processor 20 can use the map data for location estimation and use the identification information to identify the objects to be transferred, the loading or unloading locations, and the loading or unloading parties. The loading and unloading parties may be identified by biometric features, object features, environmental features, or identification codes, without limitation.

The processor 20 is, for example, a central processing unit (CPU) or a graphics processing unit (GPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD), a similar device, or a combination of these devices. The processor 20 is connected to the data acquisition device 12, the sensor 14, the actuator 16, and the storage device 18; it loads, for example, computer instructions or programs from the storage device 18 and accordingly executes the state estimation and sensor fusion switching method for a mobile vehicle of the invention. The detailed steps of this method are described in the following embodiments.

FIG. 2 is a flowchart of a state estimation and sensor fusion switching method for a mobile vehicle according to an embodiment of the present application. Referring to FIGS. 1 and 2 together, the method of this embodiment is applicable to the mobile vehicle 10 of FIG. 1; the detailed steps of the method are described below with reference to the elements of the mobile vehicle 10.

In step S202, the processor 20 receives, through the data acquisition device 12, a task instruction for moving objects and the data required to execute the task instruction. The task instruction is, for example, issued by a factory manager to instruct the mobile vehicle 10 to transfer and transport objects within the factory. In one embodiment, the processor 20 stores frequently read or soon-to-be-used data, such as the map data of nearby areas and the identification information of the objects to be transported, the loading or unloading locations, and the loading or unloading parties, in the storage device 18 for later access.

In step S204, the processor 20 divides the task instruction into a plurality of work stages according to mapped locations, and maps each work stage to one of a transport state and an execution state to establish a semantic hierarchy. The task instruction consists of at least one of the tasks of loading, unloading, and transporting; the processor 20, for example, associates each of these tasks with at least one control thread and distinguishes the work stages according to the control threads. Loading and unloading stages are distinguished, for example, by the loading location, the unloading location, the object to be transferred, and the identification of the loading and unloading parties, while transport stages are distinguished, for example, by the geographic information systems of the places the transport passes through.

In one embodiment, the processor 20 classifies the state of the mobile vehicle 10 into two categories: a transport state and an execution state. In the transport state, the processor 20, for example, uses a path planner to set a path. The path planner constructs a visibility graph according to, for example, the method proposed by Ghosh and Mount, computes an optimal path over the edges of the visibility graph using a shortest-path algorithm such as Dijkstra's algorithm, and generates low-level commands that control the motors of the mobile vehicle 10 to adjust direction and speed so as to track the planned path. During transport, the processor 20 uses the sensor 14 to continuously sense the surroundings and confirm that the mobile vehicle 10 is following the path; when an obstacle is detected, the processor 20 controls the motors to slow down or stop according to the ranging data, while using the laser mapping system to map the shape of the obstacle and output it to the path planner to plan an obstacle-avoidance path. In the execution state, on the other hand, the processor 20, for example, activates the camera to identify the loading/unloading party and controls the transfer machinery to load and unload objects.
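The shortest-path step cited above can be illustrated with a plain Dijkstra search over visibility-graph edges. The patent does not specify an implementation, so the graph below is a made-up example and the code is only a sketch of the named algorithm.

```python
import heapq

def dijkstra(edges, start, goal):
    """edges: node -> list of (neighbor, cost). Returns (total_cost, path)."""
    pq = [(0.0, start, [start])]  # (cost so far, node, path taken)
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in edges.get(node, []):
            if nxt not in visited:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []  # goal unreachable

# Hypothetical visibility graph: nodes are mutually visible corner points,
# edge weights are Euclidean distances along collision-free segments.
graph = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
print(dijkstra(graph, "A", "D"))  # (4.0, ['A', 'B', 'C', 'D'])
```

The returned waypoint sequence would then be handed to the low-level motor controller described in the text for direction and speed tracking.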

In detail, when implementing state analysis, the state estimation and sensor fusion switching method of this embodiment establishes a semantic hierarchy to endow the system with cognitive capability. The semantic hierarchy can be dynamically established based on the task instruction and includes three levels: mapped locations, work stages, and states.

For example, FIG. 3 is a schematic diagram of a semantic hierarchy according to an embodiment of the present application. Referring to FIG. 3, the semantic hierarchy 30 includes a mapped location layer 32, a work stage layer 34, and a state layer 36. The mapped location layer 32 includes the areas or locations involved in executing the task instruction, such as coordinates 1-3, map tiles 1-3, and (transfer location/party) images 1-3. The work stage layer 34 includes a plurality of work stages, for example loading P1, transporting P2-P3, and unloading P4. Each location in the mapped location layer 32 can be mapped to one of loading P1, transporting P2-P3, and unloading P4; for example, coordinate 3 and map tile 3 can be mapped to loading P1, while coordinate 2, image 2, and image 3 can be mapped to unloading P4, and so on. The state layer 36 includes an execution state and a transport state, where loading P1 and unloading P4 map to the execution state and transporting P2-P3 map to the transport state. Each execution state and transport state can correspond to a thread of a feedback control loop; this thread, for example, couples specific sensors 14 and actuators 16 and controls them to perform specific operations.

In one embodiment, after establishing the semantic hierarchy, the processor 20 further maps each work stage through the semantic hierarchy to one of the transport state and the execution state according to the sequence and connection relationships among the work stages, so as to form a state transition model.

For example, FIG. 4 is a schematic diagram of a state transition model according to an embodiment of the present application. Referring to FIG. 4, the state transition model 40 defines, for example, the transitions between the work stages under the transport state and the execution state in the semantic hierarchy; that is, the state transition model 40 maps transitions between work stages to transitions between states. Taking FIG. 4 as an example, the state transition model 40 records the transitions among work stages 1-n mapped to the transport state, the transitions among work stages 1-m mapped to the execution state, and the transitions between work stages 1-n and work stages 1-m. The table at the lower left records the sensors and actuators coupled to work stages 1-n mapped to the transport state, and the table at the lower right records the sensors and actuators coupled to work stages 1-m mapped to the execution state. For example, work stage 1 mapped to the transport state couples the global positioning system and the base station, work stage 2 mapped to the transport state couples the photodetector, the inertial measurement unit, and the rotary encoder, and so on.
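The stage-to-stage transitions and per-stage device tables of FIG. 4 can be represented as a lookup structure like the one below. The concrete stage keys and device sets are invented for illustration, loosely following the examples in the text (GPS/base station for one transport stage, photodetector/IMU/rotary encoder for another).

```python
# Hypothetical state-transition model: each work stage lists its allowed
# successor stages and the sensors/actuators coupled to its control thread.
TRANSITION_MODEL = {
    ("transport", 1): {"next": [("transport", 2)],
                       "sensors": ["GPS", "base_station"],
                       "actuators": ["motor"]},
    ("transport", 2): {"next": [("execution", 1)],
                       "sensors": ["photodetector", "IMU", "rotary_encoder"],
                       "actuators": ["motor"]},
    ("execution", 1): {"next": [],
                       "sensors": ["camera"],
                       "actuators": ["fork", "arm"]},
}

def devices_for(stage):
    """Return the (sensors, actuators) to couple when entering a stage."""
    entry = TRANSITION_MODEL[stage]
    return entry["sensors"], entry["actuators"]

def is_valid_transition(prev, curr):
    """Check whether the model allows moving from stage prev to stage curr."""
    return curr in TRANSITION_MODEL[prev]["next"]
```

Keeping the device sets inside the same table as the transitions is what lets the vehicle re-couple sensors and actuators in one lookup when a transition fires.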

After the semantic hierarchy and the state transition model are established, in real-time operation the mobile vehicle 10 can estimate its current location and map that location to the semantic hierarchy to estimate its current state.

In detail, in step S206, the processor 20 uses the sensor 14 to estimate the current location of the mobile vehicle 10. The processor 20 may, for example, use a global positioning system or a base-station positioning system to estimate an outdoor location, or use positioning devices such as photodetectors and lasers to estimate an indoor location, without limitation.

Finally, in step S208, the processor 20 maps the current location to one of the work stages in the semantic hierarchy to estimate the current state of the mobile vehicle 10. Taking FIG. 3 as an example, when the processor 20 estimates the current location of the mobile vehicle 10 and obtains coordinate 3, it can map coordinate 3 through the semantic hierarchy 30 to the loading stage P1, and then map loading P1 to the execution state. Accordingly, the processor 20 can couple the corresponding sensors and actuators to perform primitive behaviors or skills according to the estimated current state.

After estimating the current state of the mobile vehicle 10, the processor 20, for example, compares the current state with the previous state estimated at the preceding point in time to determine whether a state transition has occurred. When a state transition is determined to have occurred, the processor 20 sequentially switches among the sensing combinations under that state transition according to the previously established state transition model, so as to select an available sensing combination and continue executing the task instruction. A sensing combination includes at least one sensor and/or actuator. By re-prioritizing the combinations of sensing signal sources upon a state transition, the vehicle can efficiently switch to the control thread suited to the current state and continue its work.
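The compare-and-switch loop described above can be sketched as two small functions: detect a transition by comparing states, then walk the prioritized sensing combinations until one is usable. `matches_site` here is a stand-in for whatever availability check the vehicle actually performs against the on-site positioning system; it is an assumption of this sketch.

```python
def detect_transition(prev_state, curr_state):
    """A transition occurred whenever the estimated state changed."""
    return prev_state != curr_state

def select_sensing_combination(combinations, matches_site):
    """Try sensing combinations in priority order; return the first usable one.

    combinations: list of sensor-name tuples, highest priority first.
    matches_site: predicate reporting whether a combination works on site.
    """
    for combo in combinations:
        if matches_site(combo):
            return combo
    return None  # no combination matched; the caller must handle this
```

A caller would run `detect_transition` once per state estimate and invoke `select_sensing_combination` only when it returns `True`, keeping the switching cost off the common path.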

For example, FIGS. 5A to 5D illustrate an example of the sensor fusion switching method according to an embodiment of the present application. The automated guided vehicle V of this embodiment is, for example, an automatic pickup-and-delivery vehicle equipped with transfer machinery for delivering goods from a warehouse to outdoor customers.

Referring to FIG. 5A, the automated guided vehicle V receives a task instruction for transporting an object O and the data required to execute it, including the position of the object O on a shelf S and the identification code I of the object O (the QR code shown in the figure). It then performs state analysis, determines that it is located next to the shelf S in the warehouse, and enters the execution state to pick up the goods. The automated guided vehicle V uses the camera C to capture the identification code I of the object O on the shelf S to identify the object O, and upon confirming that the object O is the cargo indicated by the task instruction, uses the transfer machinery A to pick it up.

Referring to FIG. 5B, after the goods are picked up, the autonomous mobile vehicle V switches from the execution state to the transportation state and activates the path planning module to plan the delivery route. Because switching from the execution state to the transportation state triggers a state transition, the autonomous mobile vehicle V sequentially switches its sensing combinations until the selected combination matches the on-site positioning system.

For example, Table 1 below lists the sensing combinations for this state transition. The autonomous mobile vehicle V switches among these combinations in sequence to select an available one and continue executing the task instruction. In this example, after using sensing combination 1 the vehicle finds that it cannot match the on-site positioning system, so it switches to sensing combination 2; finding that combination 2 does match the on-site positioning system, it directly adopts sensing combination 2 to continue executing the task instruction.

Table 1
1. WiFi, IMU, rotary encoder
2. BLE, IMU, rotary encoder
3. Optical sensor, IMU, rotary encoder

Referring to FIG. 5C, when the autonomous mobile vehicle V moves along the planned route and is about to move from inside the warehouse to outdoors, the state mapped from the currently estimated position differs from the state estimated at the previous time point (that is, the work phase changes from warehouse to outdoors). This again triggers a state transition, and the sensing combinations are rescheduled.
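The position-to-state mapping that triggers this transition can be sketched as follows. The region boundaries, phase names, and one-dimensional position test are illustrative assumptions only; the disclosure does not specify how regions are encoded.

```python
# Illustrative sketch of the semantic hierarchy: an estimated position
# is mapped to a work phase, and each work phase is mapped to a state
# (execution or transportation). A transition is triggered when the
# mapped phase changes between consecutive time points. Regions and
# phase names are hypothetical.

SEMANTIC_HIERARCHY = [
    # (region membership test, work phase, state)
    (lambda x, y: x < 10.0, "pick up at shelf", "execution"),
    (lambda x, y: 10.0 <= x < 50.0, "transport inside warehouse", "transportation"),
    (lambda x, y: x >= 50.0, "transport outdoors", "transportation"),
]

def estimate_state(x, y):
    """Map an estimated position to (work phase, state)."""
    for in_region, phase, state in SEMANTIC_HIERARCHY:
        if in_region(x, y):
            return phase, state
    raise ValueError("position outside all mapped regions")

prev_phase, _ = estimate_state(30.0, 0.0)  # still inside the warehouse
curr_phase, _ = estimate_state(60.0, 0.0)  # now outdoors
transition_triggered = prev_phase != curr_phase
print(transition_triggered)  # True
```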

For example, Table 2 below lists the sensing combinations for this state transition. After switching to sensing combination 1, the autonomous mobile vehicle V finds that it matches the on-site positioning system, so it directly adopts sensing combination 1 to continue executing the task instruction. Because the autonomous mobile vehicle V switches through the combinations in the order most likely to match for this particular transition (that is, the work phase changing from warehouse to outdoors), it can switch positioning systems efficiently and seamlessly.

Table 2
1. GPS, base station
2. BLE, IMU, rotary encoder
3. Optical sensor, IMU, rotary encoder
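The efficiency claimed above comes from ordering the candidates per transition by how likely they are to match. One way such an ordering could be maintained is sketched below; the match counts are illustrative assumptions, and the disclosure does not state how the per-transition order is derived.

```python
# Illustrative sketch: order candidate sensing combinations for a
# given transition by how often each matched in the past, so the most
# likely candidate is tried first. The counts below are hypothetical.
from collections import Counter

match_history = Counter({
    ("GPS", "base station"): 17,                       # usually matches outdoors
    ("BLE", "IMU", "rotary encoder"): 3,
    ("optical sensor", "IMU", "rotary encoder"): 1,
})

def ordered_candidates(history):
    """Return combinations sorted from most to least frequently matched."""
    return [combo for combo, _count in history.most_common()]

print(ordered_candidates(match_history)[0])  # ('GPS', 'base station')
```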

Referring to FIG. 5D, after arriving at the unloading site, the autonomous mobile vehicle V estimates its current position, maps the estimated position to the semantic hierarchy, and thereby estimates that the current state is the execution state. Switching from the transportation state to the execution state triggers a state transition, whereupon the autonomous mobile vehicle V switches its sensing combination to perform the identification operations required for unloading.

For example, Table 3 below lists the sensing combinations for this state transition. When the autonomous mobile vehicle V switches to sensing combination 1, the camera is activated. Since the camera supports the identification of the unloading target T required during unloading (for example, face recognition), the vehicle directly adopts sensing combination 1 to continue executing the task instruction. Once the identity of the unloading target T is confirmed to match, the autonomous mobile vehicle V activates the transfer machine A to deliver the object O to the unloading target T.

Table 3
1. Camera
2. GPS, base station
3. BLE, IMU, rotary encoder
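The unloading step above, in which the actuator is triggered only after the recipient's identity is confirmed, can be sketched as follows. The functions `identify_face` and `transfer` are hypothetical stand-ins for the camera-based recognition and the transfer machine A; they are not part of the disclosure.

```python
# Illustrative sketch of the unloading flow: identify the recipient
# via the camera-based sensing combination first, and actuate the
# transfer machine only on an identity match. Both callables are
# hypothetical stand-ins.

def unload(identify_face, transfer, expected_id):
    """Return True if the cargo was handed over, False on a mismatch."""
    observed = identify_face()   # e.g. face recognition via the camera
    if observed != expected_id:
        return False             # identity mismatch: keep the cargo
    transfer()                   # hand the object to the recipient
    return True

delivered = unload(lambda: "customer-42", lambda: None, "customer-42")
print(delivered)  # True
```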

In summary, the mobile vehicle and its state estimation and sensor fusion switching method of the present invention establish a semantic hierarchy by dividing a task instruction into multiple work phases and mapping them to different states. While the mobile vehicle executes a task of transferring and transporting objects, it maps its estimated position to the current state and determines whether a state transition has occurred; when a transition occurs, it can quickly switch to the sensing combination suited to the new state and continue executing the task instruction. In this way, state estimation and sensor fusion switching of the mobile vehicle are performed efficiently, achieving seamless handover between positioning systems.

Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary skill in the relevant art may make minor changes and refinements without departing from the spirit and scope of the invention; the protection scope of the present invention shall therefore be defined by the appended claims.

10: mobile vehicle
12: data acquisition device
14: sensor
16: actuator
18: storage device
20: processor
30: semantic hierarchy
32: mapped position layer
34: work phase layer
36: state layer
40: state transition model
A: transfer machine
C: camera
I: identification code
O: object
P1~P4: work phases
S: shelf
T: unloading target
V: autonomous mobile vehicle
W: warehouse
S202~S208: steps

FIG. 1 is a block diagram of a mobile vehicle according to an embodiment of the present invention.
FIG. 2 is a flowchart of a state estimation and sensor fusion switching method for a mobile vehicle according to an embodiment of the present application.
FIG. 3 is a schematic diagram of a semantic hierarchy according to an embodiment of the present application.
FIG. 4 is a schematic diagram of a state transition model according to an embodiment of the present application.
FIG. 5A to FIG. 5D illustrate an example of the sensor fusion switching method according to an embodiment of the present application.

S202~S208: steps

Claims (16)

1. A state estimation and sensor fusion switching method for a mobile vehicle, the mobile vehicle comprising at least one sensor, at least one actuator, and a processor, and being used to transfer and transport an object, the method comprising the following steps:
receiving a task instruction for transporting the object and the data required to execute the task instruction;
dividing the task instruction into a plurality of work phases according to mapped positions, and mapping each of the work phases to one of a transportation state and an execution state, to establish a semantic hierarchy;
estimating a current position of the mobile vehicle by using the sensor; and
mapping the current position to one of the work phases in the semantic hierarchy, to estimate a current state of the mobile vehicle.

2. The method according to claim 1, wherein after the step of dividing the task instruction into the plurality of work phases according to the mapped positions and mapping each of the work phases to one of the transportation state and the execution state to establish the semantic hierarchy, the method further comprises:
mapping each of the work phases, along with the semantic hierarchy, to one of the transportation state and the execution state according to the order of and connections among the work phases, to form a state transition model.
3. The method according to claim 2, wherein after the step of estimating the current state of the mobile vehicle, the method further comprises:
comparing the current state with a previous state estimated at a preceding time point, to determine whether a state transition has occurred; and
when the state transition is determined to have occurred, sequentially switching among a plurality of sensing combinations under the state transition according to the state transition model, to select an available one of the sensing combinations and continue executing the task instruction, wherein each of the sensing combinations includes at least one of the sensor and the actuator.

4. The method according to claim 1, wherein the task instruction is composed of at least one of a loading work, an unloading work, and a transporting work, and the step of dividing the task instruction into the plurality of work phases according to the mapped positions and mapping each of the work phases to one of the transportation state and the execution state to establish the semantic hierarchy comprises:
corresponding each of the works to at least one control thread, and dividing the work phases according to the control threads.

5. The method according to claim 3, wherein the loading work and the unloading work include dividing the work phases according to a loading location, an unloading location, a transferred object, and the identification of a loading target and an unloading target.
6. The method according to claim 4, wherein the identification of the loading target and the unloading target includes biometric features, object features, environmental features, or identification codes.

7. The method according to claim 3, wherein the transporting work includes dividing the work phases according to the respective geographic information system of at least one place passed through during transportation.

8. The method according to claim 1, further comprising:
detecting, by using the sensor, an obstacle located on a transport path of the mobile vehicle; and
when the obstacle is detected, re-planning the transport path of each of the work phases in the transportation state.
9. A mobile vehicle, comprising:
a data acquisition device;
at least one sensor, configured to estimate a current position of the mobile vehicle;
at least one actuator, configured to transfer and transport an object;
a storage device, storing data acquired by the data acquisition device and a plurality of computer instructions or programs; and
a processor, coupled to the data acquisition device, the sensor, the actuator, and the storage device, and configured to execute the computer instructions or programs to:
receive, via the data acquisition device, a task instruction for transporting the object and the data required to execute the task instruction;
divide the task instruction into a plurality of work phases according to mapped positions, and map each of the work phases to one of a transportation state and an execution state, to establish a semantic hierarchy; and
map the current position estimated by the sensor to one of the work phases in the semantic hierarchy, to estimate a current state of the mobile vehicle.

10. The mobile vehicle according to claim 9, wherein the processor further maps each of the work phases, along with the semantic hierarchy, to one of the transportation state and the execution state according to the order of and connections among the work phases, to form a state transition model.
11. The mobile vehicle according to claim 10, wherein the processor further compares the current state with a previous state estimated at a preceding time point to determine whether a state transition has occurred, and when the state transition is determined to have occurred, sequentially switches among a plurality of sensing combinations under the state transition according to the state transition model, to select an available one of the sensing combinations and continue executing the task instruction, wherein each of the sensing combinations includes at least one of the sensor and the actuator.

12. The mobile vehicle according to claim 9, wherein the task instruction is composed of at least one of a loading work, an unloading work, and a transporting work, and the processor corresponds each of the works to at least one control thread and divides the work phases according to the control threads.

13. The mobile vehicle according to claim 12, wherein the loading work and the unloading work include dividing the work phases according to a loading location, an unloading location, a transferred object, and the identification of a loading target and an unloading target.

14. The mobile vehicle according to claim 13, wherein the identification of the loading target and the unloading target includes biometric features, object features, environmental features, or identification codes.
15. The mobile vehicle according to claim 12, wherein the transporting work includes dividing the work phases according to the respective geographic information system of at least one place passed through during transportation.

16. The mobile vehicle according to claim 9, wherein the processor further detects, by using the sensor, an obstacle located on a transport path of the mobile vehicle, and when the obstacle is detected, re-plans the transport path of each of the work phases in the transportation state.
TW108146328A 2019-12-18 2019-12-18 State estimation and sensor fusion methods for autonomous vehicles TWI715358B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW108146328A TWI715358B (en) 2019-12-18 2019-12-18 State estimation and sensor fusion methods for autonomous vehicles
CN202010086218.9A CN113075923B (en) 2019-12-18 2020-02-11 Mobile carrier and state estimation and sensing fusion switching method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW108146328A TWI715358B (en) 2019-12-18 2019-12-18 State estimation and sensor fusion methods for autonomous vehicles

Publications (2)

Publication Number Publication Date
TWI715358B TWI715358B (en) 2021-01-01
TW202124990A true TW202124990A (en) 2021-07-01

Family

ID=75237391

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108146328A TWI715358B (en) 2019-12-18 2019-12-18 State estimation and sensor fusion methods for autonomous vehicles

Country Status (2)

Country Link
CN (1) CN113075923B (en)
TW (1) TWI715358B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002023297A1 (en) * 2000-09-11 2002-03-21 Kunikatsu Takase Mobile body movement control system
TW201321292A (en) * 2011-11-16 2013-06-01 Ind Tech Res Inst Transportation method, storage device, container, support plate, and trailer thereof
EP3074832A4 (en) * 2013-11-27 2017-08-30 The Trustees Of The University Of Pennsylvania Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft micro-aerial vehicle (mav)
KR101644270B1 (en) * 2015-05-15 2016-08-01 한경대학교 산학협력단 Unmanned freight transportation system using automatic positioning and moving route correcting
CN111792034B (en) * 2015-05-23 2022-06-24 深圳市大疆创新科技有限公司 Method and system for estimating state information of movable object using sensor fusion
KR101822103B1 (en) * 2015-10-26 2018-01-25 주식회사 가치소프트 System for sorting product using sorting apparatus and method thereof
KR101793932B1 (en) * 2016-06-13 2017-11-07 주식회사 가치소프트 System for arranging product
US10295365B2 (en) * 2016-07-29 2019-05-21 Carnegie Mellon University State estimation for aerial vehicles using multi-sensor fusion
US10866102B2 (en) * 2016-12-23 2020-12-15 X Development Llc Localization of robotic vehicles
US10038979B1 (en) * 2017-01-31 2018-07-31 Qualcomm Incorporated System and method for ranging-assisted positioning of vehicles in vehicle-to-vehicle communications
CN110223212B (en) * 2019-06-20 2021-05-18 上海智蕙林医疗科技有限公司 Dispatching control method and system for transport robot

Also Published As

Publication number Publication date
CN113075923A (en) 2021-07-06
TWI715358B (en) 2021-01-01
CN113075923B (en) 2024-04-12
