TW202020734A - Vehicle, vehicle positioning system, and vehicle positioning method - Google Patents


Info

Publication number
TW202020734A
TW202020734A
Authority
TW
Taiwan
Prior art keywords
dimensional
vehicle
static object
point cloud
dimensional image
Prior art date
Application number
TW108112604A
Other languages
Chinese (zh)
Other versions
TWI754808B (en)
Inventor
許博鈞
粘為博
吳依玲
林修宇
陳世昕
鄭安凱
楊宗賢
Original Assignee
財團法人工業技術研究院 (Industrial Technology Research Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 財團法人工業技術研究院 (Industrial Technology Research Institute)
Priority to CN201910370531.2A priority Critical patent/CN111238494B/en
Priority to US16/508,471 priority patent/US11024055B2/en
Priority to JP2019136998A priority patent/JP7073315B2/en
Publication of TW202020734A publication Critical patent/TW202020734A/en
Application granted
Publication of TWI754808B publication Critical patent/TWI754808B/en


Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a vehicle, a vehicle positioning system, and a vehicle positioning method. The vehicle positioning system includes a two-dimensional (2D) image sensor, a three-dimensional (3D) sensor, and a processor. The 2D image sensor obtains 2D image data. The 3D sensor obtains 3D point cloud data. The processor is coupled to the 2D image sensor and the 3D sensor and is at least configured to: fuse the 2D image data and the 3D point cloud data to generate 3D image data; identify at least one static object from the 2D image data; for each of the at least one static object, obtain the 3D point cloud data of that static object from the 3D image data; and calculate a vehicle relative coordinate of the vehicle according to the 3D point cloud data of the static object.

Description

Vehicle, vehicle positioning system, and vehicle positioning method

The present disclosure relates to a vehicle, a vehicle positioning system, and a vehicle positioning method.

Autonomous driving technology is expected to improve driving safety and convenience and to reduce the driver's burden. For autonomous driving, environmental perception is an essential capability for avoiding collisions, and accurate positioning is equally important; in urban environments in particular, the many complex objects surrounding a vehicle on city roads easily introduce positioning errors. Based on the type of sensor used, vehicle positioning methods are generally divided into active sensing and passive sensing. Passive sensors include, for example, cameras and the Global Positioning System (GPS), while active sensors include, for example, LiDAR sensors. A camera equipped with an image object detection module can identify objects in an image frame but cannot locate them correctly in three-dimensional space, so it cannot correctly determine the vehicle's position, which causes positioning errors. With typical GPS positioning, when a vehicle is in an area such as a tunnel or an indoor parking lot, the receiver may fail to acquire a signal because of shielding and thus cannot accurately locate the vehicle. A LiDAR sensor can detect objects and locate them in three-dimensional space, but it cannot recognize what category a detected object belongs to.

A conventional self-driving vehicle relies on pre-built map information, which must include a variety of road information such as road boundaries, traffic lights, and speed-limit signs, so that the self-driving algorithm can keep the vehicle on its designated route and in compliance with traffic rules. The basic method of building such map data is to mount LiDAR and GPS on a vehicle, drive along the roads, and then overlay/integrate offline the LiDAR point cloud information (i.e., the LiDAR point cloud map) with the GPS coordinate information (i.e., the GPS coordinate map). However, a self-driving vehicle requires a positioning accuracy with an error below 10 cm. While the vehicle is driving, the large volume of real-time point cloud data obtained by the LiDAR sensor is matched against the built-in LiDAR point cloud map to obtain positioning information; this real-time data, however, contains redundant information, such as moving vehicles, pedestrians, or vehicles parked along the roadside, which easily causes matching errors and increases the computational load.

Therefore, within existing positioning technology, overcoming both the positioning error of camera-based image object detection and the computational cost of the LiDAR sensor's massive point cloud data has become a pressing problem.

In view of this, the present disclosure provides a vehicle, a vehicle positioning system, and a vehicle positioning method that can be used to solve the above technical problems.

The present disclosure provides a vehicle positioning system configured on a vehicle. The vehicle positioning system includes a two-dimensional (2D) image sensor, a three-dimensional (3D) sensor, and a processor. The 2D image sensor obtains 2D image data; the 3D sensor obtains 3D point cloud data; and the processor is coupled to the 2D image sensor and the 3D sensor and is at least configured as: an alignment module that fuses the 2D image data and the 3D point cloud data to generate 3D image data; a static object recognition module that recognizes at least one static object from the 2D image data and, for each of the at least one static object, obtains the 3D point cloud data of that static object from the 3D image data; and a positioning module that calculates a vehicle relative coordinate of the vehicle according to the 3D point cloud data of the static object.

The present disclosure provides a vehicle positioning method suitable for a vehicle positioning system configured on a vehicle. The method includes: obtaining 2D image data; obtaining 3D point cloud data; fusing the 2D image data and the 3D point cloud data to generate 3D image data; recognizing at least one static object from the 2D image data; for each of the at least one static object, obtaining the 3D point cloud data of that static object from the 3D image data; and calculating a vehicle relative coordinate of the vehicle according to the 3D point cloud data of the static object.

The present disclosure provides a vehicle on which a vehicle positioning system is configured. The vehicle includes a 2D image sensor, a 3D sensor, and a processor. The processor is coupled to the 2D image sensor and the 3D sensor and is at least configured as: an alignment module that fuses the 2D image data and the 3D point cloud data to generate 3D image data; a static object recognition module that recognizes at least one static object from the 2D image data and, for each of the at least one static object, obtains the 3D point cloud data of that static object from the 3D image data; and a positioning module that calculates a vehicle relative coordinate of the vehicle according to the 3D point cloud data of the static object.

Based on the above, the vehicle, vehicle positioning system, and vehicle positioning method proposed in the present disclosure allow a vehicle to combine two heterogeneous sensors, a 2D image sensor and a 3D sensor, to obtain 3D image data; after the static objects in the 2D image data are recognized, the 3D point cloud data of each static object is obtained from the 3D image data, and the vehicle relative coordinate between the vehicle and the static objects can be calculated, thereby achieving vehicle positioning.

To make the above features and advantages of the present disclosure more comprehensible, embodiments are described in detail below in conjunction with the accompanying drawings.

FIG. 1 is a schematic diagram of a vehicle positioning system according to an embodiment of the present disclosure. In the embodiment of FIG. 1, the vehicle positioning system 100 is configured to run on a vehicle, but the disclosure is not limited thereto. The vehicle positioning system 100 and methods disclosed herein can be implemented in alternative environments to detect objects in traffic, in a field of view, and so on. For example, one or more of the functions described herein can be implemented in electronic devices, mobile devices, game consoles, automotive system consoles (e.g., ADAS), wearable devices (e.g., body-worn cameras), head-mounted displays, and the like. Additional embodiments include, but are not limited to, robots or robotic devices, unmanned aerial vehicles (UAVs), and remotely piloted aircraft. In the embodiment of FIG. 1, the vehicle of the vehicle positioning system 100 may be a motor vehicle (e.g., a car, truck, motorcycle, bus, or train), a watercraft (e.g., a ship or boat), an aircraft (e.g., an airplane or helicopter), a spacecraft (e.g., a space shuttle), a bicycle, or another conveyance. As illustrative examples, the vehicle may be a wheeled vehicle, a tracked vehicle, a railed vehicle, an airborne vehicle, or a skid vehicle. In some cases, the vehicle may be operated by one or more drivers; for example, the vehicle may include an advanced driver assistance system (ADAS) configured to assist the driver. In other cases, the vehicle may be a computer-controlled vehicle. Furthermore, although the vehicle positioning system 100 in the embodiment of FIG. 1 runs on the vehicle, in other embodiments the vehicle positioning system 100 disclosed herein may run in the "cloud" or otherwise outside the vehicle. For example, the vehicle or another electronic device may provide position data and/or image data to another device that performs the vehicle positioning.

The vehicle positioning system 100 includes one or more 2D image sensors 102, one or more 3D sensors 104, a processor 106, and a storage circuit 108. In the following embodiments, the vehicle positioning system 100 of FIG. 1 is configured on the vehicle, but the disclosure is not limited thereto; the storage circuit 108 also need not be included in the vehicle positioning system 100.

The 2D image sensor 102 is an image capture device, imaging device, or camera capable of capturing images, such as a charge-coupled device (CCD) camera and/or a complementary metal-oxide semiconductor (CMOS) camera. Because the 2D image sensors 102 can be installed at different positions on the vehicle, they can capture images at different angles and with different fields of view; for example, a front camera, side cameras, and a rear camera can be installed as needed.

The 2D image sensor 102 obtains 2D image data and can provide the 2D image data to the processor 106. The 2D image sensor 102 can capture images continuously, periodically, or occasionally, and can load the images (i.e., the 2D image data) into the storage circuit 108.

The 3D sensor 104 is a sensor capable of measuring the distance between the vehicle and external objects, such as a LiDAR sensor. The 3D sensor 104 obtains the light signals reflected from objects within its scanning range to produce 3D point cloud data. The 3D sensor 104 can capture 3D point cloud data continuously, periodically, or occasionally, load it into the storage circuit 108, and provide it to the processor 106. Each 3D point cloud datum contains distance information between the vehicle and a reflecting object, expressed as a spatial position (X, Y, Z). A 3D sensor such as a LiDAR sensor can measure the distance between the vehicle and reflecting objects largely independently of lighting conditions.

The processor 106 is coupled to the 2D image sensor 102 and the 3D sensor 104 and receives the 2D image data and the 3D point cloud data. In an embodiment, the processor 106 can retrieve the 2D image data and the 3D point cloud data from the storage circuit 108. For example, the storage circuit 108 or a portion thereof can be configured to store the 2D image data received from the 2D image sensor 102 and the 3D point cloud data received from the 3D sensor 104, acting as a circular buffer for the data received from the two sensors.
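The circular-buffer behavior described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the frame format and buffer capacity are assumptions.

```python
from collections import deque

class SensorFrameBuffer:
    """Fixed-capacity circular buffer: once full, the oldest sensor
    frame is discarded automatically when a new one arrives."""

    def __init__(self, capacity=8):
        self._frames = deque(maxlen=capacity)

    def push(self, frame):
        self._frames.append(frame)  # deque with maxlen drops the oldest entry

    def latest(self):
        return self._frames[-1] if self._frames else None

    def __len__(self):
        return len(self._frames)

# A buffer of capacity 3 receiving 5 frames keeps only the last 3.
buf = SensorFrameBuffer(capacity=3)
for i in range(5):
    buf.push({"seq": i})
print(buf.latest()["seq"])  # 4
print(len(buf))             # 3
```

The same structure can back both the image stream and the point cloud stream, so the processor always reads the most recent frames without unbounded memory growth.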

In an embodiment, the processor 106 may include an alignment module 110, a static object recognition module 112, and a positioning module 114. The alignment module 110, the static object recognition module 112, and the positioning module 114 may be hardware components of the vehicle, software (e.g., instructions) executed by the processor 106, or a combination of the two.

The vehicle positioning system 100 can pre-store preset map information in the storage circuit 108. The preset map information contains road-surface information such as the start and end coordinates of each road segment, the lane width, the number of lanes, the road heading angle, the road curvature, and the segment length, and it includes 3D point cloud information obtained through a 3D sensor 104 (e.g., a LiDAR sensor) as well as GPS absolute coordinate information obtained through GPS. The preset map information can further be corrected with RTK (Real-Time Kinematic) data from the National Land Surveying and Mapping Center and projected onto an absolute coordinate system through coordinate conversion.

The storage circuit 108 is, for example, a memory, a hard disk, or any other component that can store data, and can record or store multiple modules, each composed of one or more code segments. The processor 106 is coupled to the storage circuit 108 and can execute each step of the vehicle positioning method proposed in the present disclosure by accessing the modules in the storage circuit 108. In different embodiments, the processor 106 may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, multiple microprocessors, one or more microprocessors combined with a digital signal processor core, a controller, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), any other kind of integrated circuit, a state machine, an Advanced RISC Machine (ARM)-based processor, or the like.

The alignment module 110 fuses the 2D image data obtained by the 2D image sensor 102 with the 3D point cloud data obtained by the 3D sensor 104 through an alignment algorithm to produce 3D image data. The 3D image data contains, for each image pixel, both color information (e.g., RGB data) and depth data (e.g., position (X, Y, Z) data), so the 3D image data is RGBXYZ image data. In an embodiment, the alignment algorithm that fuses the 2D image data and the 3D point cloud data is as follows. Let the 3D point cloud data be denoted $(x, y, z)$ and a pixel of the 2D image data be denoted $(u, v)$. The 3D point cloud data $(x, y, z)$ is mapped to the pixel $(u, v)$ of the 2D image data according to the pinhole projection model:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} $$

$$ M = K \begin{bmatrix} R & T \end{bmatrix} $$

$$ K = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$

where $s$ is the homogeneous scale factor; $f_u$ and $f_v$ are the focal lengths in the horizontal and vertical directions, respectively; $(u_0, v_0)$ is the center point of the image plane; the transformation matrix $M$ maps the 3D point cloud data $(x, y, z)$ to the pixel $(u, v)$ of the 2D image data; and $R$ and $T$ are the rotation matrix and the translation vector, respectively.
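As an illustration of the projection just described, the following sketch maps a 3D LiDAR point to an image pixel. The intrinsic values and the identity extrinsic pose are assumed example numbers, not calibration data from the disclosure.

```python
import numpy as np

def project_point(pt_xyz, fu, fv, u0, v0, R, T):
    """Map a 3D point (x, y, z) to a pixel (u, v) via the pinhole
    model: s * [u, v, 1]^T = K [R | T] [x, y, z, 1]^T."""
    K = np.array([[fu, 0.0, u0],
                  [0.0, fv, v0],
                  [0.0, 0.0, 1.0]])
    M = K @ np.hstack([R, T.reshape(3, 1)])  # 3x4 transformation matrix
    uvw = M @ np.append(pt_xyz, 1.0)         # homogeneous projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]  # divide out the scale s

# Assumed intrinsics; identity rotation and zero translation for clarity.
R = np.eye(3)
T = np.zeros(3)
u, v = project_point(np.array([1.0, 2.0, 10.0]),
                     fu=700.0, fv=700.0, u0=640.0, v0=360.0, R=R, T=T)
print(round(u, 1), round(v, 1))  # 710.0 500.0
```

Applying this to every LiDAR point produces, for each pixel it lands on, the combined RGB + (X, Y, Z) record that the text calls RGBXYZ image data.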

FIG. 2 is a schematic diagram illustrating the alignment of 2D image data with 3D point cloud data according to an embodiment of the present disclosure. Referring to FIGS. 1 and 2 together, the 2D image data obtained through the 2D image sensor 102 (the background image in FIG. 2) and the 3D point cloud data obtained through the 3D sensor 104 (the dotted overlay in FIG. 2) are fused by the alignment algorithm into the 3D image data shown in FIG. 2.

From the 2D image data obtained by the 2D image sensor 102, the static object recognition module 112 can detect and recognize at least one static object. For example, the static object recognition module 112 may include a deep learning module dedicated to detecting static objects and recognizing their categories, such as road signs, buildings, transformer boxes, roads, sidewalks, bridges, trees, utility poles, and Jersey barriers. The deep learning module or deep neural network of the static object recognition module 112 recognizes the static objects in an image through an image recognition algorithm; well-known object recognition algorithms, such as object contour tracking, already exist and are not detailed here. The search window with which the static object recognition module 112 detects a static object may correspond to the object's model, frame, or bounding box (BB). FIG. 3 is a schematic diagram illustrating the recognition of static objects from 2D image data according to an embodiment of the present disclosure. Referring to FIG. 3, each static object in the 2D image data is recognized by the static object recognition algorithm and marked with a bounding box, as shown in FIG. 3; each bounding box contains the information of one static object, such as the object category, object length, object width, and position in the 2D image data.

Once each static object in the 2D image data has been obtained, the 3D point cloud data (X, Y, Z data) of each static object can be obtained from the 3D image data according to that object's information (e.g., its category, length, width, and position). Each static object corresponds to a region of the 3D image data matching the object's size, so each static object can contain multiple 3D point cloud data (depth information). FIG. 4 is a schematic diagram, following the embodiment of FIG. 3, illustrating how the 3D point cloud data of each static object is obtained from the 3D image data after each static object has been recognized. Referring to FIGS. 3 and 4, the information of each static object marked by a bounding box in FIG. 3 (including traffic signs, street lights, and the road) yields the corresponding 3D point cloud data of that object in the 3D image data.
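The step of collecting each static object's 3D points from the fused data can be sketched as below. The (u, v, X, Y, Z) record layout and the (u_min, v_min, u_max, v_max) bounding-box format are assumptions for illustration, not formats specified by the disclosure.

```python
import numpy as np

def points_in_bbox(fused_points, bbox):
    """fused_points: N x 5 array of (u, v, X, Y, Z) records, i.e. each
    projected pixel together with its 3D position from the aligned cloud.
    bbox: (u_min, v_min, u_max, v_max) from the static object detector.
    Returns the (X, Y, Z) points whose pixel falls inside the box."""
    u_min, v_min, u_max, v_max = bbox
    u, v = fused_points[:, 0], fused_points[:, 1]
    mask = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return fused_points[mask, 2:5]

# Two points projected inside a traffic-sign box, one far outside it.
fused = np.array([[100.0,  50.0,  5.0,  1.0, 2.0],
                  [110.0,  55.0,  5.1,  1.1, 2.0],
                  [400.0, 300.0, 20.0, -3.0, 1.0]])
sign_points = points_in_bbox(fused, bbox=(90, 40, 130, 70))
print(sign_points.shape)  # (2, 3)
```

Running this once per bounding box yields exactly the per-object point clouds that the positioning step consumes, discarding all other points.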

Next, once the vehicle positioning system 100 on the vehicle has obtained the 3D point cloud data of each static object, the positioning module 114 can calculate the vehicle relative coordinate between the vehicle and the static objects through a positioning algorithm, for example a three-point positioning algorithm, or by iteratively matching the current 3D point cloud data against the map point cloud data, minimizing through optimization the mean squared distances (MSD) between the vehicle point cloud data and the 3D point cloud data of at least one static object, thereby achieving vehicle positioning. In an embodiment, the vehicle relative coordinate can be mapped to the pre-stored preset map information; because the preset map information contains 3D point cloud information and GPS absolute coordinate information, coordinate definition and conversion between the vehicle relative coordinate and the preset map information yields the three-dimensional vehicle absolute coordinate of the vehicle.
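A minimal sketch of the mean-squared-distance idea above: if the observed static-object points and the corresponding map points are already in known correspondence, the translation minimizing the mean squared distance is simply the difference of centroids. This is an illustrative simplification — a full implementation would also estimate rotation and iterate (e.g., ICP) — and the numbers are invented.

```python
import numpy as np

def estimate_offset(observed_pts, map_pts):
    """observed_pts: static-object points in the vehicle frame (N x 3).
    map_pts: the same points in the map frame (N x 3), same order.
    The translation t minimizing mean ||map - (observed + t)||^2 is the
    difference of the two centroids (set the gradient to zero)."""
    return map_pts.mean(axis=0) - observed_pts.mean(axis=0)

# Landmark points seen about 10 m ahead and 2 m left of the vehicle...
observed = np.array([[10.0, 2.0, 0.0], [11.0, 2.0, 0.0], [10.0, 3.0, 0.0]])
# ...whose map-frame coordinates are offset by the vehicle's position.
map_pts = observed + np.array([100.0, 50.0, 0.0])
vehicle_in_map = estimate_offset(observed, map_pts)
print(vehicle_in_map)  # [100.  50.   0.]
```

Because only static-object points enter this optimization, the moving-object clutter that inflates the MSD in whole-cloud matching never appears in the residual.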

In another embodiment, the 3D point cloud data of each static object can further be mapped to the pre-stored preset map information. Because the preset map information contains 3D point cloud information and GPS absolute coordinate information, comparing the 3D point cloud data of a static object with the static 3D objects in the preset map information yields the 3D object absolute coordinate of that static object; from this, the position of the vehicle within the preset map information can be inferred, and the three-dimensional vehicle absolute coordinate of the vehicle is obtained.
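The inference in this embodiment can be sketched as: once a static object's absolute coordinate is known from the map, subtracting the object's vehicle-relative offset (rotated into the map frame) from it gives the vehicle's absolute position. Treating the vehicle pose as a heading angle about the Z axis is an assumption of this sketch, not a detail stated in the disclosure.

```python
import numpy as np

def vehicle_absolute(object_abs, object_rel, heading_rad):
    """object_abs: the static object's absolute (map) coordinate, shape (3,).
    object_rel: the same object's coordinate relative to the vehicle.
    heading_rad: assumed vehicle heading in the map frame (rotation about Z).
    Returns the vehicle's absolute coordinate in the map frame."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return object_abs - Rz @ object_rel

# An object 10 m ahead of a vehicle heading along +X, at map (110, 50, 3).
veh = vehicle_absolute(np.array([110.0, 50.0, 3.0]),
                       np.array([10.0, 0.0, 3.0]),
                       heading_rad=0.0)
print(veh)  # [100.  50.   0.]
```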

In an embodiment of the present disclosure, the 2D image data obtained by the 2D image sensor 102 can generally only provide information about the real world as projected onto the image plane, based on the reflective properties of object surfaces, yielding features such as an object's contour, boundary, and texture. Consequently, when a recognition algorithm based on 2D information recognizes an actual 3D object, it cannot place the object at the correct position in 3D space, and so cannot correctly determine the vehicle's position. The 3D sensor 104, in turn, can obtain an object's 3D point cloud data but cannot recognize what category the object belongs to, which leads to an excessive computational load from the massive 3D point cloud data. Therefore, the present disclosure combines the characteristics and advantages of the two heterogeneous sensors, the 2D image sensor 102 and the 3D sensor 104, and achieves real-time vehicle positioning through the 3D point cloud data of detected static objects.

The present disclosure has the static object recognition module 112 recognize static objects rather than dynamic objects because, compared with dynamic objects, static objects are easier to detect and recognize, and their shape and color typically vary less. The static object recognition module 112 therefore needs less training data and lower model complexity to achieve a good static object recognition rate. In this way, by combining the 2D image data of the 2D image sensor 102 with the 3D point cloud data of the 3D sensor 104, each static object is recognized through the static object recognition module 112, the 3D point cloud data of each static object is obtained, and the vehicle relative coordinate between the vehicle and the static objects is calculated, achieving vehicle positioning.

FIG. 5 is a flowchart illustrating the operation of the vehicle positioning system according to an embodiment of the present disclosure.

Referring to FIG. 5, in step S501 the 2D image sensor 102 obtains 2D image data, where the 2D image data includes a scene with at least one static object. In step S503, the 3D sensor 104 obtains 3D point cloud data. In step S505, the alignment module 110 fuses the 2D image data and the 3D point cloud data through the alignment algorithm to obtain 3D image data. In step S507, the static object recognition module 112 recognizes at least one static object from the 2D image data. In step S509, the 3D point cloud data of each static object is obtained from the 3D image data according to that static object. In step S511, the positioning module 114 calculates the vehicle relative coordinate of the vehicle according to the 3D point cloud data of the static objects; a positioning algorithm can be used to calculate the three-dimensional vehicle relative coordinate of the vehicle within the 3D image data from the 3D point cloud data of the static objects.
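The steps above can be strung together as a minimal pipeline sketch. Every callable here is a hypothetical stand-in for the corresponding module (alignment 110, recognition 112, positioning 114), shown only to make the data flow of FIG. 5 concrete.

```python
def localize_vehicle(image_2d, cloud_3d, fuse, detect_static_objects,
                     extract_object_points, compute_relative_coord):
    """Pipeline sketch of FIG. 5: fuse (S505), detect (S507),
    extract per-object points (S509), position the vehicle (S511)."""
    fused = fuse(image_2d, cloud_3d)                                    # S505
    objects = detect_static_objects(image_2d)                           # S507
    object_points = [extract_object_points(fused, o) for o in objects]  # S509
    return compute_relative_coord(object_points)                        # S511

# Toy stand-ins that only demonstrate the data flow.
result = localize_vehicle(
    image_2d="img", cloud_3d="pts",
    fuse=lambda img, pts: {"sign": (1, 2, 3), "pole": (4, 5, 6)},
    detect_static_objects=lambda img: ["sign", "pole"],
    extract_object_points=lambda fused, o: fused[o],
    compute_relative_coord=lambda obj_pts: len(obj_pts),
)
print(result)  # 2
```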

FIG. 6 is a schematic diagram illustrating that a vehicle 600 can communicate directly or indirectly with the vehicle positioning system according to an embodiment of the present disclosure. The processor 606 and the storage 608 of the vehicle positioning system of FIG. 6 may be disposed on the vehicle 600 or at another location remote from the vehicle 600. If the processor 606 and the storage 608 of the vehicle positioning system are disposed remotely, the vehicle 600 has the ability to communicate with the remote processor 606 and storage 608. In this embodiment, the vehicle 600 is a car, but the present disclosure is not limited thereto.

One or more two-dimensional image sensors 602 and one or more three-dimensional sensors 604 are disposed on the vehicle 600. In this embodiment, the vehicle positioning system can perform the functions and operations described above with reference to FIGS. 1 to 4: it aligns the two-dimensional image data obtained by the two-dimensional image sensor 602 of the vehicle 600 with the three-dimensional point cloud data captured by the three-dimensional sensor 604 to obtain three-dimensional image data, obtains the three-dimensional point cloud data of each static object from the three-dimensional image data according to each static object in the two-dimensional image data, and then calculates the vehicle relative coordinates of the vehicle from the three-dimensional point cloud data of the static objects, thereby positioning the vehicle.
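The alignment between the 2D image and the 3D point cloud is conventionally done by projecting lidar points into the camera image with a pinhole model. The calibration matrices below are assumptions standing in for the patent's unspecified alignment algorithm:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project lidar points into the camera image via a pinhole model.
    K: 3x3 camera intrinsics; R, t: lidar-to-camera rotation/translation.
    These calibration values are illustrative assumptions."""
    cam = (R @ points_3d.T).T + t   # lidar frame -> camera frame
    uv = (K @ cam.T).T              # camera frame -> homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)       # assume sensors share an origin for simplicity
pts = np.array([[0.0, 0.0, 10.0]])  # a point 10 m straight ahead
print(project_points(pts, K, R, t)) # lands at the image centre (320, 240)
```

Once each 3D point has a pixel location, the points falling inside a static object's 2D bounding box form that object's 3D point cloud data.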

In summary, the vehicle, vehicle positioning system, and vehicle positioning method proposed in the present disclosure enable a vehicle to obtain three-dimensional image data by combining two heterogeneous sensors, a two-dimensional image sensor and a three-dimensional sensor. After the static objects in the two-dimensional image data are identified, the three-dimensional point cloud data of each static object is obtained from the three-dimensional image data, the vehicle's coordinates relative to the static objects are calculated, and these are then mapped to preset map information to obtain the vehicle's coordinates, thereby positioning the vehicle. In this way, a deep learning model dedicated to detecting static objects shortens the image recognition time for static objects, and because only the three-dimensional point cloud data of the static objects is needed, the amount of point cloud computation is reduced, achieving precise positioning of the vehicle.
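The final mapping step can be sketched as combining the vehicle's object-relative coordinates with the static object's position stored in the preset map. The map values below are hypothetical, and for simplicity the sketch assumes the relative coordinates are already expressed in the map frame (a real system would also apply the vehicle's heading rotation):

```python
import numpy as np

def to_absolute(vehicle_rel, object_abs):
    """Map object-relative vehicle coordinates to absolute map coordinates,
    assuming both are expressed in the map frame (heading rotation omitted)."""
    return object_abs + vehicle_rel

object_abs = np.array([1000.0, 500.0, 12.0])  # static object's map position (assumed)
vehicle_rel = np.array([-5.0, -1.0, 0.0])     # vehicle relative to that object
print(to_absolute(vehicle_rel, object_abs))   # absolute position (995, 499, 12)
```

Because each static object anchors the vehicle to a known map position, even a handful of recognized objects suffices for positioning, which is what keeps the point cloud computation small.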

Although the present disclosure has been described above by way of embodiments, they are not intended to limit the present disclosure. Anyone with ordinary skill in the art may make some changes and modifications without departing from the spirit and scope of the present disclosure; therefore, the scope of protection of the present disclosure shall be defined by the appended claims.

100: Vehicle positioning system; 102: Two-dimensional image sensor; 104: Three-dimensional sensor; 106: Processor; 108: Storage circuit; 110: Alignment module; 112: Static object recognition module; 114: Positioning module; 600: Vehicle; 602: Two-dimensional image sensor; 604: Three-dimensional sensor; 606: Processor; 608: Storage; S501, S503, S505, S507, S509, S511: Steps of the vehicle positioning system operation

FIG. 1 is a schematic diagram of a vehicle positioning system according to an embodiment of the present disclosure. FIG. 2 is a schematic diagram illustrating the alignment of two-dimensional image data with three-dimensional point cloud data according to an embodiment of the present disclosure. FIG. 3 is a schematic diagram illustrating the identification of static objects from two-dimensional image data according to an embodiment of the present disclosure. FIG. 4 is a schematic diagram, following the embodiment of FIG. 3, illustrating that after each static object is identified, the three-dimensional point cloud data of each static object is obtained from the three-dimensional image data. FIG. 5 is a schematic flowchart illustrating the operation of the vehicle positioning system according to an embodiment of the present disclosure. FIG. 6 is a schematic diagram illustrating that a vehicle can communicate directly or indirectly with the vehicle positioning system according to an embodiment of the present disclosure.

100: Vehicle positioning system

102: Two-dimensional image sensor

104: Three-dimensional sensor

106: Processor

108: Storage circuit

110: Alignment module

112: Static object recognition module

114: Positioning module

Claims (16)

1. A vehicle positioning system, disposed on a vehicle, the vehicle positioning system comprising: a two-dimensional image sensor for obtaining two-dimensional image data; a three-dimensional sensor for obtaining three-dimensional point cloud data; and a processor, coupled to the two-dimensional image sensor and the three-dimensional sensor, configured at least to implement: an alignment module for fusing the two-dimensional image data and the three-dimensional point cloud data to generate three-dimensional image data; a static object recognition module for recognizing at least one static object from the two-dimensional image data, so as to obtain, for each static object of the at least one static object, three-dimensional point cloud data of the static object from the three-dimensional image data; and a positioning module for calculating vehicle relative coordinates of the vehicle according to the three-dimensional point cloud data of the static object. 2. The vehicle positioning system according to claim 1, wherein the vehicle relative coordinates are mapped to preset map information pre-stored in a storage circuit to obtain three-dimensional vehicle absolute coordinates of the vehicle. 3. The vehicle positioning system according to claim 1, wherein the three-dimensional point cloud data of the static object is mapped to preset map information pre-stored in a storage circuit to obtain three-dimensional object absolute coordinates of the static object.
4. The vehicle positioning system according to claim 3, wherein the positioning module calculates three-dimensional vehicle absolute coordinates of the vehicle according to the three-dimensional object absolute coordinates of the static object. 5. The vehicle positioning system according to claim 1, wherein the two-dimensional image sensor is a charge-coupled device (CCD) camera or a complementary metal-oxide-semiconductor (CMOS) camera. 6. The vehicle positioning system according to claim 1, wherein the three-dimensional sensor is a lidar sensor. 7. A vehicle positioning method, adapted to a vehicle positioning system disposed on a vehicle, the method comprising: obtaining two-dimensional image data; obtaining three-dimensional point cloud data; fusing the two-dimensional image data and the three-dimensional point cloud data to generate three-dimensional image data; recognizing at least one static object from the two-dimensional image data; obtaining three-dimensional point cloud data of the static object from the three-dimensional image data according to the static object; and calculating vehicle relative coordinates of the vehicle according to the three-dimensional point cloud data of the static object. 8. The vehicle positioning method according to claim 7, further comprising mapping the vehicle relative coordinates to pre-stored preset map information to obtain three-dimensional vehicle absolute coordinates of the vehicle.
9. The vehicle positioning method according to claim 7, further comprising mapping the three-dimensional point cloud data of the static object to pre-stored preset map information to obtain three-dimensional object absolute coordinates of the static object. 10. The vehicle positioning method according to claim 9, further comprising calculating three-dimensional vehicle absolute coordinates of the vehicle according to the three-dimensional object absolute coordinates of the static object. 11. A vehicle, on which a vehicle positioning system is disposed, comprising: a two-dimensional image sensor for obtaining two-dimensional image data; a three-dimensional sensor for obtaining three-dimensional point cloud data; and a processor, coupled to the two-dimensional image sensor and the three-dimensional sensor, configured at least to implement: an alignment module for fusing the two-dimensional image data and the three-dimensional point cloud data to generate three-dimensional image data; a static object recognition module for recognizing at least one static object from the two-dimensional image data, so as to obtain, for each static object of the at least one static object, three-dimensional point cloud data of the static object from the three-dimensional image data; and a positioning module for calculating vehicle relative coordinates of the vehicle according to the three-dimensional point cloud data of the static object.
12. The vehicle according to claim 11, wherein the vehicle relative coordinates are mapped to preset map information pre-stored in a storage circuit to obtain three-dimensional vehicle absolute coordinates of the vehicle. 13. The vehicle according to claim 11, wherein the three-dimensional point cloud data of the static object is mapped to preset map information pre-stored in a storage circuit to obtain three-dimensional object absolute coordinates of the static object. 14. The vehicle according to claim 13, wherein the positioning module calculates three-dimensional vehicle absolute coordinates of the vehicle according to the three-dimensional object absolute coordinates of the static object. 15. The vehicle according to claim 11, wherein the two-dimensional image sensor is a charge-coupled device (CCD) camera or a complementary metal-oxide-semiconductor (CMOS) camera. 16. The vehicle according to claim 11, wherein the three-dimensional sensor is a lidar sensor.
TW108112604A 2018-11-29 2019-04-11 Vehicle, vehicle positioning system, and vehicle positioning method TWI754808B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910370531.2A CN111238494B (en) 2018-11-29 2019-05-06 Carrier, carrier positioning system and carrier positioning method
US16/508,471 US11024055B2 (en) 2018-11-29 2019-07-11 Vehicle, vehicle positioning system, and vehicle positioning method
JP2019136998A JP7073315B2 (en) 2018-11-29 2019-07-25 Vehicles, vehicle positioning systems, and vehicle positioning methods

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862773124P 2018-11-29 2018-11-29
US62/773,124 2018-11-29

Publications (2)

Publication Number Publication Date
TW202020734A true TW202020734A (en) 2020-06-01
TWI754808B TWI754808B (en) 2022-02-11

Family

ID=72175772

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108112604A TWI754808B (en) 2018-11-29 2019-04-11 Vehicle, vehicle positioning system, and vehicle positioning method

Country Status (1)

Country Link
TW (1) TWI754808B (en)


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI431250B (en) * 2011-03-01 2014-03-21 Navigation device for integrated traffic image recording and navigation information
TWI535589B (en) * 2013-09-24 2016-06-01 Active automatic driving assistance system and method
US10061027B2 (en) * 2014-02-25 2018-08-28 Adsys Controls, Inc. Laser navigation system and method
US10121082B2 (en) * 2015-10-07 2018-11-06 Honda Motor Co., Ltd. System and method for providing laser camera fusion for identifying and tracking a traffic participant
CN105676643B (en) * 2016-03-02 2018-06-26 厦门大学 A kind of intelligent automobile turns to and braking self-adaptive wavelet base method
JP6368959B2 (en) * 2016-05-19 2018-08-08 本田技研工業株式会社 Vehicle control system, vehicle control method, and vehicle control program
JP7031137B2 (en) * 2017-04-10 2022-03-08 凸版印刷株式会社 Laser scanning device
CN108622093B (en) * 2018-05-04 2020-08-04 奇瑞汽车股份有限公司 Lane keeping control method and device for intelligent vehicle
CN108830159A (en) * 2018-05-17 2018-11-16 武汉理工大学 A kind of front vehicles monocular vision range-measurement system and method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI758980B (en) * 2020-11-30 2022-03-21 財團法人金屬工業研究發展中心 Environment perception device and method of mobile vehicle
US11636690B2 (en) 2020-11-30 2023-04-25 Metal Industries Research & Development Centre Environment perception device and method of mobile vehicle
TWI784754B (en) * 2021-04-16 2022-11-21 威盛電子股份有限公司 Electronic device and object detection method
TWI774543B (en) * 2021-08-31 2022-08-11 財團法人車輛研究測試中心 Obstacle detection method
TWI827056B (en) * 2022-05-17 2023-12-21 中光電智能機器人股份有限公司 Automated moving vehicle and control method thereof

Also Published As

Publication number Publication date
TWI754808B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN111238494B (en) Carrier, carrier positioning system and carrier positioning method
US10943355B2 (en) Systems and methods for detecting an object velocity
US10684372B2 (en) Systems, devices, and methods for autonomous vehicle localization
US11042157B2 (en) Lane/object detection and tracking perception system for autonomous vehicles
TWI754808B (en) Vehicle, vehicle positioning system, and vehicle positioning method
CN107246868B (en) Collaborative navigation positioning system and navigation positioning method
EP3283843B1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
CN110119698B (en) Method, apparatus, device and storage medium for determining object state
US20190278273A1 (en) Odometry system and method for tracking traffic lights
WO2020163311A1 (en) Systems and methods for vehicle navigation
EP4085230A1 (en) Systems and methods for vehicle navigation
US11680801B2 (en) Navigation based on partially occluded pedestrians
CN112017236B (en) Method and device for calculating target object position based on monocular camera
WO2022041706A1 (en) Positioning method, positioning system, and vehicle
WO2021262976A1 (en) Systems and methods for detecting an open door
WO2023065342A1 (en) Vehicle, vehicle positioning method and apparatus, device, and computer-readable storage medium
CN111833443A (en) Landmark position reconstruction in autonomous machine applications
WO2021198775A1 (en) Control loop for navigating a vehicle
Jayasuriya et al. Leveraging deep learning based object detection for localising autonomous personal mobility devices in sparse maps
US20220214187A1 (en) High-definition maps and localization for road vehicles
Jarnea et al. Advanced driver assistance system for overtaking maneuver on a highway
JP7302966B2 (en) moving body
Ma et al. Roadside Bird's Eye View Perception Algorithm for Vehicle Tracking under Vehicle-Road Collaboration
GB2616114A (en) Vehicle navigation with pedestrians and determining vehicle free space
WO2023196288A1 (en) Detecting an open door using a sparse representation