TW202248963A - Compound eye camera system, vehicle using the compound eye camera system, and image processing method for the compound eye camera system - Google Patents

Compound eye camera system, vehicle using the compound eye camera system, and image processing method for the compound eye camera system

Info

Publication number
TW202248963A
TW202248963A (application TW110120115A)
Authority
TW
Taiwan
Prior art keywords
image processing
eye camera
camera system
compound eye
under test
Prior art date
Application number
TW110120115A
Other languages
Chinese (zh)
Inventor
黃奇卿
Original Assignee
黃奇卿
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 黃奇卿 filed Critical 黃奇卿
Priority to TW110120115A priority Critical patent/TW202248963A/en
Publication of TW202248963A publication Critical patent/TW202248963A/en

Abstract

A compound eye camera system includes a first lens, at least four second lenses, a storage unit, and an image processing controller. The storage unit stores multiple source image files captured by the first lens or the second lenses. The image processing controller identifies the image features of at least one object under test in the source image files captured in the first shooting area, and uses a reference length as a scale to construct a 3D spatial digitized model. The system can thus assist vehicle monitoring and automatic driving, providing 3D spatial recognition and multi-object monitoring in 3D space, so as to raise the level of unmanned monitoring.

Description

Compound eye camera system, vehicle using the compound eye camera system, and image processing method thereof

The present invention relates to a multi-lens compound eye camera system, and in particular to a compound eye camera system and its image processing method that can be used in different industrial equipment such as vehicle monitoring, AI robots, automatic driving, sweeping robots, aerial drones, and multi-axis machining instruments.

Self-driving cars have become a hot topic in recent years. Traditional car makers such as GM, Volvo, and Toyota, as well as newcomers such as Tesla, UBER, Waymo, and Nuro.ai, are actively collecting environmental information through road tests of experimental vehicles to build the road-condition data required for the AI deep learning behind self-driving cars. As the "eyes" of the self-driving car, road-tested autonomous vehicles are equipped with multiple perception systems such as image sensing, among which the LiDAR sensor plays the key role. LiDAR measures the distance of surrounding vehicles and objects, and creates and identifies 3D spatial images. At this stage, its main application areas include remote sensing, factory automation equipment, and disaster-prevention monitoring of social infrastructure such as railways and tunnels.

Compared with image sensing devices that use the visible-light range, LiDAR uses near-infrared light and is less susceptible to interference from ambient light. LiDAR sensing has several advantages: objects can be identified anywhere within the range in which reflected light can be received (roughly 200 meters or even 300 meters for automotive products), unaffected by the intensity of ambient light or by shadows. Sensors such as infrared and millimeter-wave radar are mainly used to measure distance, and building 3D vision from image sensing requires multiple image sensing devices, whereas LiDAR can build three-dimensional environment scanning information with a single sensor, allowing it to maintain measurement accuracy in long-range scenarios. In addition, international companies such as Google, Waymo, and Uber have developed sensor-fusion technology that incorporates LiDAR, integrating the information returned by LiDAR with the information detected by other types of sensors and applying different error-correction logic to improve overall recognition accuracy, as the base information for the deep learning training and inference that future self-driving cars will require.

Structurally, a LiDAR consists mainly of an emitting module that projects near-infrared laser light at the surroundings and a receiving module that collects the light reflected from objects; the principle for building a 3D model of the environment is to compute distance from the time difference between the emitted and received light. However, LiDAR readily loses accuracy in rain or fog, and it cannot identify the material of an object, so signboards, billboards, and physical objects cannot be distinguished reliably. In addition, LiDAR calibration is time-consuming and hard to mass-produce, which keeps costs high and hinders large-scale deployment. These are its major shortcomings.

Therefore, how to make the surrounding scenery quantifiable and measurable at a controllable cost, clearly identify the material of surrounding objects, and build a 3D digitized spatial perception system usable in different industrial equipment such as vehicle monitoring, automatic driving, AI robots, sweeping robots, aerial drones, and multi-axis machining instruments, is the goal pursued by those of ordinary skill in the art.

The main purpose of the present invention is to make the surrounding scenery quantifiable and measurable at a controllable cost and to clearly identify the material of surrounding objects, in order to build a 3D digitized spatial perception system.

Another purpose of the present invention is to assist different industrial equipment such as vehicle monitoring, AI robots, automatic driving, sweeping robots, aerial drones, and multi-axis machining instruments, giving such equipment 3D spatial recognition and multi-object monitoring in 3D space, so as to raise the level of industrial unmanned monitoring.

To solve the above and other problems, the present invention provides a compound eye camera system, which includes a first lens, at least four second lenses, a storage unit, and an image processing controller. The first lens has a fan-shaped first shooting area; the second lenses are distributed around the periphery of the first lens, and each second lens has a fan-shaped second shooting area. The central shooting direction of the first shooting area forms an angle with the central shooting direction of each second shooting area, and the second shooting area partially overlaps the first shooting area. The storage unit stores multiple source image files captured by the first lens or the second lenses. The image processing controller identifies the image features of at least one object under test in the source image files; it parses multiple source image files captured at the same time point to generate a corresponding 3D primitive, and then, from 3D primitives generated at multiple different time points, resolves a portable image file carrying 3D spatial information.

In the compound eye camera system described above, a reference length is displayed in the source image files captured in the first shooting area, and the image processing controller uses the reference length as a scale to construct a 3D spatial digitized model.

In the compound eye camera system described above, the storage unit is coupled to the image processing controller, so that the 3D primitive or the portable image file is transmitted to and stored in the storage unit.

In the compound eye camera system described above, at least one primitive template is stored in the storage unit, the primitive template being a two-dimensional image of all features or partial features of the object under test.

The compound eye camera system described above further includes at least one warning light coupled to the image processing controller, so that after computing and parsing the objects under test in the source image files, the image processing controller directly controls the warning light.

To solve the above and other problems, the present invention further provides a vehicle using the compound eye camera systems described above, wherein multiple compound eye camera systems are distributed on the roof of the vehicle, the front edge of the vehicle body, the rear edge of the vehicle body, or the two sides.

To solve the above and other problems, the present invention further provides an image processing method for the compound eye camera system, the compound eye camera system including a first lens having a first shooting area and a second lens having a second shooting area, the image processing method including the following steps. Step A01: capturing source image files from multiple lenses at multiple time points; Step A02: identifying and parsing the source image files to generate a 3D primitive corresponding to at least one object under test; Step A03: calculating the distance of the object under test; Step A04: calculating the 3D movement vector of the object under test; Step A05: selectively compensating for errors in the 3D movement vector of the object under test; Step A06: combining the 3D primitive of the object under test with its corresponding 3D movement information to resolve a portable image file carrying 3D spatial information; and Step A07: establishing a 3D spatial digitized model of the movement of the object under test and superimposing the portable image file on that model.
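Purely as an orientation aid (the patent itself specifies no code), the control flow of steps A01 to A07 can be sketched in Python as follows. Every function body here is a hypothetical stub that passes canned values through; it is not an implementation of the claimed method.

```python
# A compressed, hypothetical rendering of steps A01-A07. Real code would
# consume camera frames; these stubs exist only to make the flow readable.

def a01_capture(times):
    # A01: multi-lens, multi-time-point capture (stubbed)
    return {t: {"lens1": f"img@{t}", "lens2": f"img@{t}"} for t in times}

def a02_identify(frames_at_t):
    # A02: identify objects and build a 3D primitive per object (stubbed)
    return {"car-1": {"class": "car"}}

def a03_distance(frames_at_t, obj_id):
    # A03: distance via scale lines or triangulation (stubbed)
    return 11.2

def pipeline(times):
    frames = a01_capture(times)
    samples = []
    for t, f in sorted(frames.items()):
        for obj_id in a02_identify(f):
            samples.append((t, a03_distance(f, obj_id)))
    # A04: movement from position change over time (radial component only)
    (t0, d0), (t1, d1) = samples[0], samples[-1]
    radial_speed = (d1 - d0) / (t1 - t0)
    # A05 would refine this when a turning cue is seen; A06/A07 would pack
    # the result into a portable file and overlay it on the 3D model.
    return {"object": "car-1", "radial_speed_mps": radial_speed}

print(pipeline([0.0, 0.1, 0.2]))  # -> {'object': 'car-1', 'radial_speed_mps': 0.0}
```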

The image processing method described above further includes Step A08: issuing a deceleration warning signal, a braking warning signal, a steering prompt signal, or a steering control signal.

In the image processing method described above, Step A02 further includes the following sub-steps. Step A021: extracting the image features of an object under test from the source image files; Step A022: comparing those image features against multiple primitive templates of different viewing angles in a storage unit; Step A023: generating a 3D primitive of the object under test. In a further embodiment, the primitive template of Step A022 is a two-dimensional image of all features or partial features of the object under test.

In the image processing method described above, Step A03 further includes the following sub-steps. Step A031: capturing source image files at the same time point through the first lens or the second lens to measure the distance of the object under test; Step A032: measuring the azimuth angle and elevation angle of the object under test from the source image files; Step A033: calculating and confirming the spatial relationship of the object under test.

In the image processing method described above, the distance measurement of Step A031 is obtained by comparison against the reference length of the vehicle in the source image file, or measured against scale marking lines drawn at multiples of the reference length in the source image file. In a further embodiment, the image processing controller compares the position of the shape center point of the contour presented by the object under test against the multiple-length scale marking lines.

In the image processing method described above, the distance measurement of Step A031 is obtained by triangulation of the observation angles of the first lens and the second lens.

In the image processing method described above, the azimuth or elevation measurement of Step A032 is made against the azimuth scale marking lines or elevation scale marking lines in the source image file. In a further embodiment, the image processing controller compares the position of the shape center point of the contour presented by the object under test against the azimuth scale marking lines or the elevation scale marking lines.

In the image processing method described above, Step A04 further includes the following sub-steps. Step A041: obtaining the positions of the object under test at different time points; Step A042: calculating the movement vector of the object under test; Step A043: continuously displaying the movement vectors at multiple time points.

In the image processing method described above, Step A05 further includes the following sub-steps. Step A051: extracting at least one turning feature of the object under test from the source image files; Step A052: calibrating and generating a compensation correction vector for the object under test; Step A053: assigning a weight to the compensation correction vector of the object under test to correct the moving path of the object under test.

In this way, the compound eye camera system and its image processing method of the present invention can, at a controllable cost, make the surrounding scenery quantifiable and measurable and clearly identify the material of surrounding objects, so as to build a 3D digitized spatial perception system. The system can then assist different industrial equipment such as vehicle monitoring, automatic driving, AI robots, sweeping robots, aerial drones, and multi-axis machining instruments, giving such equipment 3D spatial recognition and multi-object monitoring in 3D space, so as to raise the level of industrial unmanned monitoring.

For a further understanding of the features and technical content of the present invention, please refer to the following detailed description and the accompanying drawings. The drawings, however, are provided for reference and illustration only and are not intended to limit the present invention.

50: compound eye camera system
51: first lens
52: second lens
53: first shooting area
53A: central shooting direction
54: second shooting area
54A: central shooting direction
61: source image file
62: 3D spatial digitized model
63: image feature
64: 3D primitive
65: portable image file
66: primitive template
71: reference length
55: storage unit
56: image processing controller
57: warning light
91: vehicle
92: object under test
72: azimuth scale marking line
74: multiple-length scale marking line
75: spatial grid line
h1: vertical distance
d: lens spacing
α, β, θ: angles

FIG. 1A is a schematic structural diagram of the compound eye camera system.
FIG. 1B shows the compound eye camera system in use on a vehicle.
FIG. 1C is a functional block diagram of the compound eye camera system.
FIG. 2A to FIG. 2E are flowcharts of the image processing method of the compound eye camera system.
FIG. 3 is a schematic diagram of the image processing controller identifying image features in the source image files.
FIG. 4A is a schematic diagram of a vehicle equipped with the compound eye camera system calculating distance through the reference length.
FIG. 4B is a schematic diagram of the image processing controller calculating distance through the vehicle reference length in the source image file.
FIG. 5 is a schematic diagram of the triangulation method applied by the compound eye camera system.
FIG. 6A is a schematic diagram of a vehicle equipped with the compound eye camera system measuring and calculating the azimuth angle of an object under test.
FIG. 6B is a schematic diagram of the image processing controller calculating the azimuth angle of the object under test from the source image file.
FIG. 7A to FIG. 7C are schematic diagrams of the situational awareness of the surrounding scene, within the 3D spatial digitized model, of a vehicle equipped with the compound eye camera system.
FIG. 8 is a schematic diagram of a scene in which the image processing controller needs to complete a partially occluded image.
FIG. 9 is a functional block diagram of another embodiment of the compound eye camera system.

Please refer to FIG. 1A to FIG. 1C. FIG. 1A is a schematic structural diagram of the compound eye camera system, FIG. 1B shows the compound eye camera system in use on a vehicle, and FIG. 1C is a functional block diagram of the compound eye camera system. As shown, a compound eye camera system 50 includes a first lens 51, four second lenses 52, a storage unit 55, and an image processing controller 56. The first lens 51 has a fan-shaped first shooting area 53; the second lenses 52 are distributed around the periphery of the first lens 51, and each second lens 52 has a fan-shaped second shooting area 54. The central shooting direction 53A of the first shooting area 53 forms an angle θ with the central shooting direction 54A of the second shooting area 54, and the second shooting area 54 partially overlaps the first shooting area 53. The surfaces of the first lens 51 and the second lenses 52 of the compound eye camera system 50 follow a curved design, so that the central shooting direction 53A of the first shooting area 53 and the central shooting direction 54A of each second shooting area 54 do not point in the same direction; the overall coverage of the first shooting area 53 and the second shooting areas 54 is therefore larger, avoiding more blind spots. The storage unit 55 stores multiple source image files 61 captured by the first lens 51 or the second lenses 52. The storage unit 55 is coupled to the image processing controller 56. The image processing controller 56 parses multiple source image files 61 captured at the same time point and generates a corresponding 3D primitive 64, and then, from 3D primitives 64 generated at multiple different time points, resolves a portable image file 65 carrying 3D spatial information. The storage unit 55 stores multiple primitive templates 66, each being a two-dimensional image of partial features or all features of some object under test 92 in the source image files 61. The 3D primitive 64 or the portable image file 65 is transmitted to and stored in the storage unit 55. Here, a source image file 61 is a file in an image format captured by the first lens 51 or the second lens 52; its format includes, but is not limited to, JPG, JPEG, PSD, TIFF, PDF, BMP, EPS, PNG, GIF, and PCX. The 3D primitive 64 is a digitized file that supports multi-view visualization in 3D space and can be vectorized and parsed. The portable image file 65 is a portable electronic file format carrying 3D spatial information; it can be transmitted over a network to the cloud or to other machines and instruments to be stored, parsed, and applied. The 3D spatial information includes position information in 3D space (for example via GPS, BeiDou, or other satellite positioning systems), directed velocity vector information, or acceleration vector information.

The compound eye camera system 50 described above can be applied to different industrial equipment such as vehicle monitoring assistance, automatic driving, sweeping robots, AI robots, aerial drones, and multi-axis machining instruments, giving such equipment 3D spatial recognition and multi-object monitoring in 3D space to raise the level of industrial unmanned monitoring. In the following, the application of the compound eye camera system 50 to monitoring assistance for a vehicle 91 is taken as an example to explain the image processing method of the compound eye camera system 50 of the present invention. As shown in FIG. 1B, multiple compound eye camera systems 50 are distributed on the roof of the vehicle 91, the front edge of the vehicle body, the rear edge of the vehicle body, or the two sides, to monitor the 3D spatial conditions around and above the vehicle 91. In particular, installing the compound eye camera systems 50 around the vehicle 91 lets the vehicle 91 build situational awareness of the surrounding 3D space, learning the size, contour, shape, speed, and acceleration of objects within roughly 200 meters, so that the vehicle 91 can respond to surrounding traffic conditions in advance and prevent accidents. In addition, a compound eye camera system 50 installed on the roof of the vehicle 91 can monitor objects above the vehicle 91; for example, if the vehicle 91 regularly travels through areas where rockfalls or mountain debris flows occur frequently, the compound eye camera system 50 can give early warning of falling rocks, landslides, or debris flows, letting the vehicle 91 evade or stop. The value of the compound eye camera system 50 and its image processing method for the vehicle 91 is that they give the vehicle 91 situational awareness of the surrounding 3D scenery, improving the controllability and precision of automatic driving.

To achieve the above purpose of improving the controllability and precision of automatic driving, the present invention further provides the image processing method of the compound eye camera system 50. Please refer to FIG. 2A to FIG. 2E, which are flowcharts of the image processing method of the compound eye camera system 50. As shown in FIG. 2A, source image files 61 are captured through the first lens 51 and second lenses 52 from multiple lenses at multiple time points (Step A01), and the image processing controller 56 then identifies and parses those source image files 61 to generate a 3D primitive 64 corresponding to at least one object under test 92 (Step A02). As shown in FIG. 2B, Step A02 is carried out through the following sub-steps. First, the image features 63 of an object under test 92 are extracted from the source image files 61 (Step A021). As shown in FIG. 3, from the source image files 61 captured by the compound eye camera system 50, the image processing controller 56 can identify the image features 63 of multiple objects under test 92. An object under test 92 may be a car, a truck, a motorcycle, a traffic signal, a utility pole, a footbridge, a roadside tree, and so on. Different objects under test 92 have different image features 63; an image feature 63 is a planar image feature of the object under test 92, including but not limited to color features, texture features, gray-value features, shape features, spatial-correspondence features, and local or global features. Taking concrete objects as examples: the image features 63 of a roadside tree are its leaves or trunk; the image features 63 of a car are its body contour and tires; the image features 63 of a truck are its container or the cab mounted high above the tires. By identifying the image features 63 of the object under test 92, the compound eye camera system 50 can therefore tell whether the object under test 92 in front of the vehicle 91 is a motorcycle, a car, or a pedestrian. Next, as shown in FIG. 2B, the image features 63 are compared against multiple primitive templates 66 of different viewing angles in the storage unit 55 (Step A022), and it is judged whether the image features 63 of the object under test 92 match a primitive template 66; if they match, the 3D primitive 64 of the object under test 92 is generated (shown in FIG. 1C; see Step A023), so that both the 3D primitive 64 and the primitive template 66 in the storage unit 55 correspond to that specific object under test 92. Here, the primitive template 66 is a file combining multiple two-dimensional images of the object under test 92 from different viewing angles (that is, a collection of complete images of the object under test 92 from multiple viewpoints); it can be a built-in image template file or one gathered from big data. For example, a primitive template 66 may be a set of comparable images of a particular object (a car, motorcycle, truck, traffic signal, roadside tree, and so on) from different viewing angles, the purpose being to let the compound eye camera system 50 perform reference comparison of image features 63 from multiple viewpoints. Consequently, after comparison by the image processing controller 56, even a few source image files 61 taken at specific angles allow the compound eye camera system 50 to identify and confirm exactly what kind of object the object under test 92 is, and even the make and model of a car. Furthermore, the primitive template 66 can be a local feature of some object under test 92 in the source image file 61, so that partial-image comparison and occlusion completion can be performed through local-feature primitive templates 66. Please refer also to FIG. 8, a schematic diagram of a scene in which the image processing controller needs to complete a partially occluded image. As shown in FIG. 8, in the first shooting area 53 of the compound eye camera system 50, part of the image of a pedestrian object under test 92 is blocked by a van in front, so the image processing controller 56 cannot fully identify the pedestrian. In this case, through the partial image carried by the primitive template 66 (that is, an image of the local features of the pedestrian object under test 92), the image processing controller 56 can determine by feature comparison what the occluded object in the first shooting area 53 is. In this way, the compound eye camera system 50 can know in advance what objects lie behind the occluded area, achieving advance prediction and early warning.

As a supplementary note, the core technique behind the parsing of the source image files 61 and the identification and comparison of the image features 63 is image matching. Image matching means identifying homogeneous or corresponding features between two or more images through a matching algorithm. In two-dimensional image matching, for example, the correlation coefficients of equal-size windows in the target area and the search area are compared, and the center point of the window with the maximum correlation coefficient in the search area is taken as the corresponding feature; that is, statistical methods are used to find the degree of correlated match between signals. In essence, under the condition that the basic primitives are related and similar, a matching criterion is applied to achieve the best search result. In general, image matching can be divided into grayscale-based matching and feature-based matching.
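As an illustration of the grayscale-based branch, the following minimal numpy sketch implements the window-correlation search this paragraph describes. It is a generic normalized cross-correlation, written for this explanation rather than taken from the patent.

```python
import numpy as np

def ncc(window, template):
    # Normalized cross-correlation coefficient of two equal-size patches.
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    return float((w * t).sum() / denom) if denom else 0.0

def match_template(image, template):
    # Slide the template over the image; keep the window whose
    # correlation coefficient is maximal, as the paragraph describes.
    H, W = image.shape
    h, w = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            s = ncc(image[y:y + h, x:x + w], template)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_score, best_pos

rng = np.random.default_rng(0)
img = rng.random((32, 32))
tpl = img[8:16, 10:18].copy()       # plant a known 8x8 patch
print(match_template(img, tpl))     # -> (~1.0, (8, 10))
```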

Next, once the object under test 92 has been confirmed and its corresponding 3D primitive 64 generated, the distance of the object under test 92 can be calculated (Step A03). As shown in FIG. 2C, source image files 61 of the same time point are first captured through the first lens 51 or the second lens 52 to measure the distance of the object under test 92 (Step A031); the azimuth angle and elevation angle of the object under test 92 are then measured from the source image files 61 (Step A032), after which the spatial relationship of the object under test 92 can be calculated and confirmed (Step A033). As shown in FIG. 4A and FIG. 4B, the distance measurement of Step A031 may use the reference length 71 of the vehicle 91 in the source image file 61, comparing against the reference length 71 to measure the relative distance between the truck and car objects under test 92 in the source image file 61 and the vehicle 91 carrying the compound eye camera system 50. That is, at one, two, three, and four times the reference length 71 in FIG. 4A, scale marking lines 74 at multiples of the reference length 71 are displayed or marked in the source image file 61 of FIG. 4B, so that the image processing controller 56 of the compound eye camera system 50 can compare or calculate the distance of the truck or car under test 92. As shown in FIG. 4A and FIG. 4B, the reference length 71 of the vehicle 91 is preferably the distance from the installation point of the compound eye camera system 50 to the front end of the vehicle 91. In other embodiments, the reference length 71 of the vehicle 91 and its multiples within the source image file 61 captured by the first lens 51 can also be obtained through a scale built into the software (a built-in standard fixed length), or by engraving physical graduation marks on the outer surface of the first lens 51. In addition, as shown in FIG. 4B, if, viewed in the source image file 61, the object under test 92 occupies a large area (perhaps because its actual volume is large, or because it is close to the vehicle 91 equipped with the compound eye camera system 50) and spans several multiple-length scale marking lines 74, the image processing controller 56 compares the position of the shape center point of the contour presented by the object under test 92 against the scale marking lines 74 to judge the distance of the object under test 92.
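A minimal sketch of how the shape-center position could be read off against the multiple-length marking lines 74. The pixel rows and the linear interpolation between lines are assumptions made for illustration, not details from the patent.

```python
def distance_from_scale_lines(center_row, line_rows, base_length_m):
    # line_rows[k] is the image row of the (k+1)-times reference-length
    # marking line; rows shrink as the lines recede from the camera.
    for k in range(len(line_rows) - 1):
        near, far = line_rows[k], line_rows[k + 1]
        if far <= center_row <= near:
            frac = (near - center_row) / (near - far)  # between two lines
            return (k + 1 + frac) * base_length_m
    return None  # the shape center lies outside the calibrated band

# Hypothetical calibration: lines at rows 400/300/240/200 for 1x..4x of a
# 4.5 m reference length; the object's shape-center pixel sits at row 270.
print(distance_from_scale_lines(270, [400, 300, 240, 200], 4.5))  # -> 11.25
```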

Besides calculation using the reference length 71 of the host vehicle 91, the distance of a truck or car under test 92 can also be calculated by triangulation. As shown in FIG. 5, with lens spacing d between the first lens 51 and the second lens 52 of the compound eye camera system 50, the perpendicular distance h1 between a motorcycle-shaped object under test 92 and the compound eye camera system 50 follows from trigonometry and triangulation as h1 = d·[(sin α · sin β)/sin(α + β)]. That is, the lens spacing d between the first lens 51 and the second lens 52 is known; the compound eye camera system 50 then observes and measures the angles α and β, and the perpendicular distance h1 is obtained by calculation. The motorcycle-shaped object under test 92 may of course also be a car, a truck, a pedestrian, a roadside tree, a traffic signal, and so on.
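The formula in this paragraph is directly computable. A short numeric check of h1 = d·sin α·sin β / sin(α + β); the lens spacing and angles below are made-up values, not figures from the patent.

```python
import math

def triangulated_distance(d, alpha, beta):
    # Perpendicular distance from the lens baseline to the target:
    #   h1 = d * sin(alpha) * sin(beta) / sin(alpha + beta)
    return d * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# Hypothetical numbers: lenses 0.2 m apart, both lines of sight at 85 deg.
print(triangulated_distance(0.2, math.radians(85), math.radians(85)))  # ~1.14 m
```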

The azimuth or elevation measurement of Step A032 can be made against the azimuth scale marking lines 72 or elevation scale marking lines in the source image file 61. For example, as shown in FIG. 6A and FIG. 6B, looking forward from the compound eye camera system 50 over the vehicle 91, the frame of the source image file 61 can be divided into multiple regions by the azimuth scale marking lines 72; from the position of the object under test 92, the corresponding azimuth angle of the object under test 92 relative to the compound eye camera system 50 is known. If the object under test 92 occupies a large area in the source image file 61 (perhaps because its actual volume is large, or because it is close to the vehicle 91 equipped with the compound eye camera system 50, as shown in FIG. 6B) and spans several azimuth scale marking lines 72 or elevation scale marking lines, the image processing controller 56 takes the shape center point of the contour presented by the object under test 92 as the basis for the azimuth or elevation judgment. By the same reasoning, the image processing controller 56 can also divide the source image file 61 into multiple elevation-angle regions through multiple elevation scale marking lines, and so judge the elevation position of the object under test 92. In this way, the image processing controller 56 resolves the distance, azimuth, and elevation of the object under test 92 and, by the principle of spherical coordinates, determines and confirms the spatial relationship between the object under test 92 and the host vehicle 91, completing Step A033.
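The spherical-coordinate step of A033 amounts to converting (distance, azimuth, elevation) into a vehicle-frame position. A minimal sketch follows; the axis convention is our assumption, since the patent does not fix one.

```python
import math

def to_cartesian(r, azimuth_deg, elevation_deg):
    # Convention assumed here: x forward, y left, z up; azimuth measured
    # from straight ahead, elevation from the horizontal plane.
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# Object 12 m away, 15 degrees to the left, 2 degrees above the horizon.
print(to_cartesian(12.0, 15.0, 2.0))  # -> (~11.58, ~3.10, ~0.42)
```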

Next, the 3D movement vector of the object under test 92 is calculated (Step A04). The positions of the object under test 92 at different time points are first obtained through the preceding Step A03 (Step A041), the movement vector of the object under test 92 is then calculated (Step A042), and the movement vectors of multiple different time points can be displayed continuously (Step A043). The purpose of the multiple source image files 61 obtained in Step A03 from the same time point but different lenses is to refine the position calculation of a distant object under test 92 through the spatial offsets of the first lens 51 and second lenses 52 at their different positions; in essence, the fix of the object under test 92 is obtained repeatedly from multiple lens positions, and the repeated calculation raises the accuracy. Step A04, by contrast, uses source image files 61 captured at different time points to obtain the movement trajectory and movement vector of a particular object under test 92 (the position change across time points is precisely the movement vector of the object under test).
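A minimal sketch of Steps A041 to A043: differencing timestamped positions gives per-interval velocity vectors, and differencing those again gives acceleration. The track values are hypothetical fixes, stand-ins for what Step A03 would produce.

```python
def motion_vectors(track):
    # track: time-ordered (t, (x, y, z)) positions of one object under test
    # (Step A041). Returns one velocity vector per interval (Step A042);
    # feeding the result back in differentiates again, giving acceleration.
    out = []
    for (t0, p0), (t1, p1) in zip(track, track[1:]):
        dt = t1 - t0
        out.append((t1, tuple((b - a) / dt for a, b in zip(p0, p1))))
    return out

track = [(0.0, (20.0, 3.0, 0.0)),   # hypothetical fixes from Step A03
         (0.1, (19.2, 3.0, 0.0)),
         (0.2, (18.3, 3.1, 0.0))]
vel = motion_vectors(track)          # per-interval velocity vectors
acc = motion_vectors(vel)            # second difference: acceleration
for t, v in vel + acc:
    print(t, tuple(round(c, 2) for c in v))
```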

Then, errors in the 3D movement vector of the object under test 92 are selectively compensated (Step A05). The correction proceeds through the following sub-steps. At least one turning feature of the object under test 92 is extracted from the source image files 61 (Step A051); turning features include, but are not limited to, the turned front wheels of a car, the turned head of a pedestrian on the road, or a skew angle between a car body and the road lane. Such turning features indicate that a car or pedestrian near the vehicle 91 carrying the compound eye camera system 50 has a strong intention to turn, may substantially change its direction of travel, and could swerve or change lanes suddenly, causing a rear-end collision with the host vehicle 91. If the compound eye camera system 50 can predict the turning intention of surrounding cars and pedestrians in advance, it can respond correspondingly in advance and reduce the probability of the host vehicle 91 colliding with a surrounding object under test 92. Once it is confirmed that a car or pedestrian near the host vehicle 91 is about to turn, the compensation correction vector of the object under test 92 is calibrated and generated (Step A052), and a weight is then assigned to the compensation correction vector of the object under test 92 to correct its moving path (Step A053); that is, the movement vector generated in the preceding Step A04 is corrected so that sudden swerves and lane changes by surrounding pedestrians and cars are predicted in advance. Note that the selective execution of Step A05 means it may or may not be executed. Also, as shown in FIG. 2A, if the compensation correction vector computed in Step A05 is found to be too large, the flow can return to Step A04 to re-execute the movement vector calculation for the object under test 92.
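A minimal sketch of the weighted correction of Step A053, assuming (our assumption, not the patent's) that the weight simply scales how much of the turn-intent vector is blended into the measured movement vector.

```python
def corrected_vector(measured, correction, weight):
    # Blend the measured 3D movement vector with the turn-intent
    # compensation vector; weight in [0, 1] reflects how strong the
    # observed turning cue is (Step A053). All values illustrative.
    return tuple(m + weight * c for m, c in zip(measured, correction))

v = (0.0, -8.0, 0.0)       # neighbouring car currently tracking straight
turn = (2.5, 0.0, 0.0)     # lateral drift implied by its turned wheels (A051/A052)
print(corrected_vector(v, turn, 0.6))   # -> (1.5, -8.0, 0.0)
```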

At this point, the compound eye camera system 50 has completed the calculation of the distances and movement vectors of surrounding cars, pedestrians, and traffic signals relative to the host vehicle 91. The image processing controller 56 then combines the 3D primitive 64 of each object under test 92 with its corresponding 3D movement information to resolve a portable image file 65 carrying 3D spatial information (Step A06). Next, a 3D spatial digitized model 62 of the movement of the objects under test 92 is established, and the portable image file 65 is superimposed on the 3D spatial digitized model 62 (Step A07), so that the image processing controller 56 of the compound eye camera system 50 can superimpose everything around the host vehicle 91, including people, vehicles, and traffic signals, onto the 3D spatial digitized model 62. Please refer to FIG. 7A to FIG. 7C, schematic diagrams of the situational awareness of the surrounding scene, within the 3D spatial digitized model, of a vehicle equipped with the compound eye camera system. As shown in FIG. 7A and FIG. 7B, the compound eye camera system 50 can detect possible objects under test 92 such as cars, people, roadside trees, and traffic signals, turn them into corresponding 3D primitives 64, perceive their 3D spatial positions, movement vectors, and accelerations, and finally convert the 3D primitives 64 into portable image files 65 carrying 3D spatial information superimposed on a 3D spatial digitized model 62. The compound eye camera system 50 thereby builds 3D situational awareness and 3D depth estimation around the host vehicle 91, detecting the size, speed, and acceleration of objects within roughly 200 meters, giving the vehicle 91 a powerful capability for monitoring surrounding objects. As shown in FIG. 7A, the vehicle 91 equipped with the compound eye camera system 50 can detect the motorcycle under test 92 at its left rear and the lane marking lines on the road, and then judge whether to evade or accelerate away. As shown in FIG. 7B, the vehicle 91 equipped with the compound eye camera system 50 can detect multiple cars under test 92 around it, letting the image processing controller 56 establish the 3D spatial digitized model 62 and spatial coordinates around the vehicle 91; the 3D spatial digitized model 62 carries virtual spatial grid lines 75, so that the compound eye camera system 50 can perceive the relative coordinates of all surrounding objects under test 92. The image processing controller 56 uses this to plan the best route for proceeding, avoiding, or even detouring, and to decide whether to slow down, stop and wait, or overtake. Finally, referring also to FIG. 7C, the vehicle 91 can, based on the image monitoring and judgment of the compound eye camera system 50 or the image processing controller 56, issue a deceleration warning signal, a braking warning signal, a steering prompt signal, or a steering control signal (Step A08), giving the vehicle 91 autonomous-control and automatic-driving functions. As shown in the left half of FIG. 7C, the compound eye camera system 50 can also integrate a map system (for example Google Maps, Baidu Maps, or Amap) to learn the course of the roads within tens of kilometers around the vehicle 91, while displaying the multiple spatial grid lines 75 generated by the image processing controller 56. After this integration, the compound eye camera system 50 can present the road courses and road planning of the map system together with the detected surrounding scenery and objects under test 92 in the 3D spatial digitized model 62. In this way, as shown in the right half of FIG. 7C, the compound eye camera system 50 of the present invention achieves perception and prediction of the coordinates, relative distances, and movement vectors of the objects under test 92, and raises warnings about possible collisions early.
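Two small sketches of what the grid overlay and collision warning might reduce to in code: snapping a position onto the virtual grid lines 75, and a constant-relative-velocity estimate of when a tracked object comes nearest. Both the 1 m cell size and the constant-velocity assumption are simplifications of ours, not specified by the patent.

```python
import math

def grid_cell(x, y, cell=1.0):
    # Map a vehicle-frame position onto the virtual grid of FIG. 7B.
    return (math.floor(x / cell), math.floor(y / cell))

def time_to_closest_approach(rel_pos, rel_vel):
    # Seconds until a tracked object is nearest the ego vehicle, under the
    # simplifying assumption of constant relative velocity.
    num = -sum(p * v for p, v in zip(rel_pos, rel_vel))
    den = sum(v * v for v in rel_vel)
    return max(num / den, 0.0) if den else float("inf")

print(grid_cell(7.3, -2.1))                                 # -> (7, -3)
print(time_to_closest_approach((20.0, 3.0), (-8.0, 0.0)))   # -> 2.5
```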

Please refer to FIG. 9, a functional block diagram of another embodiment of the compound eye camera system. As shown in FIG. 9, the compound eye camera system 50 of the present invention can further add at least one warning light 57 coupled to the image processing controller 56, so that the image processing controller 56 can control the warning light 57 to light, go out, or blink. As shown in FIG. 7A, when the motorcycle under test 92 at the left rear approaches the vehicle 91 and the image processing controller 56 calculates and determines that the motorcycle under test 92 is too close, the image processing controller 56 can autonomously drive the warning light 57 to blink (that is, without control by the driver of the vehicle 91), reminding the motorcycle under test 92 to keep its distance. In other words, after the image processing controller 56 of the compound eye camera system 50 computes and parses the objects under test 92 in the source image files 61, it can directly control the warning light 57 to emit a warning. The function of Step A08 is therefore that, when the image processing controller 56 judges that a surrounding object under test 92 is too close or too fast, it controls and drives the warning light 57 to issue a deceleration warning signal, a braking warning signal, or a steering prompt signal, so as to prevent collisions.
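The decision logic described here could be as simple as the following sketch; the gap and time-to-collision thresholds are placeholders we chose for illustration, not values from the patent.

```python
def warning_light_state(distance_m, closing_speed_mps,
                        min_gap_m=5.0, min_ttc_s=2.0):
    # Decide the warning light output from the controller's estimates.
    if distance_m < min_gap_m:
        return "on"
    ttc = distance_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")
    return "blink" if ttc < min_ttc_s else "off"

print(warning_light_state(4.0, 1.0))    # too close            -> on
print(warning_light_state(8.0, 6.0))    # closing fast (~1.3 s) -> blink
print(warning_light_state(30.0, 2.0))   # ample margin          -> off
```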

In this way, the compound eye camera system 50 of the present invention, the vehicle 91 using the compound eye camera system 50, and its image processing method can, without expensive equipment such as laser radar, infrared radar, or LiDAR and at a controllable cost, make the surrounding scenery quantifiable and measurable, clearly identify the material of surrounding objects, and build a 3D digitized spatial perception system, so that the system can be used in different industrial equipment such as vehicle monitoring, AI robots, automatic driving, sweeping robots, aerial drones, and multi-axis machining instruments. It therefore has great potential for commercial application.

The present invention has been described above by way of embodiments, which are not intended to limit the scope of the patent rights claimed. The scope of patent protection shall be determined by the appended claims and their equivalents. Any changes or modifications made by those of ordinary skill in the art without departing from the spirit or scope of this patent are equivalent changes or designs completed under the spirit disclosed by the present invention, and shall be included within the scope of the following claims.

Steps A01 to A08

Claims (18)

一種複眼攝像系統(50),其包括: A compound eye camera system (50), comprising: 一第一鏡頭(51),其具有一扇形展開的第一拍攝區(53); A first lens (51), which has a fan-shaped first shooting area (53); 至少四個第二鏡頭(52),分佈於該第一鏡頭(51)的周邊,每一第二鏡頭(52)均具有一扇形展開的第二拍攝區(54),該第一拍攝區(53)的中心拍攝方向(53A)與該第二拍攝區(54)的中心拍攝方向(54A)夾有一角度(θ),且該第二拍攝區(54)與該第一拍攝區(53)呈部份重疊; At least four second lenses (52), distributed on the periphery of the first lens (51), each second lens (52) has a fan-shaped second shooting area (54), the first shooting area ( 53) the central shooting direction (53A) and the central shooting direction (54A) of the second shooting area (54) have an angle (θ), and the second shooting area (54) and the first shooting area (53) is partially overlapping; 一儲存單元(55),用以儲存該第一鏡頭(51)或第二鏡頭(52)所拍攝的多個源影像圖檔(61);以及 A storage unit (55), used for storing a plurality of source image files (61) captured by the first lens (51) or the second lens (52); and 一影像處理控制器(56),用以辨識該些源影像圖檔(61)里至少一待測物(92)的圖像特征(63),該影像處理控制器(56)用以解析同一時間點拍攝的多個源影像圖檔(61)並生成一相對應的3D圖元(64),再透過多個不同時間點生成的3D圖元(64)而解析出一帶有3D空間訊息的可攜式圖檔(65)。 An image processing controller (56), used to identify the image features (63) of at least one object under test (92) in the source image files (61), the image processing controller (56) used to analyze the same A plurality of source image files (61) taken at time points and generate a corresponding 3D primitive (64), and then analyze a 3D spatial information through the 3D primitives (64) generated at different time points Portable image files (65). 如請求項1所述之複眼攝像系統(50),其中,該第一拍攝區(53)所拍攝的源影像圖檔(61)內均顯示有一基準長度(71),該影像處理控制器(56)運用該基準長度(71)為尺標而建構一3D空間數位化模型(62)。 The compound eye camera system (50) as described in claim 1, wherein, a reference length (71) is displayed in the source image file (61) captured by the first shooting area (53), and the image processing controller ( 56) Using the reference length (71) as a scale to construct a 3D spatial digital model (62). 如請求項1所述之複眼攝像系統(50),其中,該儲存單元(55)與該影像處理控制器(56)相耦接,使該3D圖元(64)或該可攜式圖檔(65)傳輸並儲存至該儲存單元(55)。 The compound eye camera system (50) according to claim 1, wherein, the storage unit (55) is coupled with the image processing controller (56), so that the 3D graphic element (64) or the portable graphic file (65) transfer and store to the storage unit (55). 如請求項1所述之複眼攝像系統(50),其中,該儲存單元(55)里儲存有至少一圖元模板(66),該圖元模板(66)為該待測物(92)的全部特征或局部特征之二維圖像。 The compound eye camera system (50) according to claim 1, wherein at least one graphic element template (66) is stored in the storage unit (55), and the graphic element template (66) is the object under test (92) Two-dimensional images of all features or partial features. 如請求項1所述之複眼攝像系統(50),其中,更包括有至少一耦接於該影像處理控制器(56)的警示燈(57),使該影像處理控制器(56)計算解 析該些源影像圖檔(61)里的待測物(92)後直接用以控制該警示燈(57)。 The compound eye camera system (50) according to claim 1, further comprising at least one warning light (57) coupled to the image processing controller (56), so that the image processing controller (56) calculates the solution After analyzing the objects to be detected (92) in the source image files (61), they are directly used to control the warning light (57). 一種使用多個如請求項1所述複眼攝像系統(50)的車輛(91),其中,多個複眼攝像系統(50)分佈於該車輛(91)的車頂部、車體前緣、車體後緣或兩側邊。 A vehicle (91) using a plurality of compound eye camera systems (50) as claimed in claim 1, wherein the plurality of compound eye camera systems (50) are distributed on the roof of the vehicle (91), the front edge of the vehicle body, the vehicle body trailing edge or sides. 
7. An image processing method for a compound eye camera system (50), the compound eye camera system (50) comprising a first lens (51) having a first shooting area (53) and a second lens (52) having a second shooting area (54), the image processing method comprising the following steps:
Step A01: capturing source image files (61) from multiple lenses at multiple time points;
Step A02: identifying and parsing the source image files (61) to generate a 3D primitive (64) corresponding to at least one object under test (92);
Step A03: calculating the distance of the object under test (92);
Step A04: calculating the 3D motion vector of the object under test (92);
Step A05: selectively compensating for and correcting errors in the 3D motion vector of the object under test (92);
Step A06: combining the 3D primitive (64) of the object under test (92) with its corresponding 3D movement information to derive a portable image file (65) carrying 3D spatial information; and
Step A07: building a 3D spatial digital model (62) of the movement of the object under test (92), and superimposing the portable image file (65) on the 3D spatial digital model (62).
8. The image processing method of the compound eye camera system (50) of claim 7, further comprising Step A08: issuing a deceleration warning signal, a braking warning signal, a steering prompt signal, or a steering control signal.
9. The image processing method of the compound eye camera system (50) of claim 7, wherein Step A02 further comprises the following sub-steps:
Step A021: extracting image features (63) of an object under test (92) from the source image files (61);
Step A022: comparing the image features (63) against a plurality of primitive templates (66) of different viewing angles stored in a storage unit (55);
Step A023: generating a 3D primitive (64) of the object under test (92).
10. The image processing method of the compound eye camera system (50) of claim 9, wherein the primitive template (66) in Step A022 is a two-dimensional image of all features or of partial features of the object under test (92).
11. The image processing method of the compound eye camera system (50) of claim 7, wherein Step A03 further comprises the following sub-steps:
Step A031: measuring the distance of the object under test (92) from source image files (61) captured at the same time point through the first lens (51) or the second lens (52);
Step A032: measuring the azimuth angle and the elevation angle of the object under test (92) from the source image files (61);
Step A033: calculating and confirming the spatial relationship of the object under test (92).
12. The image processing method of the compound eye camera system (50) of claim 11, wherein the distance measurement of Step A031 is obtained by comparison against a reference length (71) of the vehicle (91) shown in the source image file (61), or is read off ruler marking lines (74) drawn in the source image file (61) at multiples of the reference length (71).
13. The image processing method of the compound eye camera system (50) of claim 12, wherein the image processing controller (56) compares the position of the shape center point of the contour of the object under test (92) against the ruler marking lines (74).
14. The image processing method of the compound eye camera system (50) of claim 11, wherein the distance measurement of Step A031 is obtained by triangulation from the viewing angles of the first lens (51) and the second lens (52).
15. The image processing method of the compound eye camera system (50) of claim 11, wherein the azimuth angle or elevation angle measurement of Step A032 is made against azimuth ruler marking lines (72) or elevation ruler marking lines in the source image file (61).
16. The image processing method of the compound eye camera system (50) of claim 15, wherein the image processing controller (56) compares the position of the shape center point of the contour of the object under test (92) against the azimuth ruler marking lines (72) or the elevation ruler marking lines.
17. The image processing method of the compound eye camera system (50) of claim 7, wherein Step A04 further comprises the following sub-steps:
Step A041: obtaining the positions of the object under test (92) at different time points;
Step A042: calculating the motion vector of the object under test (92);
Step A043: continuously displaying the motion vectors over multiple time points.
18. The image processing method of the compound eye camera system (50) of claim 7, wherein Step A05 further comprises the following sub-steps:
Step A051: extracting at least one turning feature of the object under test (92) from the source image files (61);
Step A052: calibrating and generating a compensation correction vector for the object under test (92);
Step A053: assigning weights to the compensation correction vector of the object under test (92) to correct the moving path of the object under test (92).
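To make the reference-length scaling of claims 2, 12, and 13 concrete, below is a minimal Python sketch. It assumes a simple pinhole projection and that the ruler marking lines (74) have already been located as image rows; the function names, the pinhole model, and the nearest-line lookup are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def pinhole_distance(real_length_m: float, pixel_length: float,
                     focal_px: float) -> float:
    """Distance to an object of known real length under a pinhole model:
    pixel_length = focal_px * real_length_m / distance."""
    return focal_px * real_length_m / pixel_length

def distance_by_ruler_lines(contour_center_row: float,
                            ruler_rows: list[float],
                            reference_length_m: float) -> float:
    """Claim 12/13 style reading: ruler marking lines (74) sit at integer
    multiples of the reference length (71); the distance is read off the
    line nearest the shape center point of the object's contour."""
    gaps = [abs(contour_center_row - row) for row in ruler_rows]
    nearest = int(np.argmin(gaps))             # index of the closest line
    return (nearest + 1) * reference_length_m  # line i sits at (i+1) x L
```

For example, with a 0.5 m reference length and marking lines at every multiple, an object whose contour center lands nearest the 14th line would be read as roughly 7 m away.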
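Step A022 of claim 9 compares extracted image features against primitive templates (66) of different viewing angles. The sketch below uses OpenCV's normalized cross-correlation as a stand-in for whatever feature comparison the patent intends; the threshold value and the return format are assumptions.

```python
import cv2
import numpy as np

def match_primitive_templates(frame: np.ndarray,
                              templates: list[np.ndarray],
                              threshold: float = 0.8) -> list[tuple]:
    """Slide each primitive template over the frame and keep the best
    match per template whenever its correlation clears the threshold."""
    hits = []
    for idx, template in enumerate(templates):
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, top_left = cv2.minMaxLoc(scores)
        if best >= threshold:
            hits.append((idx, best, top_left))  # which view matched, and where
    return hits
```

A hit against several templates of the same object seen from different angles is what would let the controller assemble the 3D primitive (64) in Step A023.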
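Claim 14 measures distance by triangulating the viewing angles of the first lens (51) and second lens (52). Below is a minimal sketch of the law-of-sines computation, assuming the two lenses sit a known baseline apart and each reports the interior angle between the baseline and its line of sight; the angle convention is an assumption.

```python
import math

def triangulate_range(baseline_m: float,
                      angle_a_rad: float,
                      angle_b_rad: float) -> tuple[float, float]:
    """Range from lens A and perpendicular depth from the baseline,
    via the law of sines on the triangle lens A - lens B - object."""
    apex = math.pi - angle_a_rad - angle_b_rad  # angle at the object
    if apex <= 0.0:
        raise ValueError("lines of sight do not converge")
    range_a = baseline_m * math.sin(angle_b_rad) / math.sin(angle_a_rad + angle_b_rad)
    depth = range_a * math.sin(angle_a_rad)     # distance from the baseline
    return range_a, depth
```

With a 0.3 m baseline and interior angles of 88° and 89°, the object sits about 5.7 m away, which illustrates why small angular errors dominate the measurement at long range.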
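Steps A041 to A043 of claim 17 and A051 to A053 of claim 18 reduce to finite-difference motion vectors plus a weighted correction. A minimal sketch follows; the linear blend and the idea of sourcing the correction vector from a detected turning feature are assumptions about one plausible reading of the claims.

```python
import numpy as np

def motion_vectors(positions: list[np.ndarray], dt: float) -> list[np.ndarray]:
    """Steps A041-A042: 3D motion vectors of the object under test from
    its positions at successive time points, by finite differences."""
    return [(p1 - p0) / dt for p0, p1 in zip(positions, positions[1:])]

def weighted_correction(raw_vector: np.ndarray,
                        correction_vector: np.ndarray,
                        weight: float) -> np.ndarray:
    """Step A053: blend the raw motion vector with the compensation
    correction vector; weight in [0, 1] expresses how much trust the
    turning feature (e.g. a blinking indicator) is given."""
    return (1.0 - weight) * raw_vector + weight * correction_vector
```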
TW110120115A 2021-06-03 2021-06-03 Compound eyes system, the vehicle using the compound eyes systme and the image processing method for the compound eyes system TW202248963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110120115A TW202248963A (en) 2021-06-03 2021-06-03 Compound eyes system, the vehicle using the compound eyes systme and the image processing method for the compound eyes system


Publications (1)

Publication Number Publication Date
TW202248963A 2022-12-16

Family

ID=85793581


Country Status (1)

Country Link
TW (1) TW202248963A (en)
