WO2019194255A1 - Arithmetic processing device, object identification system, object identification method, automobile, vehicle lamp - Google Patents
Arithmetic processing device, object identification system, object identification method, automobile, vehicle lamp
- Publication number
- WO2019194255A1 WO2019194255A1 PCT/JP2019/014889 JP2019014889W WO2019194255A1 WO 2019194255 A1 WO2019194255 A1 WO 2019194255A1 JP 2019014889 W JP2019014889 W JP 2019014889W WO 2019194255 A1 WO2019194255 A1 WO 2019194255A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- dimensional
- object identification
- data
- arithmetic processing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the present invention relates to an object identification system.
- Candidate sensors include LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), cameras, millimeter-wave radar, and ultrasonic sonar.
- Compared with the other sensors, LiDAR has the advantages that (i) objects can be recognized based on point cloud data, (ii) high-precision detection is possible even in bad weather because of active sensing, and (iii) a wide range can be measured, and it is expected to become the mainstream of automotive sensing systems in the future.
- As a method for detecting an object, a method may be considered in which a feature amount is defined for each category (type) of object and the position and category of the object are determined by pattern matching.
- The present invention has been made in this situation, and one exemplary purpose of an aspect thereof is to provide an arithmetic processing device, an object identification system, and an object identification method capable of identifying an object.
- An aspect of the present invention relates to an object identification method or system.
- In one aspect, point cloud data acquired by a three-dimensional sensor is converted into two-dimensional image data whose pixel values represent distance. By inputting this image data to a classifier such as a convolutional neural network, the position and category of an object included in the point cloud data are determined, and the object can thereby be identified.
- FIGS. 2A and 2B are diagrams illustrating point cloud data generated by a three-dimensional sensor.
- FIGS. 3A and 3B are diagrams explaining the relationship between point cloud data and image data. FIG. 4 is a diagram explaining the mapping from point cloud data to image data.
- FIGS. 5A and 5B are diagrams showing two typical scenes. FIG. 6 is a diagram showing the transition of the loss during learning.
- FIGS. 7A to 7E are diagrams showing some verification results. FIG. 8 is a block diagram of an automobile provided with an object identification system. FIG. 9 is a block diagram showing a vehicle lamp provided with an object identification system. FIG. 10 is a diagram showing an object identification system according to Modification 5.
- FIG. 11 is a diagram explaining an example of aspect ratio conversion processing by an aspect ratio conversion unit.
- One embodiment disclosed in this specification relates to an arithmetic processing device.
- The arithmetic processing device includes a two-dimensional conversion unit that converts point cloud data acquired by a three-dimensional sensor into two-dimensional image data whose pixel values represent distance, and a classifier that receives the two-dimensional image data as input and determines the position and category of an object included in the point cloud data.
- The two-dimensional conversion unit may convert the coordinates, expressed in the Euclidean coordinate system, of each point included in the point cloud data into a polar coordinate system (r, θ, φ), and convert the result into two-dimensional image data in which (θ, φ) gives the pixel position and the distance r gives the pixel value.
- The aspect ratio may be changed by dividing the image data into a plurality of regions and rearranging them. When the aspect ratio suited to the input of the classifier differs from that of the original image data, converting the aspect ratio can improve calculation efficiency.
- FIG. 1 is a diagram illustrating an object identification system 10 according to an embodiment.
- the object identification system 10 includes a three-dimensional sensor 20 and an arithmetic processing device 40.
- The three-dimensional sensor 20 is a LiDAR, a ToF (Time Of Flight) camera, a stereo camera, or the like, and generates point cloud data D1 describing a set of points p (a point group) that form the surfaces of surrounding objects OBJ.
- FIGS. 2A and 2B are diagrams illustrating the point cloud data D1 generated by the three-dimensional sensor 20.
- FIG. 2A is a perspective view showing the relationship between an object and a point group, and
- FIG. 2B shows a data structure of point group data describing the point group.
- The point group is a set of n points p1, p2, ..., pn.
- The point group data includes, for each of the points p1, p2, ..., pn, three-dimensional data indicating its position in the Euclidean coordinate system (x, y, z).
- The manner of assigning the point numbers i differs depending on the type and manufacturer of the three-dimensional sensor 20, and the numbering may be two-dimensional.
- the arithmetic processing unit 40 determines the position and category (type, class) of the object OBJ based on the point cloud data D1.
- Examples of the object category include pedestrians, bicycles, automobiles, and utility poles. For pedestrians, a pedestrian seen from the front, a pedestrian seen from the back, and a pedestrian seen from the side may be defined as the same category; the same applies to automobiles and bicycles. This definition is adopted in the present embodiment.
- The arithmetic processing device 40 can be implemented by a combination of a processor (hardware) such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a microcomputer, and a software program executed by the processor (hardware).
- the arithmetic processing unit 40 may be a combination of a plurality of processors.
- the arithmetic processing unit 40 includes a two-dimensional conversion unit 42 and a convolutional neural network 44.
- The two-dimensional conversion unit 42 and the convolutional neural network 44 are not necessarily independent in terms of hardware; they may represent functions realized by hardware such as a CPU executing a software program.
- the two-dimensional conversion unit 42 converts the point cloud data D1 acquired by the three-dimensional sensor 20 into two-dimensional image data D2 having the distance r as a pixel value.
- the distance r may be represented by, for example, 8-bit 256 gradations.
- The convolutional neural network 44 is a classifier that receives the image data D2 as input, determines the position and category of the object OBJ included in the point cloud data D1, and outputs final data D3 indicating the likelihood (belonging probability) for each position and category.
- the convolutional neural network 44 is implemented based on a prediction model generated by machine learning.
- Since the convolutional neural network 44 can employ known techniques widely used in image recognition, a detailed description thereof is omitted.
- FIGS. 3A and 3B are diagrams for explaining the relationship between the point cloud data D1 and the image data D2.
- the coordinates of each point p i included in the point cloud data D1 are represented by the Euclidean coordinate system (x i , y i , z i ).
- the two-dimensional conversion unit 42 converts the Euclidean coordinate system (x i , y i , z i ) into a polar coordinate system (r i , ⁇ i , ⁇ i ).
- Here, r is the radial distance, θ is the polar angle (zenith angle), and φ is the azimuth angle (declination angle).
- FIG. 3B shows how the point pi is mapped.
- FIG. 4 is a diagram for explaining mapping from point cloud data to image data. All the points included in the point cloud data (p 1 to p 19 in this example) are mapped to a two-dimensional data structure, and image data D2 is generated. A maximum value (or 0 or a negative value) may be mapped as a pixel value to a pixel where there is no point p to be mapped.
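The conversion described above can be summarized in a short sketch. The following Python/NumPy code is not part of the patent; the image resolution, the maximum range, the treatment of pixels with no corresponding point, and the 8-bit quantization are illustrative assumptions.

```python
import numpy as np

def point_cloud_to_range_image(points, h=64, w=1024, max_range=100.0):
    """Convert an (N, 3) point cloud in Euclidean coordinates (x, y, z) into a
    2D range image: each point is expressed in polar coordinates (r, theta, phi),
    (theta, phi) is quantized to a pixel position, and the distance r becomes
    the pixel value."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x * x + y * y + z * z)                              # radial distance
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))  # zenith angle in [0, pi]
    phi = np.arctan2(y, x)                                          # azimuth angle in [-pi, pi]

    # Quantize the angles into pixel indices.
    row = np.clip((theta / np.pi * h).astype(int), 0, h - 1)
    col = np.clip(((phi + np.pi) / (2.0 * np.pi) * w).astype(int), 0, w - 1)

    # Pixels with no corresponding point keep the maximum value.
    image = np.full((h, w), max_range, dtype=np.float32)
    np.minimum.at(image, (row, col), np.minimum(r, max_range))  # keep the nearest return per pixel

    # Express the distance with 8 bits (256 gradations), as in the embodiment.
    return (image / max_range * 255.0).astype(np.uint8)
```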
- the above is the configuration of the object identification system 10. Next, a result of verifying object recognition by the arithmetic processing unit 40 will be described.
- The verification was performed not with point cloud data D1 generated by the three-dimensional sensor 20 but with distance data generated by 3D computer graphics.
- the distance data is two-dimensional data of 300 ⁇ 300 pixels, corresponds to the above-described image data D2, and the pixel value is a distance.
- FIGS. 5A and 5B are diagrams showing two typical scenes.
- the scene in FIG. 5A is a highway, where two trucks and two motorcycles are located in front.
- the scene in FIG. 5B is an urban area, where two pedestrians, three cars, and one motorcycle are located in front.
- the upper part of each figure represents distance data, and the lower part represents a camera image projected onto a two-dimensional plane.
- An SSD (Single Shot MultiBox Detector) was used as the convolutional neural network 44.
- The SSD is a neural network composed of a plurality of convolution layers, and six convolution layers having different sizes output the position of the object and the likelihood of each category. The plurality of estimates obtained from these six layers are integrated by an output stage called Non-Maximum Suppression, which merges estimates whose object regions overlap strongly, to obtain the final output.
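For background on that last step, the following is a minimal greedy non-maximum suppression sketch in Python/NumPy. It is a generic illustration rather than the SSD's exact output layer; the box format [x1, y1, x2, y2] and the IoU threshold are assumptions.

```python
import numpy as np

def non_maximum_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and suppress all
    remaining boxes whose IoU with it exceeds the threshold.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) array."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of the kept box with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_rest - inter)
        order = rest[iou <= iou_threshold]   # drop strongly overlapping estimates
    return keep
```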
- Teacher data was collected using PreScan, which is commercially available as an advanced driver-assistance system (ADAS) development support tool.
- The teacher data is a set of two-dimensional distance data and annotation data describing the position and category of the objects in that data.
- The two-dimensional distance data should preferably have the same format as the output of the three-dimensional sensor 20 used in the object identification system 10, but here a virtual depth camera was used.
- A total of 713 items of teacher data were finally created.
- FIG. 6 is a diagram illustrating a transition of loss with respect to learning.
- FIGS. 7A to 7E are diagrams showing some verification results.
- The bounding boxes represent the positions of the detected objects, and the category and likelihood are shown together. Although some missed detections can be seen, pedestrians, trucks, cars, and motorcycles are detected correctly.
- The object identification system 10 can detect positions and identify categories by applying a convolutional neural network designed for image data to two-dimensional distance data.
- the object identification system 10 also has an advantage that a clustering process for dividing the point cloud data for each object is unnecessary.
- FIG. 8 is a block diagram of an automobile provided with the object identification system 10.
- the automobile 100 includes headlamps 102L and 102R.
- Of the object identification system 10, at least the three-dimensional sensor 20 is built into at least one of the headlamps 102L and 102R.
- The headlamp 102 is located at the foremost end of the vehicle body and is the most advantageous installation location for the three-dimensional sensor 20 when detecting surrounding objects.
- the arithmetic processing unit 40 may be built in the headlamp 102 or provided on the vehicle side. For example, in the arithmetic processing unit 40, intermediate data may be generated inside the headlamp 102, and final data generation may be left to the vehicle side.
- FIG. 9 is a block diagram showing a vehicular lamp 200 including the object identification system 10.
- the vehicular lamp 200 includes a light source 202, a lighting circuit 204, and an optical system 206. Further, the vehicular lamp 200 is provided with a three-dimensional sensor 20 and an arithmetic processing unit 40. Information regarding the object OBJ detected by the arithmetic processing unit 40 is transmitted to the vehicle ECU 104. The vehicle ECU may perform automatic driving based on this information.
- the information regarding the object OBJ detected by the arithmetic processing device 40 may be used for light distribution control of the vehicular lamp 200.
- the lamp ECU 208 generates an appropriate light distribution pattern based on information regarding the type and position of the object OBJ generated by the arithmetic processing device 40.
- the lighting circuit 204 and the optical system 206 operate so that the light distribution pattern generated by the lamp ECU 208 is obtained.
- In the embodiment, the three-dimensional point cloud data is converted into the polar coordinate system (r, θ, φ) and then into two-dimensional image data in which (θ, φ) gives the pixel position and the distance r gives the pixel value. However, this is not restrictive, and there are several variations regarding the conversion to the image data D2.
- each point included in the three-dimensional point group data may be converted from the Euclidean coordinate system to the cylindrical coordinate system (r, z, ⁇ ), where (z, ⁇ ) is the pixel position and r is the pixel value.
- each point included in the three-dimensional point group data may be projected on a two-dimensional plane, and the distance r may be a pixel value.
- As the projection method, perspective projection or parallel projection can be used.
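As a rough illustration of the cylindrical-coordinate variation mentioned above, the following Python/NumPy sketch (not taken from the patent) maps (z, φ) to the pixel position and uses the horizontal distance r as the pixel value; the resolution and the assumed z range are illustrative choices.

```python
import numpy as np

def point_cloud_to_cylindrical_image(points, h=64, w=1024, z_min=-2.0, z_max=4.0):
    """Map each point (x, y, z) to cylindrical coordinates (r, z, phi),
    quantize (z, phi) to a pixel position, and store the radius r as the value."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                      # horizontal (cylindrical) radius
    phi = np.arctan2(y, x)                  # azimuth angle

    row = np.clip(((z_max - z) / (z_max - z_min) * h).astype(int), 0, h - 1)
    col = np.clip(((phi + np.pi) / (2.0 * np.pi) * w).astype(int), 0, w - 1)

    image = np.full((h, w), np.inf, dtype=np.float32)
    np.minimum.at(image, (row, col), r)     # keep the nearest return per pixel
    image[np.isinf(image)] = 0.0            # empty pixels: 0 (a maximum value is also possible)
    return image
```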
- Modification 2: An object may be defined as a different category for each direction from which it is viewed. That is, the same object may be defined as belonging to different categories depending on whether or not it directly faces the host vehicle. This is useful for estimating the moving direction of the object OBJ.
- Modification 3: The arithmetic processing device 40 may be configured only by hardware, using an FPGA, a dedicated ASIC (Application Specific Integrated Circuit), or the like.
- Modification 4: Although the in-vehicle object identification system 10 has been described in the embodiment, the application of the present invention is not limited thereto. For example, the system can be fixedly installed on a traffic light, a traffic sign, or other traffic infrastructure and applied to fixed-point observation.
- FIG. 10 is a diagram illustrating an object identification system 10A according to the fifth modification.
- Many convolutional neural networks developed for object detection by image recognition support common image resolutions and aspect ratios, and the assumed aspect ratios are approximately 1:1, 4:3, or 16:9.
- On the other hand, when a low-cost three-dimensional sensor 20 is used, the resolution in the elevation angle direction (height direction) may be extremely small compared with the resolution in the horizontal direction.
- For example, the Quanergy M8, a LiDAR sold by Quanergy Systems of the USA, has a resolution of 10400 in the lateral direction over a 360-degree scan, but its resolution in the elevation angle direction is only 8, so the aspect ratio (10400:8) is extremely large.
- the arithmetic processing unit 40A in FIG. 10 includes an aspect ratio conversion unit 46 that converts the aspect ratio of the image data D2 obtained by the two-dimensional conversion unit 42.
- The convolutional neural network 44 receives the image data D2′ whose aspect ratio has been converted.
- FIG. 11 is a diagram for explaining an example of the aspect ratio conversion processing by the aspect ratio conversion unit 46.
- the original image data D2 may be divided into a plurality of regions R1 and R2, and the aspect ratio may be changed by rearranging them.
- the calculation efficiency can be improved by converting the aspect ratio.
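A minimal sketch of this rearrangement, assuming the regions are equal-width vertical strips that are stacked vertically (the number of splits and the stacking order are illustrative choices, not specified by the patent):

```python
import numpy as np

def convert_aspect_ratio(image, num_splits=2):
    """Divide a very wide range image horizontally into `num_splits` regions
    (R1, R2, ...) and stack them vertically, bringing the aspect ratio closer
    to what the classifier expects."""
    h, w = image.shape[:2]
    assert w % num_splits == 0, "width must be divisible by the number of regions"
    regions = np.split(image, num_splits, axis=1)  # R1, R2, ... side by side
    return np.concatenate(regions, axis=0)         # rearranged one above the other

# Example: an 8 x 10400 range image becomes 16 x 5200 after one call;
# applying the function repeatedly keeps halving the width while doubling the height.
```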
- As the classifier algorithm, YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), R-CNN (Region-based Convolutional Neural Network), SPPnet (Spatial Pyramid Pooling), Faster R-CNN, DSSD (Deconvolution-SSD), Mask R-CNN, or the like can be adopted, or algorithms developed in the future can be adopted. Alternatively, a linear SVM or the like may be used.
- the present invention relates to an object identification system.
- DESCRIPTION OF SYMBOLS: 10 ... object identification system, 20 ... three-dimensional sensor, 40 ... arithmetic processing device, 42 ... two-dimensional conversion unit, 44 ... convolutional neural network, D1 ... point cloud data, D2 ... image data, 100 ... automobile, 102 ... headlamp, 104 ... vehicle ECU, 200 ... vehicle lamp, 202 ... light source, 204 ... lighting circuit, 206 ... optical system, 208 ... lamp ECU.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Traffic Control Systems (AREA)
- Lighting Device Outwards From Vehicle And Optical Signal (AREA)
Abstract
Description
One embodiment disclosed in this specification relates to an arithmetic processing device. The arithmetic processing device includes a two-dimensional conversion unit that converts point cloud data acquired by a three-dimensional sensor into two-dimensional image data whose pixel values represent distance, and a classifier that receives the image data as input and determines the position and category of an object included in the point cloud data.
Hereinafter, the present invention will be described based on preferred embodiments with reference to the drawings. The same or equivalent constituent elements, members, and processes shown in the drawings are denoted by the same reference numerals, and redundant description is omitted as appropriate. The embodiments do not limit the invention but are illustrative, and not all of the features described in the embodiments or their combinations are necessarily essential to the invention.
In the embodiment, the three-dimensional point cloud data is converted into the polar coordinate system (r, θ, φ) and then into two-dimensional image data in which (θ, φ) gives the pixel position and the distance r gives the pixel value; however, this is not restrictive, and there are several variations regarding the conversion to the image data D2.
An object may be defined as a different category for each direction from which it is viewed. That is, the same object may be defined as belonging to different categories depending on whether or not it directly faces the host vehicle. This is useful for estimating the moving direction of the object OBJ.
The arithmetic processing device 40 may be configured only by hardware, using an FPGA, a dedicated ASIC (Application Specific Integrated Circuit), or the like.
Although the in-vehicle object identification system 10 has been described in the embodiment, the application of the present invention is not limited thereto; for example, it can also be applied to fixed-point observation, with the system fixedly installed on a traffic light, a traffic sign, or other traffic infrastructure.
FIG. 10 is a diagram showing an object identification system 10A according to Modification 5. Many convolutional neural networks developed for object detection by image recognition support common image resolutions and aspect ratios, and the assumed aspect ratios are approximately 1:1, 4:3, or 16:9. On the other hand, when a low-cost three-dimensional sensor 20 is used, the resolution in the elevation angle direction (height direction) may be extremely small compared with the resolution in the horizontal direction. For example, the Quanergy M8, a LiDAR sold by Quanergy Systems of the USA, has a resolution of 10400 in the lateral direction over a 360-degree scan, but its resolution in the elevation angle direction is only 8, so the aspect ratio (10400:8) is extremely large.
As the classifier algorithm, YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), R-CNN (Region-based Convolutional Neural Network), SPPnet (Spatial Pyramid Pooling), Faster R-CNN, DSSD (Deconvolution-SSD), Mask R-CNN, or the like can be adopted, or algorithms developed in the future can be adopted. Alternatively, a linear SVM or the like may be used.
Claims (10)
- An arithmetic processing device comprising:
a two-dimensional conversion unit that converts point cloud data acquired by a three-dimensional sensor into two-dimensional image data whose pixel values represent distance; and
a classifier that receives the image data as input and determines the position and category of an object included in the point cloud data. - The arithmetic processing device according to claim 1, wherein the two-dimensional conversion unit converts coordinates, in the Euclidean coordinate system, of each point included in the point cloud data into a polar coordinate system (r, θ, φ), and converts the result into the two-dimensional image data in which (θ, φ) gives the pixel position and the distance r gives the pixel value.
- The arithmetic processing device according to claim 1 or 2, wherein the aspect ratio is changed by dividing the image data into a plurality of regions and rearranging them.
- An object identification system comprising:
a three-dimensional sensor; and
the arithmetic processing device according to any one of claims 1 to 3. - An automobile comprising the object identification system according to claim 4.
- The automobile according to claim 5, wherein the three-dimensional sensor is built into a headlamp.
- A vehicle lamp comprising the object identification system according to claim 4.
- An object identification method comprising:
generating point cloud data with a three-dimensional sensor;
converting the point cloud data into two-dimensional image data whose pixel values represent distance; and
determining the position and category of an object included in the point cloud data by inputting the image data to a classifier. - The object identification method according to claim 8, wherein the converting includes:
converting the point cloud data, which is a set of three-dimensional data in the Euclidean coordinate system, into a polar coordinate system (r, θ, φ); and
converting the result into the two-dimensional image data in which (θ, φ) gives the pixel position and the distance r gives the pixel value. - The object identification method according to claim 8 or 9, further comprising changing the aspect ratio by dividing the image data into a plurality of regions and rearranging them.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980024210.5A CN111989709B (zh) | 2018-04-05 | 2019-04-03 | 运算处理装置、对象识别系统、对象识别方法、汽车、车辆用灯具 |
EP19781388.4A EP3779871A4 (en) | 2018-04-05 | 2019-04-03 | ARITHMETIC PROCESSING DEVICE, OBJECT IDENTIFICATION SYSTEM, OBJECT IDENTIFICATION METHOD, AUTOMOTIVE, AND VEHICLE LIGHTING APPARATUS |
JP2020512307A JP7217741B2 (ja) | 2018-04-05 | 2019-04-03 | 演算処理装置、オブジェクト識別システム、オブジェクト識別方法、自動車、車両用灯具 |
US17/061,914 US11341604B2 (en) | 2018-04-05 | 2020-10-02 | Processing device for object identification |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-073353 | 2018-04-05 | ||
JP2018073353 | 2018-04-05 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/061,914 Continuation US11341604B2 (en) | 2018-04-05 | 2020-10-02 | Processing device for object identification |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019194255A1 true WO2019194255A1 (ja) | 2019-10-10 |
Family
ID=68100562
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/014889 WO2019194255A1 (ja) | 2018-04-05 | 2019-04-03 | 演算処理装置、オブジェクト識別システム、オブジェクト識別方法、自動車、車両用灯具 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11341604B2 (ja) |
EP (1) | EP3779871A4 (ja) |
JP (1) | JP7217741B2 (ja) |
CN (1) | CN111989709B (ja) |
WO (1) | WO2019194255A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111612689A (zh) * | 2020-05-28 | 2020-09-01 | 上海联影医疗科技有限公司 | 医学图像处理方法、装置、计算机设备和可读存储介质 |
JP2021108091A (ja) * | 2019-12-27 | 2021-07-29 | 財團法人工業技術研究院Industrial Technology Research Institute | 2d画像のラベリング情報に基づく3d画像ラベリング方法及び3d画像ラベリング装置 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110147706B (zh) * | 2018-10-24 | 2022-04-12 | 腾讯科技(深圳)有限公司 | 障碍物的识别方法和装置、存储介质、电子装置 |
CN113436273A (zh) * | 2021-06-28 | 2021-09-24 | 南京冲浪智行科技有限公司 | 一种3d场景定标方法、定标装置及其定标应用 |
TWI769915B (zh) | 2021-08-26 | 2022-07-01 | 財團法人工業技術研究院 | 投射系統及應用其之投射校準方法 |
CN114076595B (zh) * | 2022-01-19 | 2022-04-29 | 浙江吉利控股集团有限公司 | 道路高精度地图生成方法、装置、设备及存储介质 |
TWI843251B (zh) | 2022-10-25 | 2024-05-21 | 財團法人工業技術研究院 | 目標追蹤系統及應用其之目標追蹤方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004362469A (ja) * | 2003-06-06 | 2004-12-24 | Softopia Japan Foundation | アクティブセンサの動物体検出装置及び動物体検出方法、並びに動物体検出プログラム |
JP2009516278A (ja) * | 2005-11-18 | 2009-04-16 | ローベルト ボツシユ ゲゼルシヤフト ミツト ベシユレンクテル ハフツング | 光学式レインセンサが組み込まれているヘッドライトモジュール |
JP2009098023A (ja) | 2007-10-17 | 2009-05-07 | Toyota Motor Corp | 物体検出装置及び物体検出方法 |
JP2013067343A (ja) * | 2011-09-26 | 2013-04-18 | Koito Mfg Co Ltd | 車両用配光制御システム |
JP2017056935A (ja) | 2015-09-14 | 2017-03-23 | トヨタ モーター エンジニアリング アンド マニュファクチャリング ノース アメリカ,インコーポレイティド | 3dセンサにより検出されたオブジェクトの分類 |
JP2017138660A (ja) * | 2016-02-01 | 2017-08-10 | トヨタ自動車株式会社 | 物体検出方法、物体検出装置、およびプログラム |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3761559B1 (ja) * | 2004-10-05 | 2006-03-29 | 株式会社ソニー・コンピュータエンタテインメント | 画像出力方法 |
RU2009104869A (ru) * | 2009-02-13 | 2010-08-20 | Государственное образовательное учреждение высшего профессионального образования Курский государственный технический университет (R | Способ распознавания трехмерных объектов по единственному двумерному изображению и устройство для его реализации |
CN102542260A (zh) * | 2011-12-30 | 2012-07-04 | 中南大学 | 一种面向无人驾驶车的道路交通标志识别方法 |
JP6250575B2 (ja) * | 2015-03-06 | 2017-12-20 | 富士フイルム株式会社 | バックライトユニットおよび画像表示装置 |
DE102015007172A1 (de) * | 2015-06-03 | 2016-02-18 | Daimler Ag | Fahrzeugscheinwerfer und Fahrzeug |
CN204978398U (zh) * | 2015-07-21 | 2016-01-20 | 张进 | 汽车组合激光雷达前大灯及汽车 |
WO2017067764A1 (en) * | 2015-10-19 | 2017-04-27 | Philips Lighting Holding B.V. | Harmonized light effect control across lighting system installations |
US11094208B2 (en) * | 2016-09-30 | 2021-08-17 | The Boeing Company | Stereo camera system for collision avoidance during aircraft surface operations |
CN206690990U (zh) * | 2017-03-07 | 2017-12-01 | 广州法锐科技有限公司 | 一种汽车用组合式激光雷达前照灯 |
CN107180409B (zh) * | 2017-03-31 | 2021-06-25 | 河海大学 | 一种针对弯曲骨架型对象三维点云的广义圆柱投影方法 |
-
2019
- 2019-04-03 JP JP2020512307A patent/JP7217741B2/ja active Active
- 2019-04-03 EP EP19781388.4A patent/EP3779871A4/en not_active Withdrawn
- 2019-04-03 CN CN201980024210.5A patent/CN111989709B/zh active Active
- 2019-04-03 WO PCT/JP2019/014889 patent/WO2019194255A1/ja active Application Filing
-
2020
- 2020-10-02 US US17/061,914 patent/US11341604B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004362469A (ja) * | 2003-06-06 | 2004-12-24 | Softopia Japan Foundation | アクティブセンサの動物体検出装置及び動物体検出方法、並びに動物体検出プログラム |
JP2009516278A (ja) * | 2005-11-18 | 2009-04-16 | ローベルト ボツシユ ゲゼルシヤフト ミツト ベシユレンクテル ハフツング | 光学式レインセンサが組み込まれているヘッドライトモジュール |
JP2009098023A (ja) | 2007-10-17 | 2009-05-07 | Toyota Motor Corp | 物体検出装置及び物体検出方法 |
JP2013067343A (ja) * | 2011-09-26 | 2013-04-18 | Koito Mfg Co Ltd | 車両用配光制御システム |
JP2017056935A (ja) | 2015-09-14 | 2017-03-23 | トヨタ モーター エンジニアリング アンド マニュファクチャリング ノース アメリカ,インコーポレイティド | 3dセンサにより検出されたオブジェクトの分類 |
JP2017138660A (ja) * | 2016-02-01 | 2017-08-10 | トヨタ自動車株式会社 | 物体検出方法、物体検出装置、およびプログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP3779871A4 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021108091A (ja) * | 2019-12-27 | 2021-07-29 | 財團法人工業技術研究院Industrial Technology Research Institute | 2d画像のラベリング情報に基づく3d画像ラベリング方法及び3d画像ラベリング装置 |
JP7084444B2 (ja) | 2019-12-27 | 2022-06-14 | 財團法人工業技術研究院 | 2d画像のラベリング情報に基づく3d画像ラベリング方法及び3d画像ラベリング装置 |
CN111612689A (zh) * | 2020-05-28 | 2020-09-01 | 上海联影医疗科技有限公司 | 医学图像处理方法、装置、计算机设备和可读存储介质 |
CN111612689B (zh) * | 2020-05-28 | 2024-04-05 | 上海联影医疗科技股份有限公司 | 医学图像处理方法、装置、计算机设备和可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
EP3779871A4 (en) | 2021-05-19 |
JPWO2019194255A1 (ja) | 2021-04-22 |
JP7217741B2 (ja) | 2023-02-03 |
CN111989709B (zh) | 2024-07-02 |
EP3779871A1 (en) | 2021-02-17 |
CN111989709A (zh) | 2020-11-24 |
US11341604B2 (en) | 2022-05-24 |
US20210019860A1 (en) | 2021-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019194255A1 (ja) | 演算処理装置、オブジェクト識別システム、オブジェクト識別方法、自動車、車両用灯具 | |
El Madawi et al. | Rgb and lidar fusion based 3d semantic segmentation for autonomous driving | |
US10671860B2 (en) | Providing information-rich map semantics to navigation metric map | |
US11458912B2 (en) | Sensor validation using semantic segmentation information | |
CN116685873A (zh) | 一种面向车路协同的感知信息融合表征及目标检测方法 | |
US11783507B2 (en) | Camera calibration apparatus and operating method | |
JP4486997B2 (ja) | 車両周辺監視装置 | |
US10657392B2 (en) | Object detection device, object detection method, and program | |
JP7371053B2 (ja) | 電子機器、移動体、撮像装置、および電子機器の制御方法、プログラム、記憶媒体 | |
JP2017181476A (ja) | 車両位置検出装置、車両位置検出方法及び車両位置検出用コンピュータプログラム | |
CN111033316B (zh) | 识别传感器及其控制方法、汽车、车辆用灯具、对象识别系统、对象的识别方法 | |
JP6458577B2 (ja) | 画像測距装置 | |
WO2019198789A1 (ja) | オブジェクト識別システム、自動車、車両用灯具、オブジェクトのクラスタリング方法 | |
JP2001243456A (ja) | 障害物検出装置及び障害物検出方法 | |
US11915436B1 (en) | System for aligning sensor data with maps comprising covariances | |
JP7288460B2 (ja) | 車載用物体識別システム、自動車、車両用灯具、分類器の学習方法、演算処理装置 | |
CN113611008B (zh) | 一种车辆行驶场景采集方法、装置、设备及介质 | |
KR102368262B1 (ko) | 다중 관측정보를 이용한 신호등 배치정보 추정 방법 | |
US11532100B2 (en) | Method for environmental acquisition, data processing unit | |
WO2022133986A1 (en) | Accuracy estimation method and system | |
JP2022076876A (ja) | 位置推定装置、位置推定方法、及び位置推定プログラム | |
US20230286548A1 (en) | Electronic instrument, movable apparatus, distance calculation method, and storage medium | |
US20230009766A1 (en) | Method and Processing Unit for Processing Sensor Data of Several Different Sensors with an Artificial Neural Network in a Vehicle | |
JP4743797B2 (ja) | 車両周辺監視装置、車両、車両周辺監視プログラム | |
Nedevschi et al. | On-board stereo sensor for intersection driving assistance architecture and specification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19781388 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020512307 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2019781388 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2019781388 Country of ref document: EP Effective date: 20201105 |