WO2019194256A1 - Arithmetic processing device, object identification system, learning method, automobile, vehicle lamp - Google Patents
Arithmetic processing device, object identification system, learning method, automobile, vehicle lamp
- Publication number
- WO2019194256A1 (PCT/JP2019/014890)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- arithmetic processing
- object recognition
- unit
- conversion unit
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q1/00—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
- B60Q1/0017—Devices integrating an element dedicated to another function
- B60Q1/0023—Devices integrating an element dedicated to another function the element being a sensor, e.g. distance sensor, camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
Definitions
- the present invention relates to an object identification system.
- the object recognition system includes a sensor and a neural network arithmetic unit that processes the output of the sensor.
- candidate sensors include LiDAR (Light Detection and Ranging / Laser Imaging Detection and Ranging), millimeter-wave radar, and ultrasonic sonar. Among these, the camera provides high resolution at the lowest cost and is increasingly installed in vehicles.
- the arithmetic processing device that processes the output image of the camera is composed of a convolutional neural network (CNN). The CNN learns (is trained) using images captured in various scenes.
- an in-vehicle object recognition system is required to operate at night with the same accuracy as in the daytime.
- at night there is no sunlight, and the camera captures the reflected light of the vehicle's own headlamps. An object near the own vehicle therefore appears bright while a distant object appears dark, and an image completely different from a daytime one is acquired.
- in addition, automobiles have their headlights and tail lamps turned on at night, and therefore have an appearance different from that in the daytime.
- the present invention has been made in view of the above problems, and one exemplary purpose of an aspect thereof is to provide an object recognition system capable of obtaining a high identification rate in various scenes, together with an arithmetic processing device and a learning method therefor.
- One embodiment of the present invention relates to an arithmetic processing device that recognizes an object based on image data.
- the arithmetic processing device comprises an object recognition unit that identifies an object based on image data, and a conversion unit, which is a neural network provided upstream of the object recognition unit, that converts a first image acquired by a camera into a second image and inputs the second image to the object recognition unit.
- another arithmetic processing device includes a conversion unit, configured as a neural network, that converts a sensor output into intermediate data, and an object recognition unit that identifies an object based on the intermediate data.
- the conversion unit converts the sensor output into intermediate data that would be obtained in the environment in which the learning data used for training the object recognition unit was acquired.
- according to these aspects, a high identification rate can be obtained in various scenes.
- the arithmetic processing unit recognizes an object based on the image data.
- the arithmetic processing device comprises an object recognition unit that identifies an object based on the image data, and a conversion unit, which is a neural network provided upstream of the object recognition unit, that converts the first image acquired by the camera into a second image and inputs the second image to the object recognition unit.
- the second image may be obtained by correcting the gradation of the first image so that it approximates the gradation of the learning data used for training the object recognition unit.
- the identification rate can be increased by inputting an image close to the learning data to the object recognition unit.
- the conversion unit may be trained so as to increase the recognition rate by referring to the recognition rate of the object recognition unit.
- the second image may be an image that would be obtained if the same scene as the first image were captured in the environment in which the learning data used for training the object recognition unit was captured.
- the learning data is captured in the daytime, and the conversion unit may convert the first image captured at night into a second image that appears to have been captured in the daytime.
- the conversion unit may receive a plurality of consecutive frames as input. By using a plurality of consecutive frames as input, the conversion unit can perform conversion processing based on the temporal feature amount.
- the arithmetic processing unit includes a neural network, and includes a conversion unit that converts sensor output into intermediate data, and an object recognition unit that identifies an object based on the intermediate data.
- the conversion unit converts the sensor output into intermediate data that would be obtained in the environment in which the learning data used for training the object recognition unit was acquired. For example, the accuracy of a distance measuring sensor decreases in rain or dense fog. The conversion unit compensates for this reduction in accuracy, and the identification rate can be improved by providing the object recognition unit with intermediate data as if it had been acquired in fine weather.
- FIG. 1 is a diagram illustrating an object identification system 10 according to an embodiment.
- the object identification system 10 is mounted on a vehicle and can be used for automated driving or for light distribution control of a headlamp, but its application is not limited to these. Part of the description of this embodiment assumes that the system is mounted on a vehicle.
- the object identification system 10 includes a camera 20 and an arithmetic processing unit 40.
- the camera 20 is an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor, and outputs image data (first image) IMG1 at a predetermined frame rate.
- the arithmetic processing unit 40 recognizes an object based on the image data IMG1. Specifically, the arithmetic processing unit 40 determines the position and category of the object included in the image data IMG1.
- the arithmetic processing device 40 can be implemented by a combination of a processor (hardware) such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a microcomputer, and a software program executed by the processor (hardware).
- the arithmetic processing unit 40 may be a combination of a plurality of processors.
- examples of object categories include pedestrians, bicycles, automobiles, and utility poles. For pedestrians, a pedestrian seen from the front, a pedestrian seen from the back, and a pedestrian seen from the side may be defined as the same category. The same applies to automobiles and bicycles.
- the arithmetic processing unit 40 includes a conversion unit 42 and an object recognition unit 44.
- the object recognition unit 44 employs a convolutional neural network algorithm using deep learning.
- for example, Faster R-CNN can be used, but the algorithm is not limited to this; YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), R-CNN (Region-based Convolutional Neural Network), SPPnet (Spatial Pyramid Pooling network), Mask R-CNN, or algorithms developed in the future may be adopted.
- the conversion unit 42 is a neural network provided upstream (in the previous stage) of the object recognition unit 44; it converts the first image IMG1 acquired by the camera 20 into the second image IMG2 and inputs the second image IMG2 to the object recognition unit 44.
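As a structural illustration of this two-stage arrangement, the following Python sketch wires a placeholder conversion unit upstream of a placeholder recognition unit. The class names and the trivial brighten/threshold logic are illustrative stand-ins, not the patent's actual neural networks; only the data flow (IMG1 → conversion → IMG2 → recognition) follows the text.

```python
import numpy as np

class ConversionUnit:
    """Stand-in for the neural-network conversion unit 42."""
    def convert(self, img1: np.ndarray) -> np.ndarray:
        # Placeholder: brighten a dark night image toward daytime gradation.
        return np.clip(img1 * 3.0, 0.0, 255.0)

class ObjectRecognitionUnit:
    """Stand-in for the CNN-based object recognition unit 44."""
    def identify(self, img2: np.ndarray) -> list:
        # Placeholder: report bright pixels as detected "objects".
        ys, xs = np.nonzero(img2 > 128.0)
        return [{"category": "object", "position": (int(y), int(x))}
                for y, x in zip(ys, xs)]

class ArithmeticProcessingUnit:
    """Stand-in for the arithmetic processing device 40."""
    def __init__(self):
        self.conversion_unit = ConversionUnit()
        self.object_recognition_unit = ObjectRecognitionUnit()

    def process(self, img1: np.ndarray) -> list:
        img2 = self.conversion_unit.convert(img1)            # IMG1 -> IMG2
        return self.object_recognition_unit.identify(img2)   # IMG2 -> detections

unit = ArithmeticProcessingUnit()
frame = np.zeros((4, 4))
frame[2, 3] = 60.0   # a pixel too dim for the recognizer until converted
detections = unit.process(frame)
```

The dim pixel (value 60, below the recognizer's threshold of 128) is detected only because the conversion stage brightens it first, which is the core idea of placing the conversion unit upstream.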
- FIG. 2 is a diagram for explaining the conversion process of the conversion unit 42.
- the object recognition unit 44 is trained using an image IMG3 (learning data) taken under a certain environment (referred to as a standard environment).
- image IMG3 learning data
- images obtained by shooting various scenes are used.
- the standard environment is daytime.
- the environment (referred to as the actual environment) in which the first image IMG1 is acquired may differ greatly from the standard environment. Most typically, this is the case where the actual environment is nighttime. In this case, there is a large gap between the first image IMG1 and the learning data IMG3.
- the conversion unit 42 converts the first image IMG1 into the second image IMG2 so as to fill this gap.
- the converted second image IMG2 approximates the learning data IMG3. More specifically, the conversion unit 42 corrects the gradation of the first image IMG1 so that it approximates the gradation of the learning data IMG3, and generates the second image IMG2.
- the second image IMG2 corresponds to an image that would be obtained by photographing the same scene as the first image IMG1 in the standard environment in which the learning data IMG3 used for training the object recognition unit 44 was photographed.
- in the following description, the standard environment is assumed to be daytime and the actual environment nighttime.
- the conversion unit 42 may adjust brightness and contrast for each pixel, for each area, or uniformly over the entire screen.
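One minimal way to realize a uniform whole-screen gradation correction is to shift the night image's pixel statistics toward those of the daytime learning data. The NumPy sketch below assumes a global mean/standard-deviation matching rule; this is an illustrative choice standing in for the learned conversion, not the patent's method.

```python
import numpy as np

def match_gradation(night_img, day_mean, day_std):
    """Correct the gradation (brightness/contrast) of a night image so its
    pixel statistics approximate those of the daytime training data.
    Hypothetical uniform whole-screen correction; the patent also allows
    per-pixel or per-area adjustment."""
    m, s = night_img.mean(), night_img.std()
    out = (night_img - m) / max(s, 1e-6) * day_std + day_mean
    return np.clip(out, 0.0, 255.0)

# Example: a dark, low-contrast "night" image pulled toward daytime statistics.
night = np.random.default_rng(0).normal(30.0, 10.0, size=(64, 64))
day_like = match_gradation(night, day_mean=120.0, day_std=40.0)
```

After correction, the image's mean and contrast are close to the assumed daytime targets, so it better resembles the distribution the recognizer was trained on.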
- FIG. 3 is a diagram illustrating an example of an image used for learning by the conversion unit 42.
- FIG. 3 shows an image IMG_DAY photographed in the daytime and an image IMG_LIGHT photographed at night, both of the same scene.
- by training with such day/night pairs, a neural network capable of converting a night image into a day image can be constructed.
- FIG. 4 is a flowchart of learning of the arithmetic processing unit 40.
- first, the object recognition unit 44 is trained with standard-environment images IMG_DAY acquired in a predetermined standard environment (daytime) as learning data (S100).
- then, the conversion unit 42 is trained using sets of a standard-environment image (for example, IMG_DAY) acquired in the standard environment and a real-environment image (for example, IMG_LIGHT) acquired in a different environment (S102).
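Step S102 can be illustrated in miniature: below, the "conversion unit" is reduced to a single gain/offset pair fitted by least squares on one synthetic day/night image pair. This is a deliberately tiny stand-in for training a neural network on IMG_DAY/IMG_LIGHT sets; all data and the linear model are illustrative assumptions.

```python
import numpy as np

# Synthetic pair of the same scene: a "day" image and a darkened "night" image.
rng = np.random.default_rng(1)
img_day = rng.uniform(80.0, 200.0, size=(32, 32))   # standard-environment image
img_light = 0.25 * img_day + 5.0                    # its synthetic night counterpart

# Fit day ≈ a * night + b over all pixels (least squares), i.e. "train" the
# conversion so night input maps onto the daytime target.
x = img_light.ravel()
y = img_day.ravel()
A = np.stack([x, np.ones_like(x)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

converted = a * img_light + b   # the "second image" fed to the recognizer
```

Because the synthetic night image is exactly a linear darkening, the fit recovers the inverse transform (gain 4, offset -20) and the converted image matches the daytime target, mirroring how the real conversion unit is trained to close the day/night gap.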
- the above is the configuration of the object identification system 10. According to the object identification system 10, it is possible to recognize an object with high accuracy based on an image obtained in an actual environment different from the standard environment, using the object recognition unit 44 trained in the standard environment.
- the object recognition unit 44 needs to be trained only with images taken in the daytime and does not need images taken at night (or can use far fewer of them), so the cost required for learning can be greatly reduced.
- FIG. 5 is a block diagram of an automobile provided with the object identification system 10.
- the automobile 100 includes headlamps 102L and 102R.
- the camera 20 is built in at least one of the headlamps 102L and 102R.
- the headlamp 102 is located at the foremost end of the vehicle body and is the most advantageous installation location for the camera 20 for detecting surrounding objects.
- the arithmetic processing unit 40 may be built in the headlamp 102 or provided on the vehicle side.
- the conversion unit 42 that generates the second image IMG2 may be provided inside the headlamp 102, and the object recognition unit 44 may be mounted on the vehicle side.
- FIG. 6 is a block diagram showing a vehicular lamp 200 including the object identification system 10.
- the vehicular lamp 200 includes a light source 202, a lighting circuit 204, and an optical system 206. Further, the vehicular lamp 200 is provided with a camera 20 and an arithmetic processing unit 40. Information regarding the object OBJ detected by the arithmetic processing unit 40 is transmitted to the vehicle ECU 104. The vehicle ECU may perform automatic driving based on this information.
- the information regarding the object OBJ detected by the arithmetic processing device 40 may be used for light distribution control of the vehicular lamp 200.
- the lamp ECU 208 generates an appropriate light distribution pattern based on information regarding the type and position of the object OBJ generated by the arithmetic processing device 40.
- the lighting circuit 204 and the optical system 206 operate so that the light distribution pattern generated by the lamp ECU 208 is obtained.
- Modification 1: In the embodiment, day and night were described as the difference in environment, but environments are not limited to these.
- when the angle of view, field of view, line-of-sight direction, distortion, and the like differ greatly between the imaging system including the camera (standard camera) used to capture the learning data and the camera 20 mounted on the object identification system 10, these differences can be regarded as differences in environment.
- in this case, a second image IMG2 may be generated that approximates the image that would be obtained if the first image IMG1 acquired by the camera 20 had been captured by the standard camera.
- in this case, the conversion unit 42 corrects the shape, not the gradation.
- for example, when the camera 20 is built into the headlamp, the image may be distorted by the outer lens.
- the camera that captured the learning data has no such distortion.
- therefore, the first image IMG1 may be converted by the conversion unit 42 so as to reduce the influence of this distortion.
- Modification 2: In training the neural network of the conversion unit 42, the recognition rate of the object recognition unit 44 may be referenced as a parameter, and the neural network of the conversion unit 42 may be optimized so as to increase this recognition rate.
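This recognition-rate-guided optimization can be sketched with a toy recognizer and a one-parameter "conversion unit". The gamma-correction parameter and the grid search below are illustrative assumptions standing in for a neural network and gradient-based training; only the principle (select the conversion that maximizes the downstream recognition rate) comes from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def recognizer(img):
    # Toy "daytime-trained" recognizer: object present iff mean brightness > 0.5.
    return img.mean() > 0.5

# Synthetic night images: pixel values compressed toward darkness (cubed).
objects = [rng.uniform(0.6, 0.9, (8, 8)) ** 3 for _ in range(20)]     # label True
background = [rng.uniform(0.0, 0.3, (8, 8)) ** 3 for _ in range(20)]  # label False

def accuracy(gamma):
    """Recognition rate after applying the candidate conversion img**gamma."""
    imgs = objects + background
    labels = [True] * 20 + [False] * 20
    preds = [recognizer(img ** gamma) for img in imgs]
    return float(np.mean([p == t for p, t in zip(preds, labels)]))

# "Optimize" the conversion parameter against the recognizer's accuracy.
gammas = np.linspace(0.1, 1.0, 10)
best_gamma = max(gammas, key=accuracy)
```

With no conversion (gamma = 1) the darkened object images fall below the recognizer's daytime threshold; the search settles on a brightening gamma that restores the recognition rate, which is the feedback loop this modification describes.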
- Modification 3: The first image IMG1, which is the current frame, may be input to the conversion unit 42 together with past frames continuous with it, to generate the second image IMG2.
- by using a plurality of consecutive frames as input, the conversion unit can perform conversion processing based on temporal feature amounts.
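The multi-frame input can be sketched as stacking the current frame with its immediately preceding frames along a leading channel axis, the usual way a network is given temporal context. The frame count of 3 and the oldest-frame padding are illustrative conventions, not from the patent.

```python
import numpy as np

def stack_frames(history, n: int = 3) -> np.ndarray:
    """Return the n most recent frames as an (n, H, W) array for the
    conversion unit, padding with copies of the oldest available frame
    when fewer than n frames exist yet."""
    frames = list(history[-n:])
    while len(frames) < n:
        frames.insert(0, frames[0])
    return np.stack(frames, axis=0)

# Five dummy 2x2 frames whose pixel value equals the frame index.
video = [np.full((2, 2), float(t)) for t in range(5)]
inp = stack_frames(video, n=3)   # frames 2, 3, 4 stacked as channels
```

The conversion unit then sees frame-to-frame changes (for example, moving headlights versus static reflections) as differences between channels.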
- Modification 4: A TOF camera or LiDAR may be used instead of the camera 20.
- in this case, the output data of the LiDAR or TOF camera may be handled as image data with distance as the pixel value.
- the output data of these distance measuring sensors (three-dimensional sensors) in rain, snow, or dense fog differs from that obtained in fine or cloudy weather.
- therefore, the conversion unit 42 converts the output data of the distance measuring sensor into intermediate data that would be obtained in the environment (fine or cloudy weather) in which the learning data used for training the object recognition unit 44 was acquired. The identification rate can thereby be improved.
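Treating the ranging sensor's output as a distance-valued image, and the weather compensation performed by the conversion unit, can be sketched as follows. The fog model (pixels dropping out to zero) and the mean-fill in-painting are purely illustrative assumptions; a real conversion unit would be a trained network.

```python
import numpy as np

def to_range_image(points, shape):
    """Build image data with distance as the pixel value.
    points: iterable of (row, col, distance_m); cells with no return stay 0."""
    img = np.zeros(shape)
    for r, c, d in points:
        img[r, c] = d
    return img

def compensate_fog(range_img, max_range=50.0):
    """Hypothetical 'conversion unit': fill dropout pixels (value 0) from
    valid neighbours so the data resembles clear-weather intermediate data."""
    out = range_img.copy()
    valid = out > 0
    if valid.any():
        out[~valid] = out[valid].mean()   # crude in-painting by mean distance
    return np.clip(out, 0.0, max_range)

# A 2x2 clear-weather range image, then the same scene with one fog dropout.
clear = to_range_image([(0, 0, 10.0), (0, 1, 12.0),
                        (1, 0, 11.0), (1, 1, 13.0)], (2, 2))
foggy = clear.copy()
foggy[1, 1] = 0.0                 # fog removed this return
restored = compensate_fog(foggy)
```

The restored image approximates what the sensor would have produced in the clear-weather environment of the training data, which is the role the text assigns to the conversion unit.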
- Modification 5: The arithmetic processing device 40 may be configured only by hardware, using an FPGA, a dedicated ASIC (Application Specific Integrated Circuit), or the like.
- Modification 6: Although the in-vehicle object identification system 10 has been described in the embodiment, the application of the present invention is not limited to this; for example, the system may be fixedly installed on a traffic light, a traffic sign, or other traffic infrastructure and applied to fixed-point observation.
- the present invention relates to an object identification system.
- SYMBOLS: 10 ... Object identification system, 20 ... Camera, 40 ... Arithmetic processing device, 42 ... Conversion unit, 44 ... Object recognition unit, IMG1 ... First image, IMG2 ... Second image, 100 ... Automobile, 102 ... Headlamp, 104 ... Vehicle ECU, 200 ... Vehicle lamp, 202 ... Light source, 204 ... Lighting circuit, 206 ... Optical system, 208 ... Lamp ECU.
Abstract
Description
One embodiment disclosed in this specification relates to an arithmetic processing device. The arithmetic processing device recognizes an object based on image data. The arithmetic processing device includes an object recognition unit that identifies an object based on the image data, and a conversion unit, which is a neural network provided upstream of the object recognition unit, that converts a first image acquired by a camera into a second image and inputs the second image to the object recognition unit. By providing an image having a gradation suited to the input of the object recognition unit, the identification rate can be increased.
Hereinafter, the present invention will be described based on preferred embodiments with reference to the drawings. Identical or equivalent constituent elements, members, and processes shown in the drawings are denoted by the same reference numerals, and redundant description is omitted as appropriate. The embodiments are illustrative rather than restrictive of the invention; not all features described in the embodiments, or their combinations, are necessarily essential to the invention.
FIG. 5 is a block diagram of an automobile provided with the object identification system 10. The automobile 100 includes headlamps 102L and 102R. Of the object identification system 10, at least the camera 20 is built into at least one of the headlamps 102L and 102R. The headlamp 102 is located at the foremost end of the vehicle body and is the most advantageous installation location for the camera 20 for detecting surrounding objects. The arithmetic processing device 40 may be built into the headlamp 102 or provided on the vehicle side. For example, of the arithmetic processing device 40, the conversion unit 42 that generates the second image IMG2 may be provided inside the headlamp 102, and the object recognition unit 44 may be mounted on the vehicle side.
In the embodiment, day and night were described as the difference in environment, but environments are not limited to these. When the angle of view, field of view, line-of-sight direction, distortion, and the like differ greatly between the imaging system including the camera (standard camera) used to capture the learning data and the camera 20 mounted on the object identification system 10, these differences can be regarded as differences in environment. In this case, a second image IMG2 may be generated that approximates the image that would be obtained if the first image IMG1 acquired by the camera 20 had been captured by the standard camera. In this case, the conversion unit 42 corrects the shape, not the gradation.
In training the neural network of the conversion unit 42, the recognition rate of the object recognition unit 44 may be referenced as a parameter, and the neural network of the conversion unit 42 may be optimized so as to increase this recognition rate.
The first image IMG1, which is the current frame, may be input to the conversion unit 42 together with past frames continuous with it, to generate the second image IMG2. By using a plurality of consecutive frames as input, the conversion unit can perform conversion processing based on temporal feature amounts.
A TOF camera or LiDAR may be used instead of the camera 20. In this case, the output data of the LiDAR or TOF camera may be handled as image data with distance as the pixel value. The output data of these distance measuring sensors (three-dimensional sensors) in rain, snow, or dense fog differs from that obtained in fine or cloudy weather. Therefore, the conversion unit 42 converts the output data of the distance measuring sensor into intermediate data that would be obtained in the environment (fine or cloudy weather) in which the learning data used for training the object recognition unit 44 was acquired. The identification rate can thereby be improved.
The arithmetic processing device 40 may be configured only by hardware, using an FPGA, a dedicated ASIC (Application Specific Integrated Circuit), or the like.
Although the in-vehicle object identification system 10 has been described in the embodiment, the application of the present invention is not limited to this; for example, the system may be fixedly installed on a traffic light, a traffic sign, or other traffic infrastructure and applied to fixed-point observation.
Claims (11)
- 1. An arithmetic processing device that recognizes an object based on image data, comprising:
an object recognition unit that identifies an object based on the image data; and
a conversion unit, which is a neural network provided upstream of the object recognition unit, that converts a first image acquired by a camera into a second image and inputs the second image to the object recognition unit.
- 2. The arithmetic processing device according to claim 1, wherein the second image is obtained by correcting the gradation of the first image so that it approximates the learning data used for training the object recognition unit.
- 3. The arithmetic processing device according to claim 1 or 2, wherein the second image is an image that would be obtained if the same scene as the first image were photographed in the environment in which the learning data used for training the object recognition unit was photographed.
- 4. The arithmetic processing device according to claim 3, wherein the learning data is photographed in the daytime, and the conversion unit converts the first image photographed at night into the second image as if photographed in the daytime.
- 5. The arithmetic processing device according to any one of claims 1 to 4, wherein, in training the conversion unit, the identification rate of the object recognition unit is referenced, and the neural network of the conversion unit is optimized so as to increase the identification rate.
- 6. The arithmetic processing device according to any one of claims 1 to 5, wherein the conversion unit receives a plurality of consecutive frames as input.
- 7. An object identification system comprising:
a camera; and
the arithmetic processing device according to any one of claims 1 to 6.
- 8. A vehicle lamp comprising the object identification system according to claim 7.
- 9. An automobile comprising:
a camera built into a headlamp; and
the arithmetic processing device according to any one of claims 1 to 6.
- 10. An arithmetic processing device that recognizes an object based on a sensor output acquired by a sensor, comprising:
a conversion unit, configured as a neural network, that converts the sensor output into intermediate data; and
an object recognition unit that identifies an object based on the intermediate data,
wherein the conversion unit converts the sensor output into the intermediate data that would be obtained in the environment in which the learning data used for training the object recognition unit was acquired.
- 11. A learning method for an arithmetic processing device that recognizes an object based on image data acquired by a camera, wherein
the arithmetic processing device comprises:
a conversion unit that converts the image data acquired by the camera; and
an object recognition unit that processes the image data converted by the conversion unit and identifies an object, and
the learning method comprises:
training the object recognition unit using, as learning data, images acquired in a predetermined environment; and
training the conversion unit using sets of an image acquired in the predetermined environment and an image acquired in a different environment.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020512308A JP7268001B2 (ja) | 2018-04-05 | 2019-04-03 | 演算処理装置、オブジェクト識別システム、学習方法、自動車、車両用灯具 |
EP19781869.3A EP3779872A4 (en) | 2018-04-05 | 2019-04-03 | WORK PROCESSING DEVICE, OBJECT IDENTIFICATION SYSTEM, LEARNING PROCEDURE, AUTOMOTIVE AND LIGHTING DEVICE FOR VEHICLE |
CN201980023552.5A CN112005245A (zh) | 2018-04-05 | 2019-04-03 | 运算处理装置、对象识别系统、学习方法、汽车、车辆用灯具 |
US17/061,662 US11676394B2 (en) | 2018-04-05 | 2020-10-02 | Processing device for conversion of images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-073354 | 2018-04-05 | ||
JP2018073354 | 2018-04-05 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/061,662 Continuation US11676394B2 (en) | 2018-04-05 | 2020-10-02 | Processing device for conversion of images |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019194256A1 true WO2019194256A1 (ja) | 2019-10-10 |
Family
ID=68100544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/014890 WO2019194256A1 (ja) | 2018-04-05 | 2019-04-03 | 演算処理装置、オブジェクト識別システム、学習方法、自動車、車両用灯具 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11676394B2 (ja) |
EP (1) | EP3779872A4 (ja) |
JP (1) | JP7268001B2 (ja) |
CN (1) | CN112005245A (ja) |
WO (1) | WO2019194256A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210111717A (ko) * | 2019-12-13 | 2021-09-13 | 임영한 | 인공지능에 의한 교차로 신호등 |
US20220122360A1 (en) * | 2020-10-21 | 2022-04-21 | Amarjot Singh | Identification of suspicious individuals during night in public areas using a video brightening network system |
WO2022250154A1 (ja) * | 2021-05-28 | 2022-12-01 | 京セラ株式会社 | 学習済みモデル生成装置、学習済みモデル生成方法、及び認識装置 |
JPWO2022250153A1 (ja) * | 2021-05-27 | 2022-12-01 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6642474B2 (ja) * | 2017-02-13 | 2020-02-05 | オムロン株式会社 | 状態判定装置、学習装置、状態判定方法及びプログラム |
WO2019194256A1 (ja) * | 2018-04-05 | 2019-10-10 | 株式会社小糸製作所 | 演算処理装置、オブジェクト識別システム、学習方法、自動車、車両用灯具 |
FR3112009B1 (fr) * | 2020-06-30 | 2022-10-28 | St Microelectronics Grenoble 2 | Procédé de conversion d’une image numérique |
US11367289B1 (en) * | 2021-07-16 | 2022-06-21 | Motional Ad Llc | Machine learning-based framework for drivable surface annotation |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11275376A (ja) * | 1998-03-24 | 1999-10-08 | Konica Corp | 色彩データ保持方法およびカラーマネージメント方法 |
JP2008105518A (ja) * | 2006-10-25 | 2008-05-08 | Calsonic Kansei Corp | カメラ内蔵ランプ |
JP2009017157A (ja) * | 2007-07-04 | 2009-01-22 | Omron Corp | 画像処理装置および方法、並びに、プログラム |
JP2009098023A (ja) | 2007-10-17 | 2009-05-07 | Toyota Motor Corp | 物体検出装置及び物体検出方法 |
JP2017056935A (ja) | 2015-09-14 | 2017-03-23 | トヨタ モーター エンジニアリング アンド マニュファクチャリング ノース アメリカ,インコーポレイティド | 3dセンサにより検出されたオブジェクトの分類 |
JP2017533482A (ja) * | 2015-09-10 | 2017-11-09 | バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド | 車線データの処理方法、装置、記憶媒体及び機器 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05127698A (ja) * | 1991-10-30 | 1993-05-25 | Ricoh Co Ltd | ニユーラルネツトワークによるパターン変換装置及び画像パターン復元装置 |
JP6440303B2 (ja) * | 2014-12-02 | 2018-12-19 | エヌ・ティ・ティ・コムウェア株式会社 | 対象認識装置、対象認識方法、およびプログラム |
KR20180092778A (ko) * | 2017-02-10 | 2018-08-20 | 한국전자통신연구원 | 실감정보 제공 장치, 영상분석 서버 및 실감정보 제공 방법 |
KR101818129B1 (ko) * | 2017-04-25 | 2018-01-12 | 동국대학교 산학협력단 | 나선 신경망 기법을 이용한 보행자 인식 장치 및 방법 |
CN107767408B (zh) | 2017-11-09 | 2021-03-12 | 京东方科技集团股份有限公司 | 图像处理方法、处理装置和处理设备 |
US10984286B2 (en) * | 2018-02-02 | 2021-04-20 | Nvidia Corporation | Domain stylization using a neural network model |
WO2019194256A1 (ja) * | 2018-04-05 | 2019-10-10 | 株式会社小糸製作所 | 演算処理装置、オブジェクト識別システム、学習方法、自動車、車両用灯具 |
-
2019
- 2019-04-03 WO PCT/JP2019/014890 patent/WO2019194256A1/ja active Application Filing
- 2019-04-03 CN CN201980023552.5A patent/CN112005245A/zh active Pending
- 2019-04-03 EP EP19781869.3A patent/EP3779872A4/en not_active Withdrawn
- 2019-04-03 JP JP2020512308A patent/JP7268001B2/ja active Active
-
2020
- 2020-10-02 US US17/061,662 patent/US11676394B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11275376A (ja) * | 1998-03-24 | 1999-10-08 | Konica Corp | 色彩データ保持方法およびカラーマネージメント方法 |
JP2008105518A (ja) * | 2006-10-25 | 2008-05-08 | Calsonic Kansei Corp | カメラ内蔵ランプ |
JP2009017157A (ja) * | 2007-07-04 | 2009-01-22 | Omron Corp | 画像処理装置および方法、並びに、プログラム |
JP2009098023A (ja) | 2007-10-17 | 2009-05-07 | Toyota Motor Corp | 物体検出装置及び物体検出方法 |
JP2017533482A (ja) * | 2015-09-10 | 2017-11-09 | バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド | 車線データの処理方法、装置、記憶媒体及び機器 |
JP2017056935A (ja) | 2015-09-14 | 2017-03-23 | トヨタ モーター エンジニアリング アンド マニュファクチャリング ノース アメリカ,インコーポレイティド | 3dセンサにより検出されたオブジェクトの分類 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3779872A4 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210111717A (ko) * | 2019-12-13 | 2021-09-13 | 임영한 | 인공지능에 의한 교차로 신호등 |
KR102385575B1 (ko) * | 2019-12-13 | 2022-04-11 | 임영한 | 인공지능에 의한 교차로 신호등 |
US20220122360A1 (en) * | 2020-10-21 | 2022-04-21 | Amarjot Singh | Identification of suspicious individuals during night in public areas using a video brightening network system |
JPWO2022250153A1 (ja) * | 2021-05-27 | 2022-12-01 | ||
WO2022250153A1 (ja) * | 2021-05-27 | 2022-12-01 | 京セラ株式会社 | 学習済みモデル生成装置、学習済みモデル生成方法、及び認識装置 |
JP7271810B2 (ja) | 2021-05-27 | 2023-05-11 | 京セラ株式会社 | 学習済みモデル生成装置、学習済みモデル生成方法、及び認識装置 |
WO2022250154A1 (ja) * | 2021-05-28 | 2022-12-01 | 京セラ株式会社 | 学習済みモデル生成装置、学習済みモデル生成方法、及び認識装置 |
JPWO2022250154A1 (ja) * | 2021-05-28 | 2022-12-01 | ||
JP7271809B2 (ja) | 2021-05-28 | 2023-05-11 | 京セラ株式会社 | 学習済みモデル生成装置、学習済みモデル生成方法、及び認識装置 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2019194256A1 (ja) | 2021-04-01 |
EP3779872A1 (en) | 2021-02-17 |
US11676394B2 (en) | 2023-06-13 |
EP3779872A4 (en) | 2021-05-19 |
JP7268001B2 (ja) | 2023-05-02 |
US20210027102A1 (en) | 2021-01-28 |
CN112005245A (zh) | 2020-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019194256A1 (ja) | 演算処理装置、オブジェクト識別システム、学習方法、自動車、車両用灯具 | |
AU2022203095B2 (en) | Real-time HDR video for vehicle control | |
JP6176028B2 (ja) | 車両制御システム、画像センサ | |
JP5680573B2 (ja) | 車両の走行環境認識装置 | |
JP4970516B2 (ja) | 周囲確認支援装置 | |
US20190087944A1 (en) | System and method for image presentation by a vehicle driver assist module | |
JP4491453B2 (ja) | 赤外画像と視覚画像を周辺部に依存して融合させることにより車両の周辺部を可視化するための方法及び装置 | |
CN101088027B (zh) | 用于机动车的立体摄像机 | |
US20120062746A1 (en) | Image Processing Apparatus | |
EP1513103A2 (en) | Image processing system and vehicle control system | |
TWI744876B (zh) | 圖像辨識裝置、固體攝像裝置及圖像辨識方法 | |
JP2020136958A (ja) | イベント信号検出センサ及び制御方法 | |
JP6468568B2 (ja) | 物体認識装置、モデル情報生成装置、物体認識方法、および物体認識プログラム | |
JP2014157433A (ja) | 視差検出装置、視差検出方法 | |
CN104008518B (zh) | 对象检测设备 | |
CN110536814B (zh) | 摄像机装置及与周围环境相适应地检测车辆周围环境区域的方法 | |
US20200118280A1 (en) | Image Processing Device | |
WO2019163315A1 (ja) | 情報処理装置、撮像装置、及び撮像システム | |
JP6789151B2 (ja) | カメラ装置、検出装置、検出システムおよび移動体 | |
CN113632450A (zh) | 摄影系统及图像处理装置 | |
WO2023181662A1 (ja) | 測距装置および測距方法 | |
JP7253693B2 (ja) | 画像処理装置 | |
US20230394844A1 (en) | System for avoiding accidents caused by wild animals crossing at dusk and at night | |
US20230308779A1 (en) | Information processing device, information processing system, information processing method, and information processing program | |
WO2021192714A1 (ja) | レンダリングシステム及び自動運転検証システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19781869 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020512308 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2019781869 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2019781869 Country of ref document: EP Effective date: 20201105 |