WO2021107171A1 - Deep learning processing apparatus and method for multiple sensors for vehicle - Google Patents

Deep learning processing apparatus and method for multiple sensors for vehicle

Info

Publication number
WO2021107171A1
WO2021107171A1 (PCT/KR2019/016338)
Authority
WO
WIPO (PCT)
Prior art keywords
image
sensor
features
input
deep learning
Prior art date
Application number
PCT/KR2019/016338
Other languages
French (fr)
Korean (ko)
Inventor
이상설
장성준
박종희
Original Assignee
전자부품연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 전자부품연구원 (Korea Electronics Technology Institute)
Publication of WO2021107171A1 publication Critical patent/WO2021107171A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141Control of illumination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Definitions

  • The present invention relates to image processing and SoC (System on Chip) technology using artificial intelligence, and more particularly, to an apparatus and method for receiving images from heterogeneous sensors and processing them through deep learning.
  • The present invention has been devised to solve the above problems. An object of the present invention is to provide a lightweight embedded deep learning network structure that detects and classifies objects using images generated by heterogeneous sensors installed in a vehicle as inputs, and a processing apparatus and method using the same.
  • According to an embodiment of the present invention, a multi-image processing method comprises: a first input step of receiving a first image of a first type; a second input step of receiving a second image of a second type; a third input step of inputting the received first image and second image together into an artificial intelligence model; a step of extracting features of the first image and features of the second image with the artificial intelligence model; and a detection and classification step of detecting an object and classifying the detected object using the extracted features of the first image and the second image.
  • The multi-image processing method may further comprise estimating illuminance using the first image, and the third input step may be a step of inputting the estimated illuminance information into the artificial intelligence model together with the first image and the second image.
  • The extracting step may extract the features of the first image and the features of the second image by further using the estimated illuminance information.
  • The multi-image processing method may further include controlling the WDR (Wide Dynamic Range) and the illumination of a third sensor that generates a third image, which is an image of the vehicle interior and a blind spot, with reference to the estimated illuminance information.
  • The multi-image processing method according to the present invention may further include detecting an obstacle in a blind spot using the third image generated by the third sensor.
  • The multi-image processing method according to the present invention may further include recognizing a situation and determining a risk based on the object detected and classified in the detection and classification step and the obstacle detected in the detecting step.
  • The first image may be an RGB image, and the second image may be an IR image.
  • According to another aspect of the present invention, there is provided a multi-sensor processing apparatus comprising: a first sensor generating a first image of a first type; a second sensor generating a second image of a second type; and an object classifier/detector that inputs the first image generated by the first sensor and the second image generated by the second sensor together into an artificial intelligence model, extracts the features of the first image and the second image with the artificial intelligence model, detects an object using the extracted features, and classifies the detected object.
  • FIG. 1 is a diagram showing the structure of a deep learning network for single-sensor-based object detection
  • FIG. 2 is a diagram showing the structure of a deep learning network for object detection based on multiple sensors for a vehicle according to an embodiment of the present invention
  • FIG. 3 is a block diagram of a multi-sensor processing apparatus according to another embodiment of the present invention.
  • FIG. 5 is a flowchart provided to explain a multi-sensor processing method according to another embodiment of the present invention.
  • A deep learning processing apparatus and method for multiple sensors for a vehicle are provided. Specifically, a new structure of a deep learning network is presented that detects and classifies objects using RGB images and IR images, generated by heterogeneous sensors, as inputs.
  • The deep learning network presented through embodiments of the present invention is a lightweight embedded deep learning structure capable of multi-sensor-based object recognition, and is applicable to mobile semiconductors.
  • Furthermore, a method is provided for generating a warning by recognizing dangerous situations in blind spots inside and outside the vehicle.
  • FIG. 1 presents the structure of a deep learning network for single-sensor-based object detection.
  • In the illustrated network, a deep learning network for feature generation and object classification is configured separately for each sensor.
  • FIG. 2 is a diagram illustrating the structure of a deep learning network for object detection based on multiple sensors for a vehicle according to an embodiment of the present invention.
  • The RGB image generated by the RGB sensor and the IR image generated by the IR sensor are input to the deep learning network for object detection through separate channels.
  • The deep learning network for object detection estimates the illuminance from the input RGB image, and inputs the estimated illuminance information into the fusion layer together with the RGB image and the IR image.
  • The deep learning network for object detection thus extracts the features of the RGB image and the IR image with a single network, and in extracting the features, the estimated illuminance information is also used, so that the image features are generated while compensating for illuminance effects.
  • This part has the advantage of scalability as the number of sensors increases, and if the network is appropriately modified, fusion at various layers is possible.
  • The deep learning network for object detection then detects the objects in the image and classifies the detected objects using the extracted features of the RGB image and the IR image.
  • The detection and classification results are presented as bounding boxes (BB) marking the objects in the image and scores indicating accuracy/confidence.
  • FIG. 3 is a block diagram of a multi-sensor processing apparatus according to another embodiment of the present invention.
  • The multi-sensor processing apparatus according to an embodiment of the present invention performs additional application processing, such as situation awareness and alarm generation, using the deep learning network for object detection shown in FIG. 2.
  • The multi-sensor processing apparatus performing these functions, as shown in FIG. 3, includes external sensors 110-1, 110-2, ..., 110-n, an internal sensor 120, a sensor information extraction unit 130, a calibration unit 140, an internal sensor controller 150, an object detector/classifier 160, an obstacle detector 170, and a recognition/determination unit 180.
  • The external sensors 110-1, 110-2, ..., 110-n are sensors installed outside the vehicle that generate heterogeneous images, such as an RGB image, an IR image, ..., and a Lidar image. There is no limitation on the number and types of the sensors 110-1, 110-2, ..., 110-n, and they may be freely implemented as needed.
  • The internal sensor 120 is a wide-angle sensor installed inside the vehicle; in addition to the vehicle interior, it can generate images of the vehicle exterior, in particular of blind spots.
  • The internal sensor 120 may be installed slightly forward of the center of the vehicle's interior ceiling, and may be implemented as an RGBIR sensor.
  • If the angle of view of the internal sensor 120 is implemented to be 190 degrees or more, images can be generated not only of the vehicle interior but also of the blind spots appearing at the rear sides of the vehicle.
  • The sensor information extraction unit 130 extracts intrinsic and extrinsic parameters for the sensors 110-1, 110-2, ..., 110-n, and 120.
  • The calibration unit 140 calibrates the sensors 110-1, 110-2, ..., 110-n, and 120 with reference to the parameters extracted by the sensor information extraction unit 130.
  • The object detector/classifier 160 is a deep learning network that detects and classifies objects present in the images using the heterogeneous images generated by the external sensors 110-1, 110-2, ..., 110-n.
  • The object detector/classifier 160 may be implemented as the deep learning network for object detection shown in FIG. 2 described above.
  • The internal sensor controller 150 actively adjusts the intensity of the IR illumination and the WDR (Wide Dynamic Range), based on the parameters of the internal sensor 120 extracted by the sensor information extraction unit 130 and the illuminance information estimated by the object detector/classifier 160.
  • In this way, the exterior portion of the image generated by the internal sensor 120 can be made robust to the external environment (backlight, low-light conditions, etc.).
  • The obstacle detector 170 analyzes the blind-spot images generated by the internal sensor 120 to detect obstacles.
  • The recognition/determination unit 180 recognizes the current situation and determines the risk based on the object detection/classification result from the object detector/classifier 160 and the blind-spot obstacle detection result from the obstacle detector 170, and generates an alarm when necessary.
  • FIG. 5 is a flowchart provided to explain a multi-sensor processing method according to another embodiment of the present invention.
  • First, the sensor information extraction unit 130 extracts intrinsic and extrinsic parameters for the sensors 110-1, 110-2, ..., 110-n, and 120 (S210),
  • and the calibration unit 140 calibrates the sensors 110-1, 110-2, ..., 110-n, and 120 with reference to the parameters extracted in step S210 (S220).
  • The object detector/classifier 160 inputs the heterogeneous images generated by the external sensors 110-1, 110-2, ..., 110-n into the deep learning network for object detection, and detects and classifies the objects present in the images (S230).
  • The internal sensor controller 150 actively adjusts the intensity of the IR illumination and the WDR of the internal sensor 120 based on the illuminance information estimated by the object detector/classifier 160 (S240).
  • The obstacle detector 170 analyzes the blind-spot images generated by the internal sensor 120 to detect obstacles (S250).
  • The recognition/determination unit 180 recognizes the current situation based on the object detection/classification result from step S230 and the blind-spot obstacle detection result from step S250, determines the risk, and generates an alarm if necessary (S260).
  • In embodiments of the present invention, a deep learning network structure that minimizes the enormous amount of computation was applied, enabling multi-sensor-based object recognition at a level implementable in embedded hardware.
  • In particular, a structure is presented that remains applicable even when a new sensor is added alongside the heterogeneous sensors, together with a flexible deep learning device and a model that can be maintained even with new types of sensor input.
  • Embodiments of the present invention can be applied to various external sensor interfaces by applying fusion technology for deep learning processing between heterogeneous sensors in addition to single-sensor-based processing.
  • In addition, a method is presented for monitoring the outside of the vehicle, specifically the situations and dangers in blind spots, by utilizing the sensor installed inside the vehicle.
  • The technical idea of the present invention can also be applied to a computer-readable recording medium containing a computer program for performing the functions of the apparatus and method according to the present embodiments.
  • The technical ideas according to various embodiments of the present invention may be implemented in the form of computer-readable code recorded on a computer-readable recording medium.
  • The computer-readable recording medium may be any data storage device that can be read by a computer and can store data.
  • The computer-readable recording medium may be a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, hard disk drive, or the like.
  • Computer-readable code or a program stored in the computer-readable recording medium may be transmitted over a network connecting computers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a lightweight embedded deep learning network structure for detecting and classifying an object using, as inputs, images generated by heterogeneous sensors installed in a vehicle, and a processing apparatus and method using the same. A multi-image processing method according to an embodiment of the present invention comprises: a first input step of receiving a first image of a first type; a second input step of receiving a second image of a second type; a third input step of inputting the received first and second images together into an artificial intelligence model; a step of extracting, via the artificial intelligence model, features of the first image and features of the second image; and a detection and classification step of detecting an object and classifying the detected object using the extracted features of the first and second images. Accordingly, an object can be detected and classified by receiving, through a lightweight embedded deep learning network, images generated by heterogeneous sensors installed in a vehicle.

Description

Deep learning processing apparatus and method for multiple sensors for a vehicle
The present invention relates to image processing and SoC (System on Chip) technology using artificial intelligence, and more particularly, to an apparatus and method for receiving images from heterogeneous sensors and processing them through deep learning.
Research and development is in progress on numerous applications, such as object detection outside the vehicle, lane detection, and road detection, using feature information extracted from image data generated by cameras and the like.
In particular, for application to autonomous vehicles, most development effort is devoted to external situation awareness by combining external RGB cameras, stereo cameras, ToF sensors, Lidar, and the like.
However, in-vehicle camera systems still make only weak use of image-based sensors, limited to applications such as driver state detection. In particular, since there is no processor dedicated to signal processing for multi-function sensors such as the new RCCC and RGBIR alongside RGB image sensors, expensive DSPs and GPUs are used.
Moreover, there is no ultra-lightweight deep learning hardware platform applicable to in-vehicle systems.
The present invention has been devised to solve the above problems. An object of the present invention is to provide a lightweight embedded deep learning network structure that detects and classifies objects using images generated by heterogeneous sensors installed in a vehicle as inputs, and a processing apparatus and method using the same.
According to an embodiment of the present invention for achieving the above object, a multi-image processing method comprises: a first input step of receiving a first image of a first type; a second input step of receiving a second image of a second type; a third input step of inputting the received first image and second image together into an artificial intelligence model; a step of extracting features of the first image and features of the second image with the artificial intelligence model; and a detection and classification step of detecting an object and classifying the detected object using the extracted features of the first image and the second image.
The multi-image processing method according to the present invention may further comprise estimating illuminance using the first image, and the third input step may be a step of inputting the estimated illuminance information into the artificial intelligence model together with the first image and the second image.
The extracting step may be a step of extracting the features of the first image and the features of the second image by further using the estimated illuminance information.
The multi-image processing method according to the present invention may further include controlling the WDR (Wide Dynamic Range) and the illumination of a third sensor that generates a third image, which is an image of the vehicle interior and a blind spot, with reference to the estimated illuminance information.
The multi-image processing method according to the present invention may further include detecting an obstacle in the blind spot using the third image generated by the third sensor.
The multi-image processing method according to the present invention may further include recognizing a situation and determining a risk based on the object detected and classified in the detection and classification step and the obstacle detected in the detecting step.
The first image may be an RGB image, and the second image may be an IR image.
According to another aspect of the present invention, there is provided a multi-sensor processing apparatus comprising: a first sensor generating a first image of a first type; a second sensor generating a second image of a second type; and an object classifier/detector that inputs the first image generated by the first sensor and the second image generated by the second sensor together into an artificial intelligence model, extracts the features of the first image and the features of the second image with the artificial intelligence model, detects an object using the extracted features of the first image and the second image, and classifies the detected object.
As described above, according to embodiments of the present invention, objects can be detected and classified by receiving images generated by heterogeneous sensors installed in a vehicle through a lightweight embedded deep learning network.
In addition, according to embodiments of the present invention, awareness of external situations and risk monitoring become possible using a sensor installed inside the vehicle.
FIG. 1 is a diagram showing the structure of a deep learning network for single-sensor-based object detection;
FIG. 2 is a diagram showing the structure of a deep learning network for object detection based on multiple sensors for a vehicle according to an embodiment of the present invention;
FIG. 3 is a block diagram of a multi-sensor processing apparatus according to another embodiment of the present invention;
FIG. 4 is a diagram showing an installation example of the internal sensor; and
FIG. 5 is a flowchart provided to explain a multi-sensor processing method according to another embodiment of the present invention.
Hereinafter, the present invention will be described in more detail with reference to the drawings.
In embodiments of the present invention, a deep learning processing apparatus and method for multiple sensors for a vehicle are presented. Specifically, a new structure of a deep learning network is presented that detects and classifies objects using RGB images and IR images, generated by heterogeneous sensors, as inputs.
The deep learning network presented through embodiments of the present invention is a lightweight embedded deep learning structure capable of multi-sensor-based object recognition, and is applicable to mobile semiconductors.
Furthermore, embodiments of the present invention present a method for generating a warning by recognizing dangerous situations even in blind spots inside and outside the vehicle.
FIG. 1 presents the structure of a deep learning network for single-sensor-based object detection. In the illustrated network, a deep learning network for feature generation and object classification is configured separately for each sensor.
Furthermore, a separate layer for illuminance prediction must be added, and the final result must be extracted by fusing the individual deep-learning results computed from the images generated by each sensor.
Since deep learning processing is performed twice, this structure is rather unsuitable for implementation as an embedded system on a mobile platform.
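For contrast, a late-fusion arrangement in the spirit of FIG. 1 might look as follows. This is only an illustrative sketch: the per-sensor backbones, layer sizes, and the score-averaging merge are assumptions used to show why two full networks must run per frame.

```python
import torch
import torch.nn as nn

def make_backbone(in_ch: int) -> nn.Sequential:
    """One full feature extractor per sensor, as in FIG. 1 (sizes assumed)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))

rgb_net, ir_net = make_backbone(3), make_backbone(1)   # two separate networks
rgb_scores = rgb_net(torch.randn(1, 3, 256, 256))      # first full pass
ir_scores = ir_net(torch.randn(1, 1, 256, 256))        # second full pass
fused = (rgb_scores + ir_scores) / 2                   # fused only at the end
```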
FIG. 2 is a diagram illustrating the structure of a deep learning network for object detection based on multiple sensors for a vehicle according to an embodiment of the present invention.
As shown, the RGB image generated by the RGB sensor and the IR image generated by the IR sensor are input to the deep learning network for object detection through separate channels.
The deep learning network for object detection estimates the light condition (illuminance) from the input RGB image, and inputs the estimated illuminance information into the fusion layer together with the RGB image and the IR image.
Thereby, the deep learning network for object detection extracts the features of the RGB image and the IR image with a single network, and in extracting the features, the estimated illuminance information is also used, so that the image features are generated while compensating for illuminance effects.
This part has the advantage of scalability as the number of sensors increases, and if the network is appropriately modified, fusion at various layers is possible.
Next, the deep learning network for object detection detects the objects present in the image and classifies the detected objects using the extracted features of the RGB image and the IR image. The detection and classification results are presented as bounding boxes (BB) marking the objects in the image and scores indicating accuracy/confidence.
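To make the data flow concrete, the following is a minimal PyTorch sketch of the early-fusion idea described above. It is not the network disclosed in the drawings: the `FusionDetector` name, the illuminance-head design, the layer sizes, and the single-scale detection head are all placeholders chosen for brevity.

```python
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    """Sketch of FIG. 2's idea: RGB and IR share one backbone, with an
    illuminance estimate injected at the fusion layer (all sizes assumed)."""
    def __init__(self, num_classes: int = 10, num_anchors: int = 3):
        super().__init__()
        # Illuminance (light condition) is estimated from the RGB input only.
        self.illum_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(3 * 8 * 8, 1), nn.Sigmoid())    # scalar in [0, 1]
        # Fusion layer: RGB (3) + IR (1) + broadcast illuminance (1) = 5 ch.
        self.fusion = nn.Conv2d(5, 32, kernel_size=3, padding=1)
        self.backbone = nn.Sequential(                 # one shared extractor
            nn.ReLU(), nn.Conv2d(32, 64, 3, stride=2, padding=1),
            nn.ReLU(), nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        # Head: per anchor, 4 box coordinates + 1 score + class logits.
        self.det_head = nn.Conv2d(128, num_anchors * (5 + num_classes), 1)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor):
        illum = self.illum_head(rgb)                           # (N, 1)
        illum_map = illum[:, :, None, None].expand(-1, 1, *rgb.shape[2:])
        x = torch.cat([rgb, ir, illum_map], dim=1)             # early fusion
        feats = self.backbone(self.fusion(x))
        return self.det_head(feats), illum   # boxes/scores + illuminance

# Single forward pass over dummy RGB and IR frames of the same resolution.
preds, illum = FusionDetector()(torch.randn(1, 3, 256, 256),
                                torch.randn(1, 1, 256, 256))
```

Because the two images share one backbone, the network runs a single deep learning pass per frame, which is what makes this structure lighter than the per-sensor networks of FIG. 1.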
FIG. 3 is a block diagram of a multi-sensor processing apparatus according to another embodiment of the present invention. The multi-sensor processing apparatus according to an embodiment of the present invention performs additional application processing, such as situation awareness and alarm generation, using the deep learning network for object detection shown in FIG. 2.
The multi-sensor processing apparatus according to an embodiment of the present invention that performs these functions, as shown in FIG. 3, includes external sensors 110-1, 110-2, ..., 110-n, an internal sensor 120, a sensor information extraction unit 130, a calibration unit 140, an internal sensor controller 150, an object detector/classifier 160, an obstacle detector 170, and a recognition/determination unit 180.
The external sensors 110-1, 110-2, ..., 110-n are sensors installed outside the vehicle that generate heterogeneous images, such as an RGB image, an IR image, ..., and a Lidar image. There is no limitation on the number and types of the sensors 110-1, 110-2, ..., 110-n, and they may be freely implemented as needed.
The internal sensor 120 is a wide-angle sensor installed inside the vehicle; in addition to the vehicle interior, it can generate images of the vehicle exterior, in particular of blind spots.
FIG. 4 is a diagram showing an installation example of the internal sensor 120. As shown, the internal sensor 120 may be installed slightly forward of the center of the vehicle's interior ceiling, and may be implemented as an RGBIR sensor.
If the angle of view of the internal sensor 120 is implemented to be 190 degrees or more, images can be generated not only of the vehicle interior but also of the blind spots appearing at the rear sides of the vehicle.
The sensor information extraction unit 130 extracts intrinsic and extrinsic parameters for the sensors 110-1, 110-2, ..., 110-n, and 120. The calibration unit 140 calibrates the sensors 110-1, 110-2, ..., 110-n, and 120 with reference to the parameters extracted by the sensor information extraction unit 130.
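The patent does not prescribe how the intrinsic and extrinsic parameters are obtained. As one common approach, the sketch below calibrates a camera-type sensor from checkerboard captures with OpenCV; the board dimensions and the image folder are assumptions for illustration.

```python
import glob
import cv2
import numpy as np

# Illustrative checkerboard calibration; the patent does not prescribe a method.
BOARD = (9, 6)                      # inner corners of an assumed checkerboard
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/sensor1/*.png"):    # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix holds the intrinsic parameters; rvecs/tvecs give the
# extrinsics of each view, which the calibration unit can use to align sensors.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```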
The object detector/classifier 160 is a deep learning network that detects and classifies objects present in the images using the heterogeneous images generated by the external sensors 110-1, 110-2, ..., 110-n. The object detector/classifier 160 may be implemented as the deep learning network for object detection shown in FIG. 2 described above.
The internal sensor controller 150 actively adjusts the intensity of the IR illumination and the WDR (Wide Dynamic Range) based on the parameters of the internal sensor 120 extracted by the sensor information extraction unit 130 and the illuminance information estimated by the object detector/classifier 160.
Inside the vehicle, an RGBIR sensor that does not use an IR cut filter suffers, by the nature of its signal processing, from color bias at normal illuminance and color noise in low-light environments, in a scene where objects with different IR transmittance and reflectance are mixed; uniform image quality is therefore secured through active adjustment of the illumination and shutter by the internal sensor controller 150.
In this way, the exterior portion of the image generated by the internal sensor 120 can be made robust to the external environment (backlight, low-light conditions, etc.).
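The control policy of the internal sensor controller 150 is not detailed in the patent; below is a minimal rule-based sketch, assuming an illuminance estimate normalized to [0, 1] and hypothetical thresholds, value ranges, and a `SensorSettings` structure.

```python
from dataclasses import dataclass

@dataclass
class SensorSettings:
    ir_intensity: float   # 0.0 (off) .. 1.0 (maximum IR illumination)
    wdr_enabled: bool
    shutter_us: int       # exposure time in microseconds

def control_internal_sensor(illuminance: float) -> SensorSettings:
    """Map an illuminance estimate in [0, 1] (0 = dark, 1 = bright) to
    sensor settings; thresholds are illustrative, not from the patent."""
    if illuminance < 0.2:    # low light: raise IR power, lengthen exposure
        return SensorSettings(ir_intensity=1.0, wdr_enabled=False,
                              shutter_us=30000)
    if illuminance > 0.8:    # backlight/bright: enable WDR, shorten shutter
        return SensorSettings(ir_intensity=0.0, wdr_enabled=True,
                              shutter_us=2000)
    return SensorSettings(ir_intensity=0.3, wdr_enabled=True,
                          shutter_us=10000)
```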
The obstacle detector 170 analyzes the blind-spot images generated by the internal sensor 120 to detect obstacles.
The recognition/determination unit 180 recognizes the current situation and determines the risk based on the object detection/classification result from the object detector/classifier 160 and the blind-spot obstacle detection result from the obstacle detector 170, and generates an alarm when necessary.
FIG. 5 is a flowchart provided to explain a multi-sensor processing method according to another embodiment of the present invention.
As shown, first, the sensor information extraction unit 130 extracts intrinsic and extrinsic parameters for the sensors 110-1, 110-2, ..., 110-n, and 120 (S210), and the calibration unit 140 calibrates the sensors 110-1, 110-2, ..., 110-n, and 120 with reference to the parameters extracted in step S210 (S220).
The object detector/classifier 160 inputs the heterogeneous images generated by the external sensors 110-1, 110-2, ..., 110-n into the deep learning network for object detection, and detects and classifies the objects present in the images (S230).
The internal sensor controller 150 actively adjusts the intensity of the IR illumination and the WDR of the internal sensor 120 based on the illuminance information estimated by the object detector/classifier 160 (S240).
The obstacle detector 170 analyzes the blind-spot images generated by the internal sensor 120 to detect obstacles (S250).
The recognition/determination unit 180 recognizes the current situation based on the object detection/classification result from step S230 and the blind-spot obstacle detection result from step S250, determines the risk, and generates an alarm if necessary (S260).
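How step S260 combines the two detection streams into a risk decision is left open by the patent; the sketch below shows one plausible rule-based reduction, in which the `Detection` fields, the confidence threshold, and the set of dangerous classes are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                  # classified object category
    score: float                # detection confidence from the network
    in_blind_spot: bool = False

DANGEROUS = {"pedestrian", "cyclist", "vehicle"}   # assumed risk classes

def determine_risk(objects: list[Detection],
                   obstacles: list[Detection],
                   threshold: float = 0.5) -> bool:
    """Raise an alarm if any confident dangerous object is detected outside,
    or any confident obstacle is found in a blind spot (step S260)."""
    for det in objects:
        if det.label in DANGEROUS and det.score >= threshold:
            return True
    return any(o.score >= threshold for o in obstacles)

if determine_risk([Detection("pedestrian", 0.9)],
                  [Detection("obstacle", 0.7, in_blind_spot=True)]):
    print("ALERT: dangerous situation detected")
```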
So far, preferred embodiments of the deep learning processing apparatus and method for multiple vehicle sensors have been described in detail.
In embodiments of the present invention, a deep learning network structure that minimizes the enormous amount of computation was applied, enabling multi-sensor-based object recognition at a level implementable in embedded hardware.
In particular, a structure is presented that remains applicable even when a new sensor is added alongside the heterogeneous sensors, together with a flexible deep learning device and a model that can be maintained even with new types of sensor input.
Embodiments of the present invention can be applied to various external sensor interfaces by applying fusion technology for deep learning processing between heterogeneous sensors in addition to single-sensor-based processing.
In addition, embodiments of the present invention present a method for monitoring the outside of the vehicle, specifically the situations and dangers in blind spots, by utilizing the sensor installed inside the vehicle.
Meanwhile, it goes without saying that the technical idea of the present invention can also be applied to a computer-readable recording medium containing a computer program for performing the functions of the apparatus and method according to the present embodiments. In addition, the technical ideas according to various embodiments of the present invention may be implemented in the form of computer-readable code recorded on a computer-readable recording medium. The computer-readable recording medium may be any data storage device that can be read by a computer and can store data. For example, the computer-readable recording medium may be a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, hard disk drive, or the like. In addition, computer-readable code or a program stored in the computer-readable recording medium may be transmitted over a network connecting computers.
In addition, although preferred embodiments of the present invention have been shown and described above, the present invention is not limited to the specific embodiments described above; various modifications may be made by those of ordinary skill in the art to which the invention pertains without departing from the gist of the present invention as claimed in the claims, and such modifications should not be understood separately from the technical spirit or prospect of the present invention.

Claims (8)

  1. A multi-image processing method comprising:
     a first input step of receiving a first image of a first type;
     a second input step of receiving a second image of a second type;
     a third input step of inputting the received first image and second image together into an artificial intelligence model;
     a step of extracting features of the first image and features of the second image with the artificial intelligence model; and
     a detection and classification step of detecting an object and classifying the detected object using the extracted features of the first image and the second image.
  2. The method according to claim 1, further comprising:
     estimating illuminance using the first image,
     wherein the third input step inputs the estimated illuminance information into the artificial intelligence model together with the first image and the second image.
  3. The method according to claim 2,
     wherein the extracting step extracts the features of the first image and the features of the second image by further using the estimated illuminance information.
  4. The method according to claim 2, further comprising:
     controlling a WDR (Wide Dynamic Range) and illumination of a third sensor that generates a third image, which is an image of the vehicle interior and a blind spot, with reference to the estimated illuminance information.
  5. The method according to claim 4, further comprising:
     detecting an obstacle in the blind spot using the third image generated by the third sensor.
  6. The method according to claim 5, further comprising:
     recognizing a situation and determining a risk based on the object detected and classified in the detection and classification step and the obstacle detected in the detecting step.
  7. The method according to claim 1,
     wherein the first image is an RGB image and the second image is an IR image.
  8. A multi-sensor processing apparatus comprising:
     a first sensor generating a first image of a first type;
     a second sensor generating a second image of a second type; and
     an object classifier/detector that inputs the first image generated by the first sensor and the second image generated by the second sensor together into an artificial intelligence model, extracts features of the first image and features of the second image with the artificial intelligence model, detects an object using the extracted features of the first image and the second image, and classifies the detected object.
PCT/KR2019/016338 2019-11-26 2019-11-26 Deep learning processing apparatus and method for multiple sensors for vehicle WO2021107171A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190152984A KR102333107B1 (en) 2019-11-26 2019-11-26 Deep Learning Processing Apparatus and Method for Multi-Sensor on Vehicle
KR10-2019-0152984 2019-11-26

Publications (1)

Publication Number Publication Date
WO2021107171A1 true WO2021107171A1 (en) 2021-06-03

Family

ID=76130605

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/016338 WO2021107171A1 (en) 2019-11-26 2019-11-26 Deep learning processing apparatus and method for multiple sensors for vehicle

Country Status (2)

Country Link
KR (1) KR102333107B1 (en)
WO (1) WO2021107171A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537284B (en) * 2021-06-04 2023-01-24 中国人民解放军战略支援部队信息工程大学 Deep learning implementation method and system based on mimicry mechanism
KR20230003953A (en) * 2021-06-30 2023-01-06 한국전자기술연구원 Vehicle lightweight deep learning processing device and method applying environment variance adaptive feature generator
KR102610353B1 (en) * 2021-10-05 2023-12-06 한국전자기술연구원 Method and device for detecting objects from depth images
KR102633340B1 (en) * 2021-11-12 2024-02-05 한국전자기술연구원 Lightweight deep learning device with multi-channel information extractor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120054325A (en) * 2010-11-19 2012-05-30 국방과학연구소 System, apparatus and method for extracting a target in images
KR20160131261A (en) * 2015-05-06 2016-11-16 한화테크윈 주식회사 Method of monitoring interested area
KR101866676B1 (en) * 2017-01-03 2018-06-12 중앙대학교 산학협력단 Apparatus and Method for identifying object using multi spectral images
KR20180096101A (en) * 2017-02-20 2018-08-29 엘아이지넥스원 주식회사 Apparatus and Method for Intelligent Infrared Image Fusion
KR20190122606A (en) * 2019-10-11 2019-10-30 엘지전자 주식회사 Apparatus and method for monitoring object in vehicle

Also Published As

Publication number Publication date
KR20210064591A (en) 2021-06-03
KR102333107B1 (en) 2021-11-30

Similar Documents

Publication Publication Date Title
WO2021107171A1 (en) Deep learning processing apparatus and method for multiple sensors for vehicle
US11606516B2 (en) Image processing device, image processing method, and image processing system
WO2020171281A1 (en) Visible light and infrared fusion image-based object detection method and apparatus
CN109703460B (en) Multi-camera complex scene self-adaptive vehicle collision early warning device and early warning method
EP3859708B1 (en) Traffic light image processing method and device, and roadside device
US20180253630A1 (en) Surrounding View Camera Blockage Detection
CN112750170B (en) Fog feature recognition method and device and related equipment
WO2021006491A1 (en) Sound source visualization device and method
US20220198801A1 (en) Method and system for detecting fire and smoke
WO2018097595A1 (en) Method and device for providing driving information by using camera image
WO2022039319A1 (en) Personal information de-identification method, verification method, and system
WO2014061922A1 (en) Apparatus and method for detecting camera tampering using edge image
WO2017195965A1 (en) Apparatus and method for image processing according to vehicle speed
KR20200102907A (en) Method and apparatus for object recognition based on visible light and infrared fusion image
WO2012011715A2 (en) Vehicle collision warning system and method therefor
US11847837B2 (en) Image-based lane detection and ego-lane recognition method and apparatus
WO2021167189A1 (en) Method and device for multi-sensor data-based fusion information generation for 360-degree detection and recognition of surrounding object
WO2021095962A1 (en) Hybrid-based recognition system for abnormal behavior and method therefor
WO2023277219A1 (en) Lightweight deep learning processing device and method for vehicle to which environmental change adaptive feature generator is applied
WO2023085464A1 (en) Lightweight deep learning apparatus comprising multichannel information extractor
CN114348811B (en) Robot, robot boarding method, robot boarding device, and storage medium
WO2022107911A1 (en) Lightweight deep learning processing device and method for vehicle, applying multiple feature extractor
WO2021096030A1 (en) Image object recognition system based on deep learning
WO2019245191A1 (en) Apparatus and method for analyzing image
WO2023096092A1 (en) Method and system for detecting abnormal behavior on basis of composite image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19954681; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19954681; Country of ref document: EP; Kind code of ref document: A1)