WO2020171281A1 - Method and apparatus for object detection based on fused visible light and infrared images - Google Patents
Method and apparatus for object detection based on fused visible light and infrared images
- Publication number
- WO2020171281A1 (PCT/KR2019/004122)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- infrared
- object detection
- visible light
- image
- data
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/30—Transforming light or analogous information into electric information
- H04N5/33—Transforming infrared radiation
Definitions
- the present invention relates to a method and apparatus for detecting an object based on a fused visible and infrared image and, more particularly, to a method and apparatus for recognizing an object around a vehicle using images acquired through a visible light camera and an infrared camera.
- In general, a vehicle is equipped with various convenience features to provide stable and comfortable driving conditions. In addition to the demand for convenience features, the demand for vehicle safety devices is also increasing.
- Vehicle safety devices include active safety devices such as anti-lock braking system (ABS), electronically controlled suspension (ECS), and autonomous emergency braking (AEB) devices, as well as vehicle black boxes for identifying the cause of an accident after the fact.
- ABS Anti-lock Braking System
- ECS Electronic Controlled Suspension
- AEB Autonomous Emergency Braking
- the autonomous emergency braking system measures the distance to a vehicle (or object) running in front through a radar mounted on the vehicle, and recognizes a risk of collision when the distance between the host vehicle and the vehicle in front is closer than a predetermined distance. When the danger of a collision is recognized, the vehicle speed is reduced by automatically braking.
- the autonomous emergency braking system notifies the driver of the risk of collision with a warning sound and operates the braking device in a standby mode so as to quickly respond to the driver's pedal operation.
- it is necessary to quickly and accurately determine whether an object in front of the vehicle is a pedestrian, a vehicle, or other object.
- Korean Patent No. 10-1671993 (2016.10.27) relates to a vehicle safety system including a photodiode, a collision-avoidance distance calculation module, a speed setting module, a vehicle control module, a traffic light detection module, a color display module, and a surrounding monitoring module. The color detection module treats each traffic light color as a range, so that a detected color falling within a given range is determined to be one color; the color display module displays the color region of the traffic light in the image information obtained from the traffic light detection camera module; and the vehicle speed control module stops the user's vehicle and flashes the emergency lights when the color detected by the color detection module is red and the distance between the traffic light and the user's vehicle is within a certain distance.
- the peripheral distance calculation module searches the entire image acquired from the surrounding surveillance camera module block by block, divides it according to differences in regional brightness, and, after learning, applies weighting using the AdaBoost algorithm; weak classifiers are combined into a strong classifier to detect pedestrian faces in the image, and when a pedestrian face is detected and the distance between the pedestrian and the user's vehicle is within a certain range, a warning message is sent.
- Korean Patent Registration No. 10-1611273 (2016.04.05) relates to a system and method for detecting a subject using sequential infrared images. An infrared lamp disposed on a vehicle emits infrared light at a preset period, and an infrared camera acquires image data containing both the infrared component reflected from the subject and the infrared component of other vehicles' headlamps: an on-image, captured while the infrared lamp is emitting, that includes both the reflected component and the headlamp component, and an off-image, captured while the lamp is off, that includes only the headlamp component. A subject recognition unit analyzes the image data and detects the region of the subject.
- conventional methods of detecting surrounding objects have difficulty discriminating objects when the acquired image contains light reflection, backlighting, or nighttime conditions.
- the present invention proposes a method and apparatus for object detection based on fused visible and infrared images that can further improve object detection performance in various environments.
- according to one aspect, a visible light and infrared fused image-based object detection device is provided, comprising: a processor; and a memory connected to the processor, wherein the memory stores program instructions executable by the processor to determine whether an object detection deterioration region caused by the surrounding environment exists in a visible light image of the vehicle's surroundings input through a visible light camera and, when an object detection deterioration region exists, to classify an object using RGB data of the visible light image and infrared image data acquired through an infrared camera as input values of a pre-trained object detection network.
- the infrared image data may include at least one of raw data defined as count values, gray-scale data (infrared data) obtained from the raw data, temperature data, and infrared signal (radiation) data calculated from the temperature data.
- the program instructions may change the weights of the RGB data and of the infrared image data according to the object detection deterioration region before inputting them to the object detection network.
- the program instructions may determine whether an object detection deterioration region exists in a visible light image frame by counting the pixels in the frame having a brightness value higher than a preset first threshold or lower than a second threshold.
- the weights may be determined as the optimal parameters $\theta^*$ that minimize the loss $L$.
- the object detection network may include a plurality of blocks comprising convolution layers, attention modules, batch normalization, and leaky ReLU.
- the visible light camera and the infrared camera may be disposed adjacent to each other in the same direction.
- according to another aspect, a visible light and infrared fused image-based object detection method is provided, the method comprising: receiving a visible light image of the vehicle's surroundings through a visible light camera; receiving an infrared image of the vehicle's surroundings through an infrared camera; determining whether an object detection deterioration region caused by the surrounding environment exists in the visible light image; and, when an object detection deterioration region exists, classifying an object by using RGB data of the visible light image and infrared image data acquired through the infrared camera as input values of a pre-trained object detection network.
- according to the present invention, object detection performance can be further improved by determining whether an object recognition deterioration region exists in a visible light image and complementarily using an infrared image for the corresponding region.
- FIG. 1 is a diagram showing a configuration of an image analysis system for object detection according to an exemplary embodiment of the present invention.
- FIG. 2 is a diagram illustrating visible light image data and infrared image data used for object detection in the image analysis apparatus according to the present embodiment.
- FIG. 3 shows input images from a visible light camera and an infrared camera in light reflection, backlight, and night situations, respectively.
- FIG. 4 is a diagram showing the architecture of an object detection network according to the present embodiment.
- FIG. 5 is a diagram showing the architecture of the attention module in this embodiment.
- FIG. 7 is a flowchart illustrating a process of detecting an object based on a fused visible and infrared image according to the present embodiment.
- the present invention recognizes objects around a vehicle using images acquired through a visible light camera and an infrared camera.
- a visible light camera can obtain a clear image, but object detection performance deteriorates in snow, rain, bad weather, and night situations.
- an infrared camera detects the infrared rays emitted according to the surface temperature of an object; it has good object detection performance at night, but it is sensitive to the surrounding environment, such as season and time of day, and has a low object recognition rate.
- the present invention proposes a method capable of improving object detection performance irrespective of surrounding environments by fusing images obtained from each of a visible light camera and an infrared camera.
- FIG. 1 is a diagram showing a configuration of an image analysis system for object detection according to an exemplary embodiment of the present invention.
- the image analysis system 100 may include a visible light camera 110, an infrared camera 120, an image analysis device 130, and an alarm unit 150.
- the image analysis device 130 recognizes objects around the vehicle by analyzing visible light image data and infrared image data acquired through the visible light camera 110 and the infrared camera 120.
- the visible light camera 110 and the infrared camera 120 are installed adjacent to each other facing the same direction, and more preferably installed in one module with viewing angles in the same direction, so that a visible light image and an infrared (thermal) image of the same object can be acquired.
- the image analysis apparatus 130 may include a processor and a memory.
- the processor may include a central processing unit (CPU) capable of executing a computer program or a virtual machine.
- CPU central processing unit
- the memory may include a nonvolatile storage device such as a fixed hard drive or a removable storage device.
- the removable storage device may include a compact flash unit, a USB memory stick, or the like.
- the memory may also include volatile memories such as various random access memories.
- Program instructions executable by the processor are stored in such a memory, and objects around the vehicle are detected using a fusion image of visible light and infrared light, as described below.
- FIG. 2 is a diagram illustrating visible light image data and infrared image data used for object detection in the image analysis apparatus according to the present embodiment.
- visible light image data is the RGB data of each pixel included in a single frame.
- infrared image data may be raw data defined as count values, or gray-scale data (infrared data), temperature data, and infrared signal (radiation) data obtained from the raw data.
- the RGB data is an R/G/B value displayed by a light source and is a value that is not affected by the surrounding environment.
- gray scale data is a value obtained by converting raw data and expressing it in gray scale.
- the infrared signal and temperature have a correlation, and infrared signal data of each pixel can be calculated from the temperature data through the correlation equation.
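The patent does not give the correlation equation itself; as a rough illustrative sketch (an assumption, not the patent's formula), the per-pixel radiant emittance could be derived from temperature using the Stefan-Boltzmann law:

```python
# Illustrative only: the patent states that infrared signal data is computed
# from temperature data via a correlation equation, without specifying it.
# The Stefan-Boltzmann law is one plausible stand-in; the emissivity value
# below is an assumption.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiation_from_temperature(temp_celsius, emissivity=0.95):
    """Total radiant emittance of a surface at the given temperature."""
    t_kelvin = temp_celsius + 273.15
    return emissivity * SIGMA * t_kelvin ** 4

print(radiation_from_temperature(36.5))  # roughly 495 W/m^2 at skin temperature
```
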
- infrared image data used together with RGB data for object detection may be at least one of raw data, gray scale data, temperature data, and infrared signal data.
- the image analysis apparatus 130 recognizes an object using RGB data, raw data, and gray scale data, but is not limited thereto.
- the image analysis device 130 uses a pre-trained algorithm to determine whether the visible light image contains a region in which objects are difficult to detect due to light reflection, backlighting, or a nighttime environment (a region whose pixel values are above or below preset thresholds; hereinafter, an 'object detection deterioration region').
- FIG. 3 shows input images from the visible light camera 110 and the infrared camera 120 in light reflection, backlight, and night situations, respectively.
- a region in which object detection is difficult exists in a predetermined region 300 of visible light image data in light reflection, backlight, and night situations.
- the image analysis apparatus 130 determines the object detection deterioration region by examining each pixel value (brightness value) in the visible light image frame and counting the pixels whose brightness is higher than a predetermined first threshold or lower than a second threshold.
- the image analysis apparatus 130 may determine that an object detection deterioration region exists in a visible light image frame when the number of such pixels exceeds a predetermined count threshold.
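As a minimal sketch of this check (the threshold and ratio values are illustrative assumptions, not taken from the patent), the frame-level decision could look like:

```python
import numpy as np

def has_deterioration_region(gray_frame, t_high=230, t_low=25, ratio=0.05):
    """Flag a frame when too many pixels are saturated-bright (glare,
    backlight) or very dark (night), per the patent's counting criterion.
    gray_frame: 2-D uint8 array of per-pixel brightness values."""
    n_bad = np.count_nonzero((gray_frame > t_high) | (gray_frame < t_low))
    return n_bad > ratio * gray_frame.size

frame = np.full((120, 160), 128, dtype=np.uint8)  # uniformly mid-bright frame
print(has_deterioration_region(frame))            # False
frame[:40, :] = 250                               # simulate a glare band
print(has_deterioration_region(frame))            # True
```
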
- the image analysis apparatus 130 may perform machine learning on the object detection deterioration region in advance, and determine the object detection deterioration region from the currently input visible light image based on the machine learning.
- the image analysis apparatus 130 classifies the object by referring to infrared image data for the region.
- the image analysis device 130 dynamically allocates weights to the RGB data acquired through the visible light camera 110 and to the raw data and gray-scale data acquired through the infrared camera 120, so that object detection accuracy is improved even when the visible light image contains an object detection deterioration region.
- the image analysis apparatus 130 performs object detection using a multi-domain attentive detection network (MDADN) as a pre-learned object detection network.
- MDADN multi-domain attentive detection network
- whereas only RGB data would ordinarily be input to an object detection network, the present invention performs object detection through fusion of visible and infrared images, so a total of five channels (the R, G, and B data, the raw data, and the gray-scale data) are input to the object detection network, and a weight is dynamically assigned to each input value according to whether a detection deterioration region exists.
- the present invention determines the optimal parameters $\theta^*$ that minimize the loss $L$: $\theta^* = \arg\min_{\theta} L(f(x;\theta), y)$.
- the optimal parameters mean the optimal values of the weights of the object recognition network.
- $x$ denotes the inputs of the object detection network (the RGB data, raw data, and gray-scale data), $f(x;\theta)$ is the result obtained from those inputs, and $L$ computes the distance between $f(x;\theta)$ and the ground-truth result $y$.
- visible light image data and infrared image data, which are the data sources according to the present embodiment, are complementary to each other, and an object may be recognized by at least one data source.
- FIG. 4 is a diagram showing the architecture of an object detection network according to the present embodiment.
- the object detection network has seven blocks, each consisting of convolution layers, attention modules, batch normalization, and leaky ReLU.
- the first five blocks have a max-pooling layer with stride 2, and a skip connection is added from the third to the sixth blocks.
- the attention module is inserted into convolution layers 1-1, 2-1, 3-3, 4-3, 5-5, and 6-7, where in the notation n-k, n denotes the block and k denotes the layer within the block.
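The block layout described above can be transcribed as a small configuration table. Only the properties stated explicitly in the text are encoded; the per-block layer counts are not fully specified, so this is a sketch rather than the patent's exact network:

```python
# Encode only what the description states: seven blocks, a stride-2 max-pool
# in the first five, skip connections spanning blocks 3-6, and attention
# modules at convolution layers n-k = 1-1, 2-1, 3-3, 4-3, 5-5, 6-7.
blocks = [
    {"block": n,
     "layers": ["conv", "attention", "batch_norm", "leaky_relu"],
     "max_pool_stride2": n <= 5,
     "skip_connection": 3 <= n <= 6}
    for n in range(1, 8)
]
attention_positions = ["1-1", "2-1", "3-3", "4-3", "5-5", "6-7"]

print(len(blocks))                                           # 7
print([b["block"] for b in blocks if b["skip_connection"]])  # [3, 4, 5, 6]
```
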
- FIG. 5 is a diagram showing the architecture of the attention module in this embodiment.
- Each attention module has four fully connected layers (FC).
- FC fully connected layers
- average-pooling ($\mathrm{pool}_{avg}$) and max-pooling ($\mathrm{pool}_{max}$) are applied along the vertical/horizontal (spatial) dimensions of the feature map $F \in \mathbb{R}^{W \times H \times C}$, where $W$ is the width, $H$ is the height, and $C$ is the number of channels.
- the two outputs from the pooling layers are combined using the sigmoid function, and element-wise products are obtained through broadcasting.
- in this way, the attention module creates a per-channel attention map and a spatial attention map for environment-dependent features.
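The fragmentary description above resembles CBAM-style channel and spatial attention, and the following NumPy sketch follows that reading. The layer sizes, the shared-MLP structure, and the spatial pooling weights are assumptions, not details from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Channel attention: average- and max-pool over H x W, pass both
    descriptors through a shared two-layer MLP, combine with a sigmoid,
    then rescale each channel by broadcasting (element-wise product)."""
    avg = feat.mean(axis=(1, 2))   # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))     # (C,) max-pooled descriptor
    att = sigmoid(w2 @ np.maximum(0.0, w1 @ avg) +
                  w2 @ np.maximum(0.0, w1 @ mx))   # (C,) channel map
    return feat * att[:, None, None]

def spatial_attention(feat, w_avg=0.5, w_max=0.5):
    """Spatial attention: pool across channels to an (H, W) map and
    rescale every channel by the resulting sigmoid map."""
    att = sigmoid(w_avg * feat.mean(axis=0) + w_max * feat.max(axis=0))
    return feat * att[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))      # F with C=8, H=4, W=4
w1 = rng.standard_normal((2, 8)) * 0.1     # assumed reduction MLP (C -> C/4)
w2 = rng.standard_normal((8, 2)) * 0.1
out = spatial_attention(channel_attention(feat, w1, w2))
print(out.shape)  # (8, 4, 4): attention preserves the feature shape
```
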
- the present invention derives a feature map as described above by using information on visible light image data and infrared image data as input values to an object detection network, and classifies objects from the feature map.
- the image analysis device 130 performs machine learning in advance through the object learning population and classifies objects based on the machine learning result.
- the object learning population may include a number of objects required for machine learning, for example, people, cars, signs, traffic lights, and the like.
- the alarm unit 150 generates an alarm when an object detected by the image analysis device 130 based on the fusion of visible light and infrared light exists around the vehicle and is located within a preset distance in the traveling direction.
- it can be confirmed that the object detection result of the MDADN, shown in FIG. 6d, improves the recognition rate at least twofold compared with the results of existing artificial intelligence algorithms (SSD and RCNN) shown in FIGS. 6b and 6c.
- FIG. 7 is a flowchart illustrating a process of detecting an object based on a fused visible and infrared image according to the present embodiment.
- FIG. 7 shows a process performed by the image analysis device.
- the image analysis device 130 on which machine learning has been completed receives visible light image data and infrared image data captured through the visible light camera 110 and the infrared camera 120 (step 700).
- the visible light image data is RGB data of each pixel
- the infrared image data may include at least one of raw data, gray scale data, temperature, and infrared signals of each pixel.
- the image analysis apparatus 130 determines whether an object detection deterioration region exists in the input visible light image frame (step 702).
- Operation 702 may be a process of determining, using the pre-trained object detection network, whether the visible light image contains more than a predetermined number of pixels whose brightness is higher or lower than the predetermined thresholds.
- Operation 702 may include a process of identifying an object detection deterioration region as well as whether an object detection deterioration region exists in the visible light image frame.
- the image analysis apparatus 130 classifies the object by allocating weights of visible light image data and infrared image data, which are input values of the object detection network (step 704).
- a high weight may be given to the RGB data for areas other than the object detection deterioration region, and a high weight may be given to at least one of the raw data, gray-scale data, temperature data, and infrared signal data for the object detection deterioration region.
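A minimal sketch of this per-region weighting follows. The weight values 0.8/0.2 and the 4-channel stacking are illustrative assumptions; the patent only states that higher weights go to infrared data inside the deterioration region and to RGB data outside it:

```python
import numpy as np

def fuse_inputs(rgb, ir, deterioration_mask, w_hi=0.8, w_lo=0.2):
    """Weight RGB and infrared channels per region before feeding the
    detection network: infrared dominates inside the deterioration mask,
    RGB dominates elsewhere. Weight values are illustrative."""
    m = deterioration_mask.astype(float)                    # (H, W) in {0, 1}
    rgb_w = rgb * (w_lo * m + w_hi * (1.0 - m))[..., None]  # (H, W, 3)
    ir_w = ir * (w_hi * m + w_lo * (1.0 - m))               # (H, W)
    return np.concatenate([rgb_w, ir_w[..., None]], axis=-1)  # (H, W, 4)

h, w = 4, 6
rgb = np.ones((h, w, 3))
ir = np.ones((h, w))
mask = np.zeros((h, w), dtype=bool)
mask[:, :3] = True                       # left half flagged as deteriorated
fused = fuse_inputs(rgb, ir, mask)
print(fused[0, 0, 3], fused[0, 5, 3])    # IR channel: 0.8 in mask, 0.2 outside
```
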
Claims (10)
- An apparatus for detecting objects around a vehicle based on a fused visible light and infrared image, comprising: a processor; and a memory connected to the processor, wherein the memory stores program instructions executable by the processor to determine whether an object detection deterioration region caused by the surrounding environment exists in a visible light image of the vehicle's surroundings input through a visible light camera and, if an object detection deterioration region exists, to classify an object using RGB data of the visible light image and infrared image data acquired through an infrared camera as input values of a pre-trained object detection network.
- The apparatus of claim 1, wherein the infrared image data includes at least one of raw data defined as count values, gray-scale data (infrared data) obtained from the raw data, temperature data, and infrared signal (radiation) data calculated from the temperature data.
- The apparatus of claim 2, wherein the program instructions change the weights of the RGB data and the infrared image data according to the object detection deterioration region and input them to the object detection network.
- The apparatus of claim 1, wherein the program instructions determine whether an object detection deterioration region exists in the visible light image frame by counting pixels in the frame having a brightness value higher than a preset first threshold or lower than a second threshold.
- The apparatus of claim 1, wherein the object detection network is composed of a plurality of blocks including convolution layers, an attention module, batch normalization, and leaky ReLU.
- The apparatus of claim 1, wherein the visible light camera and the infrared camera are disposed adjacent to each other facing the same direction.
- A method of detecting objects around a vehicle based on a fused visible light and infrared image, comprising: receiving a visible light image of the vehicle's surroundings through a visible light camera; receiving an infrared image of the vehicle's surroundings through an infrared camera; determining whether an object detection deterioration region caused by the surrounding environment exists in the visible light image; and, if an object detection deterioration region exists, classifying an object using RGB data of the visible light image and infrared image data acquired through an infrared camera as input values of a pre-trained object detection network.
- The method of claim 8, wherein the infrared image data includes at least one of raw data defined as count values, gray-scale data (infrared data) obtained from the raw data, temperature data, and infrared signal (radiation) data calculated from the temperature data.
- The method of claim 9, wherein the object classification step changes the weights of the RGB data and the infrared image data according to the object detection deterioration region and inputs them to the object detection network.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020190020931A KR102061445B1 (ko) | 2019-02-22 | 2019-02-22 | Method and apparatus for object detection based on fused visible light and infrared images |
KR10-2019-0020931 | 2019-02-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020171281A1 true WO2020171281A1 (ko) | 2020-08-27 |
Family
ID=69051466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/004122 WO2020171281A1 (ko) | 2019-02-22 | 2019-04-08 | Method and apparatus for object detection based on fused visible light and infrared images |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102061445B1 (ko) |
WO (1) | WO2020171281A1 (ko) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112651276A (zh) | 2020-09-04 | 2021-04-13 | 江苏濠汉信息技术有限公司 | Power transmission channel early-warning system based on dual-light fusion and early-warning method thereof |
CN112907571A (zh) | 2021-03-24 | 2021-06-04 | 南京鼓楼医院 | Target determination method based on multispectral image fusion recognition |
CN113029951A (zh) | 2021-03-16 | 2021-06-25 | 太原理工大学 | Multispectral audio-visual frequency-modulation fusion detection method and device for conveyor belt damage |
CN113034915A (zh) | 2021-03-29 | 2021-06-25 | 北京卓视智通科技有限责任公司 | Dual-spectrum traffic event detection method and device |
CN113077533A (zh) | 2021-03-19 | 2021-07-06 | 浙江大华技术股份有限公司 | Image fusion method and device, and computer storage medium |
CN114298987A (zh) | 2021-12-17 | 2022-04-08 | 杭州海康威视数字技术股份有限公司 | Reflective strip detection method and device |
US20220198200A1 (en) * | 2020-12-22 | 2022-06-23 | Continental Automotive Systems, Inc. | Road lane condition detection with lane assist for a vehicle using infrared detecting device |
CN115050016A (zh) | 2022-08-15 | 2022-09-13 | 深圳市爱深盈通信息技术有限公司 | License plate detection method and device, terminal equipment, and readable storage medium |
CN116778227A (zh) | 2023-05-12 | 2023-09-19 | 昆明理工大学 | Target detection method, system, and device based on infrared and visible light images |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102449724B1 (ko) * | 2020-03-18 | 2022-09-30 | 주식회사 아이엔지시스템 | Hidden camera detection system and method, and computing device for performing the same |
CN113255797B (zh) * | 2021-06-02 | 2024-04-05 | 通号智慧城市研究设计院有限公司 | Dangerous goods detection method and system based on a deep learning model |
CN113792592B (zh) * | 2021-08-09 | 2024-05-07 | 深圳光启空间技术有限公司 | Image acquisition and processing method and image acquisition and processing apparatus |
KR20230003953A (ko) * | 2021-06-30 | 2023-01-06 | 한국전자기술연구원 | Lightweight deep learning processing device and method for vehicles applying an environment-change-adaptive feature generator |
CN113869157A (zh) * | 2021-09-16 | 2021-12-31 | 中国科学院合肥物质科学研究院 | Cloud classification method based on visible light and infrared cloud images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160058531A (ko) * | 2014-11-17 | 2016-05-25 | 포항공과대학교 산학협력단 | Method for building a syntactic analysis model using deep learning and apparatus for performing the same |
KR20160066926A (ko) * | 2014-12-03 | 2016-06-13 | 삼성전자주식회사 | Method and apparatus for data classification, and method and apparatus for region-of-interest segmentation |
KR20180008247A (ko) * | 2016-07-14 | 2018-01-24 | 김경호 | Deep learning artificial neural network based task providing platform |
KR20180093418A (ko) * | 2017-02-13 | 2018-08-22 | 영남대학교 산학협력단 | Pedestrian detection apparatus and method |
- 2019-02-22: priority application KR 10-2019-0020931 filed in Korea; granted as KR102061445B1 (active IP right)
- 2019-04-08: PCT application PCT/KR2019/004122 filed (published as WO2020171281A1)
Non-Patent Citations (1)
Title |
---|
KIM, SEONG-HO: "Technology of Nighttime Pedestrian Detection for Intelligent Vehicles", THE MAGAZINE OF THE IEIE, vol. 44, no. 8, 30 August 2017 (2017-08-30), pages 23-30 *
Also Published As
Publication number | Publication date |
---|---|
KR102061445B1 (ko) | 2019-12-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020171281A1 (ko) | Method and apparatus for object detection based on fused visible light and infrared images | |
CN110097109B (zh) | Road environment obstacle detection system and method based on deep learning | |
KR20200102907A (ko) | Method and apparatus for object detection based on fused visible light and infrared images | |
CN109703460B (zh) | Multi-camera complex-scene adaptive vehicle collision early-warning device and early-warning method | |
CN205992300U (zh) | Electronic device for providing guidance information using crosswalk recognition results | |
WO2020122301A1 (ko) | Deep learning based traffic violation enforcement system and method | |
WO2015016461A2 (ko) | Vehicle image recognition system for traffic sign recognition | |
WO2015088190A1 (en) | Vehicle control system for providing warning message and method thereof | |
WO2015056890A1 (ko) | Night-time front vehicle detection and position measurement system and method using a single multi-exposure camera | |
WO2021107171A1 (ko) | Deep learning processing apparatus and method for multiple sensors of a vehicle | |
WO2013048159A1 (ko) | Method, apparatus, and computer-readable recording medium for detecting the locations of facial feature points using the AdaBoost learning algorithm | |
WO2018101603A1 (ko) | Road object recognition method and apparatus using a stereo camera | |
JP2012138077A (ja) | Vehicle identification device | |
JP6420650B2 (ja) | Vehicle exterior environment recognition device | |
WO2023005091A1 (en) | Systems and methods for object detection | |
CA2613922A1 (en) | Monolithic image perception device and method | |
WO2018097595A1 (ko) | Method and apparatus for providing driving information using camera images | |
KR20200141834A (ko) | Image-based traffic signal control apparatus and method | |
WO2012011715A2 (ko) | Vehicle collision warning system and method | |
WO2019172500A1 (ko) | Image-analysis visibility meter using artificial intelligence | |
KR102656881B1 (ko) | AIoT-based integrated traffic safety management system | |
WO2013100624A1 (ko) | Image-based situation recognition method using templates and apparatus therefor | |
Barshooi et al. | Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier | |
JP3779229B2 (ja) | Identification method, identification device, and traffic control system | |
JP2001216597A (ja) | Image processing method and image processing apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19916424 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19916424 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.04.2022) |
|