KR20220082173A - Multimodal deep learning based LDWS black box equipment using images and PPG data - Google Patents

Multimodal deep learning based LDWS black box equipment using images and PPG data

Info

Publication number
KR20220082173A
KR20220082173A (Application KR1020200171660A)
Authority
KR
South Korea
Prior art keywords
driver
black box
deep learning
ldws
ppg
Prior art date
Application number
KR1020200171660A
Other languages
Korean (ko)
Inventor
홍창의
엄정민
김준희
김도엽
Original Assignee
홍창의
엄정민
김준희
김도엽
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 홍창의, 엄정민, 김준희, 김도엽
Priority to KR1020200171660A priority Critical patent/KR20220082173A/en
Publication of KR20220082173A publication Critical patent/KR20220082173A/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K28/00Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions
    • B60K28/02Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver
    • B60K28/06Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver responsive to incapacity of driver
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q3/00Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors
    • B60Q3/20Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors for lighting specific fittings of passenger or driving compartments; mounted on specific fittings of passenger or driving compartments
    • B60Q3/283Steering wheels; Gear levers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10Path keeping
    • B60W30/12Lane keeping
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B62LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62DMOTOR VEHICLES; TRAILERS
    • B62D1/00Steering controls, i.e. means for initiating a change of direction of the vehicle
    • B62D1/02Steering controls, i.e. means for initiating a change of direction of the vehicle vehicle-mounted
    • B62D1/04Hand wheels
    • B62D1/046Adaptations on rotatable parts of the steering wheel for accommodation of switches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00Registering or indicating the working of vehicles
    • G07C5/08Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841Registering performance data
    • G07C5/085Registering performance data using electronic data carriers
    • G07C5/0866Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0818Inactivity or incapacity of driver
    • B60W2040/0827Inactivity or incapacity of driver due to sleepiness
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0872Driver physiology
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143Alarm means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403Image sensing, e.g. optical camera
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/221Physiology, e.g. weight, heartbeat, health or special needs
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/30Driving style
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60YINDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
    • B60Y2300/00Purposes or special features of road vehicle drive control systems
    • B60Y2300/10Path keeping
    • B60Y2300/12Lane keeping

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Combustion & Propulsion (AREA)
  • Chemical & Material Sciences (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention is a drowsy-driving prevention black box that integrates deep learning over image and PPG data with LDWS technology. More specifically, it concerns a method of combining a PPG device with a black box, and an operating method that collects image data and pulse data and feeds them to a deep learning model. To this end, the invention places a PPG sensor that collects the driver's pulse data on the rear side of the steering wheel, and improves space efficiency by sharing the camera that collects the driver's image data with the black box. In addition, a drowsy-driving confirmation step is added by detecting lane departure through LDWS technology. All detected information is sent to the main control unit, which performs the deep learning computation, to determine drowsy driving, and a warning message is delivered through a speaker attached to the black box.

Description

Multimodal deep learning based LDWS black box equipment using images and PPG data

The present invention relates to a system for preventing drowsy driving. More specifically, it concerns a method of constructing a drowsy-driving prevention system that builds a drowsiness recognition model via deep learning from vehicle-external data, such as lane departure, collected by detecting lanes through the front camera of an LDWS-based black box; images collected by the black box's internal camera; and PPG data collected by a PPG sensor. Based on this model, the driver's drowsiness state is determined accurately, so that vehicle accidents caused by drowsy driving can be prevented in advance.

According to the Korea Expressway Corporation, the leading cause of highway traffic accidents over the past 10 years has been drowsy driving, which frequently results in major accidents and casualties. Professional drivers who mainly drive at night, such as designated drivers and transport workers, are particularly vulnerable to drowsy driving. Accordingly, many drowsy-driving prevention technologies of various forms have been invented.

Using MTCNN (Multi-Task Cascaded Convolutional Networks), the driver's whole face, both eyes, and mouth are learned; at the same time, to determine the driver's state more quickly, only the left eye and the mouth are used. The deep-learning-based drowsiness recognition model classifies the driver into three states: normal, yawning, and drowsy.

Using PPG data and the drowsiness recognition model, a multimodal deep-learning-based driver drowsiness detection system is constructed, consisting largely of a preprocessing unit, a multimodal network, and a classification network.

The preprocessing unit extracts only the necessary features of the data, removes unnecessary parts, and collects only the required data; it is divided into an image collection unit, a main control unit, and a PPG collection unit. The image collection unit receives and collects the driver's face images captured by the camera mounted in the vehicle, and the main control unit uses MTCNN to extract the eyes and mouth from the driver's face image. The PPG collection unit collects PPG data, which measures a person's heart rate through changes in blood flow.

The multimodal network removes the heterogeneity of the different types of data extracted by the preprocessing unit and simultaneously extracts only the data's features; it consists of a convolutional neural network, a layer reconstruction unit, and a JR construction unit. The convolutional neural network extracts only features from the input eye and mouth images. The extracted eye and mouth features are reconstructed into a single layer by the layer reconstruction unit, and the size of this layer can be adjusted arbitrarily. The JR construction unit fuses the layer reconstructed from the eye and mouth features with the PPG data standardized by the main control unit into a single joint representation (JR).

The classification network receives the input data reconstructed by the JR construction unit of the multimodal network into a single representation carrying the features of the eyes, mouth, and PPG, classifies the driver's state as normal, yawning, or drowsy, and outputs the corresponding state information.

Since the quality of the image acquired by the front camera is directly tied to the performance of the drowsy-driving prevention system, the accuracy of this image must be high. In particular, light that is diffusely reflected by the mechanical structure of the front camera unit and enters the camera lens can cause problems such as glare or blur in the acquired image, and these effects can worsen when backlight strikes the front camera unit.

In the process of obtaining lane information, a vehicle's lane recognition system may fail to distinguish lanes from shadows cast by sunlight. For a vehicle driving during daylight, the likelihood of lane recognition failure increases when shadows of surrounding objects fall near or on the lane, or when the lane is obscured by the shadow of a guardrail.

To prevent these problems, a concave groove is formed in the black box, with a light trap on the groove's bottom and side walls to reduce reflected light, thereby reducing variations in luminance. The color image acquired through the camera is preprocessed to remove variations in illuminance. Based on the post-processed image, with luminance and illuminance variations excluded, the distance between the lane and the vehicle is recognized to determine lane departure.

KR 10-2013-0146934 A
KR 10-2017-0096516 A
KR 10-2018-0159741 A

According to a survey on drowsy driving conducted by the Korea Transportation Safety Authority among 400 drivers, 4 out of 10 drivers had experienced drowsy driving, and 19% of them had experienced a near miss that almost caused an accident. Over the past 10 years, drowsiness has overtaken speeding as the number one cause of highway traffic accidents, making it a matter directly linked to human life. The purpose of the present invention is therefore to reduce traffic accidents and casualties caused by drowsy driving.

The present invention is characterized as a black box camera that combines a multimodal deep-learning-based driver drowsiness detection system, built from a deep-learning drowsiness recognition model that determines the driver's state from face images and a PPG sensor that measures heart rate, with an LDWS camera that has a concave groove with a light trap on its bottom and side walls to reduce reflected light, so that variations in luminance are reduced and more accurate images can be obtained.

By using the multimodal deep-learning-based driver drowsiness detection system, which uses images and PPG data, together with LDWS, the present invention has the effect of determining and preventing drowsy driving through the combined analysis of signals that are inconclusive on their own (visual information, biometric data, and vehicle behavior).

Fig. 1 is a front view of the steering wheel with the PPG sensor attached.
Fig. 2 is a structural view of the rear side of the steering wheel where the PPG sensor is attached.
Fig. 3 shows the black box with the internal and external cameras attached.
Fig. 4 is a front view of Fig. 3.

Among the core technologies of the present invention, the multimodal deep-learning-based driver drowsiness detection system consists largely of a preprocessing unit, a multimodal network, and a classification network.

The preprocessing unit extracts only the necessary features of the data, removes unnecessary parts, and collects only the required data; it is divided into an image collection unit, a main control unit, and a PPG collection unit. The image collection unit receives and collects the driver's face images captured by the camera mounted in the vehicle. The main control unit extracts the eyes and mouth from the driver's face image, using MTCNN (the positions of the eyes and mouth are extracted because they carry the changes that appear when drowsiness occurs). The PPG collection unit collects PPG, a measurement of a person's heart rate obtained through changes in blood flow. PPG decreases at the onset of drowsiness, due to parasympathetic activity, and can therefore be used together with the image features.
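As an illustrative sketch (not part of the patent itself), the PPG collection step — deriving heart rate from periodic blood-flow changes — could be implemented with a simple peak counter. The sampling rate, the synthetic waveform, and the peak-detection rule below are assumptions made only for this example:

```python
import numpy as np

def estimate_heart_rate(ppg: np.ndarray, fs: float) -> float:
    """Estimate heart rate (BPM) from a PPG waveform by counting
    local maxima that lie above the signal mean (a naive peak detector)."""
    centered = ppg - ppg.mean()
    # A sample is a peak if it exceeds both neighbours and the mean.
    peaks = np.where(
        (centered[1:-1] > centered[:-2])
        & (centered[1:-1] > centered[2:])
        & (centered[1:-1] > 0)
    )[0]
    duration_s = len(ppg) / fs
    return 60.0 * len(peaks) / duration_s

# Synthetic 10 s PPG at 50 Hz with a 1.2 Hz (72 BPM) pulse component.
fs = 50.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)
print(round(estimate_heart_rate(ppg, fs)))  # → 72
```

A real PPG collection unit would filter motion artifacts and use a more robust detector, but the principle — counting pulse peaks per unit time — is the same.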

The multimodal network fuses the eye and mouth image data and the PPG into one. Because the data extracted by the preprocessing unit come in different forms, and the changes that appear at the onset of drowsiness also differ between them, the multimodal network removes the heterogeneity of each extracted data stream while extracting only its features; it consists of a convolutional neural network, a layer reconstruction unit, and a JR construction unit. The convolutional neural network extracts only the eye and mouth features from the input eye and mouth images; the extracted features are reconstructed into a single layer by the layer reconstruction unit, whose size can be adjusted arbitrarily. The JR construction unit fuses the layer reconstructed from the eye and mouth features with the PPG standardized by the main control unit into a single joint representation (JR).
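The layer reconstruction and JR construction steps described above can be sketched as follows. This is a minimal illustration under assumed feature-map shapes, not the patent's trained network; the flattening and standardization choices are assumptions for the example:

```python
import numpy as np

def build_joint_representation(eye_feat, mouth_feat, ppg_window):
    """Fuse CNN feature maps (eye, mouth) and a PPG window into a single
    joint representation (JR) vector, mimicking the JR construction unit."""
    # Layer reconstruction: flatten each feature map into one layer.
    # (The layer size is arbitrary, so plain flattening is used here.)
    eye_layer = np.asarray(eye_feat, dtype=float).ravel()
    mouth_layer = np.asarray(mouth_feat, dtype=float).ravel()
    # Standardize the PPG window (zero mean, unit variance) so its
    # scale is comparable with the image features.
    ppg = np.asarray(ppg_window, dtype=float)
    ppg_std = (ppg - ppg.mean()) / (ppg.std() + 1e-8)
    # JR: concatenate the heterogeneous modalities into one vector.
    return np.concatenate([eye_layer, mouth_layer, ppg_std])

eye = np.random.rand(4, 4, 8)    # toy eye feature map (H, W, C)
mouth = np.random.rand(4, 4, 8)  # toy mouth feature map
ppg = np.random.rand(32)         # toy PPG window
jr = build_joint_representation(eye, mouth, ppg)
print(jr.shape)  # (288,)
```

In practice the feature maps would come from the convolutional neural network and the standardized PPG from the main control unit; only the fusion step is shown here.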

The classification network determines the driver's drowsiness state from the fused data produced by the multimodal network: upon receiving the input data reconstructed by the JR construction unit into a single representation carrying the features of the eyes, mouth, and PPG, it classifies the driver's state and outputs the corresponding state information.
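A minimal stand-in for the classification network can be sketched as a single dense layer followed by softmax over the three states. The weights below are random toy values, not a trained model, and the JR dimension is assumed to match the earlier example:

```python
import numpy as np

STATES = ["normal", "yawning", "drowsy"]

def classify_state(jr: np.ndarray, W: np.ndarray, b: np.ndarray) -> str:
    """Map a joint representation to one of the three driver states
    using one dense layer and softmax (a stand-in for the trained
    classification network)."""
    logits = jr @ W + b
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return STATES[int(np.argmax(probs))]

rng = np.random.default_rng(0)
jr = rng.standard_normal(288)      # fused JR vector (see previous step)
W = rng.standard_normal((288, 3))  # untrained toy weights
b = np.zeros(3)
print(classify_state(jr, W, b))    # one of "normal", "yawning", "drowsy"
```

The actual network would be trained end to end on labeled driver data; this sketch only shows the final classification mapping.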

The LDWS camera is a black box camera with a concave groove whose bottom and side walls carry a light trap to reduce reflected light, so that variations in luminance are reduced and more accurate images can be obtained. A color image acquired through this camera is preprocessed to exclude variations in illuminance, after which the distance between the lane and the vehicle is recognized to determine whether the vehicle has departed from its lane.
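The lane-departure decision — recognizing the distance between the lane and the vehicle from the preprocessed image — can be sketched as follows, assuming the lane lines have already been detected as pixel x-coordinates and using a hypothetical offset-ratio threshold (not specified in the patent):

```python
def lane_departure(left_x: float, right_x: float, frame_width: float,
                   ratio_threshold: float = 0.15) -> bool:
    """Decide lane departure from detected lane-line positions (pixel
    x-coordinates of the left and right lane lines at the bottom of the
    preprocessed frame). The camera is assumed centered on the vehicle;
    departure is flagged when the lane center's offset from the frame
    center exceeds a fraction of the lane width."""
    lane_center = (left_x + right_x) / 2.0
    lane_width = right_x - left_x
    offset = (frame_width / 2.0) - lane_center
    return abs(offset) > ratio_threshold * lane_width

# Centered in a 1280-px frame: lane lines at 340 and 940 → no departure.
print(lane_departure(340, 940, 1280))   # False
# Drifted: the lane appears shifted in the frame → departure warning.
print(lane_departure(540, 1140, 1280))  # True
```

The threshold and the centered-camera assumption are illustrative; a deployed LDWS would calibrate these against the vehicle's geometry.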

Therefore, the black box camera, using LDWS together with the deep-learning-based sensor system, can examine both the lane state and the driver's state to determine drowsiness.

The present invention can be installed in vehicles that mainly make long-distance trips on highways, where drowsy driving occurs frequently (e.g., express buses and trucks).

In addition to preventing accidents through warning messages, the information stored after an accident enables quick identification of its cause.

In express buses, where Wi-Fi for passengers is provided by default, the device can use it to connect to and be monitored by a central server.

Insurance companies can also offer this service, preventing accidents and handling claims with reliable information.

(1) Steering wheel
(2) Groove: path for repositioning (presetting) the PPG sensor
(3) Sensor: the sensing surface is exposed to the outside
(4) Fixing device: fixes the sensor into the groove at the junction
(5) Wire: a spring-shaped wire running through the inside of the steering wheel to connect the sensor and the control unit
(6) Black box: black box body
(7) Speaker: device that issues warning messages
(8) Internal camera
(9) External camera

Claims (3)

1. A drowsiness prevention device using a black box and sensors, installed on the left side of the rearview mirror, comprising: a lane departure prevention LDWS black box (6) fitted with an internal camera (8) that collects the in-vehicle image and the driver's face image, and an external camera (9) that collects the external image and lane position information; a sensor (3) installed on the vehicle's steering wheel (1) that contacts the driver's body and measures the driver's pulse, and an electronic device (3) that transmits the measured information to the black box; and a driver drowsiness warning device based on multimodal deep learning and LDWS that determines the driver's drowsiness state using the black box and the electronic device and delivers a warning message to the driver through the speaker (7).

2. The device of claim 1, wherein the sensor is installed on the rear side of the steering wheel to measure the driver's pulse, emits LED light to detect the pulse signal and collect PPG data, and can remain in contact with the driver for extended periods.

3. The device of claim 1, wherein the electronic device can be preset along the groove (2) on the rear side of the steering wheel to match the hand positions that vary during long-distance driving according to the driver's habits.
KR1020200171660A 2020-12-09 2020-12-09 Multimodal deep learning based LDWS black box equipment using images and PPG data KR20220082173A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020200171660A KR20220082173A (en) 2020-12-09 2020-12-09 Multimodal deep learning based LDWS black box equipment using images and PPG data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020200171660A KR20220082173A (en) 2020-12-09 2020-12-09 Multimodal deep learning based LDWS black box equipment using images and PPG data

Publications (1)

Publication Number Publication Date
KR20220082173A true KR20220082173A (en) 2022-06-17

Family

ID=82269037

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020200171660A KR20220082173A (en) 2020-12-09 2020-12-09 Multimodal deep learning based LDWS black box equipment using images and PPG data

Country Status (1)

Country Link
KR (1) KR20220082173A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170096516A (en) 2016-02-16 2017-08-24 주식회사 엘지화학 Preparation apparatus of aerogel sheet


Similar Documents

Publication Publication Date Title
CN106709420B (en) Method for monitoring driving behavior of commercial vehicle driver
CN108583432B (en) Intelligent A-pillar dead zone early warning device and method based on image recognition technology
JP5171629B2 (en) Driving information providing device
JP6281492B2 (en) Passenger counting device, method and program
JP6888950B2 (en) Image processing device, external world recognition device
WO2018058958A1 (en) Road vehicle traffic alarm system and method therefor
EP2463806A1 (en) Vehicle detection device and vehicle detection method
CN112289003B (en) Method for monitoring end-of-driving behavior of fatigue driving and active safety driving monitoring system
CN109844450B (en) Sorting device, sorting method, and recording medium
JP4807354B2 (en) Vehicle detection device, vehicle detection system, and vehicle detection method
CN103714659A (en) Fatigue driving identification system based on double-spectrum fusion
US10964137B2 (en) Risk information collection device mounted on a vehicle
KR102332517B1 (en) Image surveilance control apparatus
CN110135235A (en) A kind of dazzle processing method, device and vehicle
DE102009014437B4 (en) Object Recognition System and Method
CN113875217A (en) Image recognition apparatus and image recognition method
JP5077088B2 (en) Image processing apparatus and image processing method
CN114360210A (en) Vehicle fatigue driving early warning system
JPH03254291A (en) Monitor for automobile driver
KR102459906B1 (en) System for detecting passengers in vehicle using dual band infrared camera, and method for the same
KR102211903B1 (en) Method And Apparatus for Photographing for Detecting Vehicle Occupancy
KR20220082173A (en) Multimodal deep learning based LDWS black box equipment using images and PPG data
CN115299948A (en) Driver fatigue detection method and detection system
KR101958238B1 (en) Safety management system for transportation vehicle
JP7070827B2 (en) Driving evaluation device, in-vehicle device, driving evaluation system equipped with these, driving evaluation method, and driving evaluation program