CN112114671A - Human-vehicle interaction method and device based on human eye sight and storage medium - Google Patents
- Publication number
- CN112114671A (application number CN202011001138.5A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- human
- driver
- eye
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F3/013: Eye tracking input arrangements (G06F: electric digital data processing; G06F3/01: input arrangements for interaction between user and computer)
- G06N3/045: Combinations of networks (G06N3/02: neural networks; G06N3/04: architecture)
- G06V40/19: Sensors for eye characteristics, e.g. of the iris
- G06V40/193: Preprocessing; feature extraction (eye characteristics)
- G06V40/197: Matching; classification (eye characteristics)
- G06V40/20: Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Ophthalmology & Optometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Technical Field
The present application relates to the field of vehicle technology, and more particularly to a human-vehicle interaction method, device, and storage medium based on human eye gaze.
Background
During normal driving, safety is best served when the driver's hands stay on the steering wheel and its immediate controls and leave other components alone. In practice, however, the driver is often required to perform actions that demand hand-eye coordination, such as starting and configuring navigation, toggling audio or video playback, and opening or closing the windows.
The inventors of the present application have found in practice that such hand-eye control actions interfere with the driver's effective hold on the steering wheel and its controls, and thereby adversely affect driving safety.
Summary of the Invention
In view of this, the present application provides a human-vehicle interaction method, device, and storage medium based on human eye gaze, which avoid the adverse effect on driving safety caused by control actions that require the driver to release the steering wheel.
To achieve the above purpose, the proposed scheme is as follows:
A human-vehicle interaction method based on human eye gaze, applied to a vehicle, the method comprising the steps of:
acquiring a face image of the driver of the vehicle;
processing the face image with a neural network algorithm to obtain the driver's current gaze direction and eye action;
performing an operation on an in-vehicle device of the vehicle according to the current gaze direction and the eye action.
Optionally, acquiring the face image of the driver of the vehicle comprises the steps of:
capturing an image of the driver with at least one camera device in the vehicle;
processing the captured image to obtain the face image.
Optionally, the captured image is a visible-light image and/or an infrared image.
Optionally, processing the face image with a neural network algorithm comprises the steps of:
processing the face image with three cascaded Hourglass modules to obtain face information;
processing the face information with a convolutional layer module to obtain a feature map containing a plurality of eye keypoints, the feature map including the coordinates of each eye keypoint;
processing the feature map with a ResNet network using a direct regression algorithm to obtain the current gaze direction;
processing the feature map with a LeNet two-class network to obtain the eye action.
Optionally, performing an operation on an in-vehicle device according to the current gaze direction and the eye action comprises the steps of:
selecting a target in-vehicle device from a plurality of in-vehicle devices in the vehicle according to the current gaze direction;
controlling the target in-vehicle device to perform an operation matching the eye action.
Optionally, controlling the target in-vehicle device to perform an operation matching the eye action comprises the steps of:
detecting the focus position at which the current gaze direction falls on the target in-vehicle device;
detecting the eye action;
when the eye action meets a preset criterion, controlling the target in-vehicle device to perform the operation matching the focus position.
Optionally, the human-vehicle interaction method further comprises the step of:
processing the current gaze direction with a two-class support vector machine to determine whether the driver is distracted, and issuing a warning to the driver when distraction is detected.
A human-vehicle interaction device based on human eye gaze, applied to a vehicle, the device comprising:
a face acquisition module, configured to acquire a face image of the driver of the vehicle;
an image processing module, configured to process the face image with a neural network algorithm to obtain the driver's current gaze direction and eye action;
an operation execution module, configured to perform an operation on an in-vehicle device of the vehicle according to the current gaze direction and the eye action.
Optionally, the face acquisition module comprises:
a camera device, configured to capture an image of the driver;
a processor, configured to process the captured image to obtain the face image.
Optionally, the captured image is a visible-light image and/or an infrared image.
Optionally, the image processing module comprises:
three cascaded Hourglass modules, configured to process the face image to obtain face information;
a convolutional layer module, configured to process the face information to obtain a feature map containing a plurality of eye keypoints, the feature map including the coordinates of each eye keypoint;
a ResNet network, configured to process the feature map with a direct regression algorithm to obtain the current gaze direction;
a LeNet two-class network, configured to process the feature map to obtain the eye action.
Optionally, the operation execution module comprises:
a target selection unit, configured to select a target in-vehicle device from a plurality of in-vehicle devices in the vehicle according to the current gaze direction;
a device control unit, configured to control the target in-vehicle device to perform an operation matching the eye action.
Optionally, the device control unit comprises:
a focus detection subunit, configured to detect the focus position at which the current gaze direction falls on the target in-vehicle device;
an action judgment subunit, configured to detect the eye action;
a control execution subunit, configured to control the target in-vehicle device to perform the operation matching the focus position when the eye action meets a preset criterion.
Optionally, the human-vehicle interaction device further comprises:
a distraction judgment module, configured to process the current gaze direction with a two-class support vector machine to determine whether the driver is distracted, and to issue a warning to the driver when distraction is detected.
A storage medium storing program code which, when executed, implements the steps of the human-vehicle interaction method described above.
It can be seen from the above technical solutions that the present application discloses a human-vehicle interaction method, device, and storage medium based on human eye gaze. Applied to a vehicle, the method acquires a face image of the driver; processes the face image with a neural network algorithm to obtain the driver's current gaze direction and eye action; and performs an operation on an in-vehicle device according to the current gaze direction and the eye action. In this way, the driver does not need to operate the in-vehicle device by hand, which avoids the adverse effect on driving safety caused by control actions that would otherwise require releasing the steering wheel.
Description of Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a human-vehicle interaction method based on human eye gaze according to an embodiment of the present application;
Fig. 2 is a block diagram of a neural network model according to an embodiment of the present application;
Fig. 3 is a flowchart of another human-vehicle interaction method based on human eye gaze according to an embodiment of the present application;
Fig. 4 is a block diagram of a human-vehicle interaction device based on human eye gaze according to an embodiment of the present application;
Fig. 5 is a block diagram of another human-vehicle interaction device based on human eye gaze according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Embodiment 1
Fig. 1 is a flowchart of a human-vehicle interaction method based on human eye gaze according to an embodiment of the present application.
As shown in Fig. 1, the human-vehicle interaction method of the present application is applied to a manned vehicle, i.e., a vehicle with a driver present: the method operates the equipment in the vehicle based on the driver's eye gaze. The method comprises the following steps.
S1. Acquire a face image of the driver.
That is, during normal driving, a camera device in the vehicle is used to obtain the driver's face image. Specifically, the face image is obtained through the following steps.
First, at least one camera device in the vehicle photographs the driver to obtain an image of the driver. The camera device may be a visible-light camera or an infrared camera, so the resulting image is likewise visible-light or infrared; the advantage of an infrared camera is that it can capture a usable image even when the light inside the vehicle is dim.
Then, a processor crops, perspective-corrects, and otherwise processes the visible-light or infrared image to obtain the driver's face image.
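The capture-and-crop step can be sketched as a plain array operation. This is a minimal illustration, not the patent's implementation: the face bounding box is assumed to come from a separate face detector, and the frame size and box coordinates below are hypothetical.

```python
import numpy as np

def crop_face(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Cut the face region (x, y, w, h) out of a captured frame.

    `frame` is an H x W x C array from either a visible-light or an
    infrared camera; `box` is assumed to come from a face detector.
    """
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

# A dummy 480x640 RGB frame and a hypothetical face box.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
face = crop_face(frame, (200, 100, 128, 128))
print(face.shape)  # (128, 128, 3)
```

Perspective correction would be an additional warp on the cropped region; only the crop is shown here.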
S2. Compute the driver's current gaze direction and eye action with a neural network algorithm.
That is, a pre-trained neural network model processes the driver's face image to obtain the driver's current gaze direction and eye action, where the eye action may be eyeball rotation, eye opening, eye closing, or blinking. As shown in Fig. 2, the neural network model includes three cascaded Hourglass modules, one convolutional layer module, and two branches: a ResNet network and a LeNet two-class network.
Specifically, the current gaze direction and the eye action are obtained as follows.
First, the three cascaded Hourglass modules process the face image to obtain face information.
Then, the convolutional layer module processes the face information to obtain a feature map containing multiple eye keypoints, including the coordinates of each keypoint.
Next, the ResNet network processes the feature map with a direct regression algorithm to obtain the current gaze direction.
While the ResNet network processes the feature map, the LeNet two-class network also processes it to obtain the eye action.
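The data flow above can be sketched as follows. This is only a shape-level illustration of the two-branch pipeline, not the trained model: the Hourglass stack, keypoint head, ResNet regressor, and LeNet classifier are replaced by stand-in functions, and all sizes and decision rules are assumptions.

```python
import numpy as np

def hourglass_stack(face_img: np.ndarray) -> np.ndarray:
    """Stand-in for the three cascaded Hourglass modules: face image -> face features."""
    return face_img.mean(axis=-1, keepdims=True)  # placeholder transform

def keypoint_head(face_feat: np.ndarray, n_points: int = 8) -> np.ndarray:
    """Stand-in for the convolutional layer module: returns (x, y) eye keypoints."""
    h, w = face_feat.shape[:2]
    # placeholder: evenly spaced points instead of learned heatmap peaks
    xs = np.linspace(0, w - 1, n_points)
    ys = np.full(n_points, h // 2)
    return np.stack([xs, ys], axis=1)  # shape (n_points, 2)

def resnet_gaze(keypoints: np.ndarray) -> np.ndarray:
    """Stand-in for the ResNet branch: direct regression to a 2-D gaze direction."""
    c = keypoints.mean(axis=0)
    return np.array([c[0] / 100.0, c[1] / 100.0])  # placeholder regression

def lenet_eye_action(keypoints: np.ndarray) -> str:
    """Stand-in for the LeNet two-class branch: open vs. blink."""
    spread = keypoints[:, 1].std()
    return "open" if spread < 1.0 else "blink"  # placeholder decision rule

face = np.random.rand(96, 96, 3)   # cropped face image (illustrative size)
feat = hourglass_stack(face)
kps = keypoint_head(feat)
gaze = resnet_gaze(kps)            # one branch regresses the gaze direction
action = lenet_eye_action(kps)     # the other branch classifies the eye action
print(gaze.shape, action)
```

The point of the sketch is the topology: both branches consume the same keypoint feature map, so gaze direction and eye action can be computed in parallel from one forward pass of the shared trunk.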
S3. Operate the in-vehicle device according to the current gaze direction and the eye action.
That is, after the driver's current gaze direction and eye action are obtained, the selected target in-vehicle device is operated based on them, so that the device can be operated without the driver's manual involvement. The specific process of this step is as follows.
First, the target in-vehicle device is selected according to the current gaze direction. For example, the vehicle contains multiple in-vehicle devices, such as the main control screen, the windows, and the air conditioner; when the current gaze direction falls on a certain position, e.g., on the main control screen or on a window, the main control screen or the window is selected as the target in-vehicle device.
Then, based on the detected eye action, the target in-vehicle device is controlled to perform the operation matching that eye action.
In this embodiment, the specific process of controlling the target in-vehicle device is as follows.
First, the focus position at which the current gaze direction falls on the device is detected, i.e., the coordinates of the gaze point on the device. For the main control screen, the focus position identifies which button is being looked at; for a window, it identifies whether the gaze falls on the bottom, middle, or top of the window.
Then, the eye action is checked to determine whether it is a predefined action. For example, if blinking is defined as the trigger action, only a blink counts as a valid action for starting the subsequent operation.
Finally, when the eye action is found to be valid, the target in-vehicle device is controlled to perform the operation corresponding to the focus position.
For example, if the main control screen is off and the current gaze direction is found to fall on any point of the screen, the screen is turned on. If the gaze is then found to rest on the play button on the screen and a blink is made, the play button is driven to perform a press action, and the playback operation is executed in response. When the gaze is found to have left the screen for a period of time, the screen is turned off.
For a window, when the current gaze direction is detected at a certain position on the window and the driver is found to blink, the window is driven to open or close so that the upper edge of the glass reaches the position being gazed at, thereby realizing automatic window opening and closing.
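The S3 control flow, selecting the device under the gaze point and then gating the operation on the predefined blink, reduces to simple dispatch logic. The device names, rectangular gaze regions, and coordinate frame below are illustrative assumptions, not values from the patent.

```python
# Hypothetical cabin layout: each device owns a rectangular gaze region
# (x_min, y_min, x_max, y_max) in a shared cabin coordinate frame.
DEVICE_REGIONS = {
    "main_screen": (0, 0, 40, 30),
    "window": (50, 0, 100, 40),
}
TRIGGER_ACTION = "blink"  # only a blink starts the subsequent operation

def select_target(gaze_xy):
    """Step 1: choose the target in-vehicle device from the gaze point."""
    x, y = gaze_xy
    for name, (x0, y0, x1, y1) in DEVICE_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def dispatch(gaze_xy, eye_action):
    """Steps 2-3: verify the trigger action, then run the operation for the focus position."""
    target = select_target(gaze_xy)
    if target is None or eye_action != TRIGGER_ACTION:
        return "no-op"
    if target == "main_screen":
        return "press button at focus " + str(gaze_xy)
    if target == "window":
        # drive the glass edge to the gazed-at height on the window
        return f"move window glass to height {gaze_xy[1]}"

print(dispatch((60, 25), "blink"))  # gaze on window + blink -> move glass
print(dispatch((60, 25), "open"))   # not the trigger action -> no-op
```

Any eye action other than the configured trigger falls through to a no-op, which matches the description's rule that only the predefined action starts an operation.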
It can be seen from the above technical solution that this embodiment provides a human-vehicle interaction method based on human eye gaze. Applied to a vehicle, the method acquires a face image of the driver; processes the face image with a neural network algorithm to obtain the driver's current gaze direction and eye action; and performs an operation on an in-vehicle device according to the current gaze direction and the eye action. In this way, the driver does not need to operate the in-vehicle device by hand, which avoids the adverse effect on driving safety caused by control actions that would otherwise require releasing the steering wheel.
In addition, a specific implementation of the present application further includes the following step, as shown in Fig. 3.
S4. Detect whether the driver is distracted.
That is, after the driver's current gaze direction is obtained, a two-class support vector machine processes the gaze direction to determine whether the driver is distracted. If the driver is found to be distracted, a warning is issued in time to the driver or to other occupants of the vehicle, prompting the driver to concentrate on driving or prompting the others to remind the driver, thereby further ensuring safe driving.
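As a minimal sketch of this two-class decision, a linear decision function stands in for the trained SVM; the weights, bias, and feature encoding (absolute yaw and pitch of the gaze direction) are illustrative assumptions, not learned parameters from the patent.

```python
import numpy as np

# Stand-in for the trained two-class SVM: a linear decision function
# f(x) = w . x + b over gaze-direction features (|yaw|, |pitch| in radians).
# These weights are illustrative, not learned values.
W = np.array([1.0, 1.0])
B = -0.6

def is_distracted(gaze_dir: np.ndarray) -> bool:
    """Positive decision value -> gaze is off the road -> distracted."""
    return float(W @ np.abs(gaze_dir) + B) > 0.0

def check_and_warn(gaze_dir: np.ndarray) -> str:
    if is_distracted(gaze_dir):
        return "WARNING: driver appears distracted"  # alert driver / occupants
    return "ok"

print(check_and_warn(np.array([0.1, 0.1])))  # roughly road-ahead -> ok
print(check_and_warn(np.array([0.9, 0.2])))  # gaze far to the side -> warning
```

A real deployment would fit the separating hyperplane (or a kernel SVM) on labeled attentive/distracted gaze samples; only the decision-time logic is shown here.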
Embodiment 2
Fig. 4 is a block diagram of a human-vehicle interaction device based on human eye gaze according to an embodiment of the present application.
As shown in Fig. 4, the human-vehicle interaction device of the present application is applied to a manned vehicle with a driver present: the device operates the equipment in the vehicle based on the driver's eye gaze. The device includes a face acquisition module 10, an image processing module 20, and an operation execution module 30.
The face acquisition module is configured to acquire the driver's face image.
That is, during normal driving, a camera device in the vehicle obtains the driver's face image. Specifically, the module includes a camera device and a processor.
The camera device photographs the driver to obtain an image of the driver. It may be a visible-light camera or an infrared camera, so the resulting image is likewise visible-light or infrared; the advantage of an infrared camera is that it can capture a usable image even when the light inside the vehicle is dim.
The processor crops, perspective-corrects, and otherwise processes the visible-light or infrared image to obtain the driver's face image.
The image processing module is configured to compute the driver's current gaze direction and eye action with a neural network algorithm.
That is, a pre-trained neural network model processes the driver's face image to obtain the driver's current gaze direction and eye action, where the eye action may be eyeball rotation, eye opening, eye closing, or blinking. As shown in Fig. 2, the neural network model includes three cascaded Hourglass modules, one convolutional layer module, and two branches: a ResNet network and a LeNet two-class network.
Specifically, the components of the model work as follows.
The three cascaded Hourglass modules process the face image to obtain face information.
The convolutional layer module processes the face information to obtain a feature map containing multiple eye keypoints, including the coordinates of each keypoint.
The ResNet network processes the feature map with a direct regression algorithm to obtain the current gaze direction.
While the ResNet network processes the feature map, the LeNet two-class network also processes it to obtain the eye action.
The operation execution module is configured to operate the in-vehicle device according to the current gaze direction and the eye action.
That is, after the driver's current gaze direction and eye action are obtained, the selected target in-vehicle device is operated based on them, so that the device can be operated without the driver's manual involvement. The module includes a target selection unit and a device control unit.
The target selection unit selects the target in-vehicle device according to the current gaze direction. For example, the vehicle contains multiple in-vehicle devices, such as the main control screen, the windows, and the air conditioner; when the current gaze direction falls on a certain position, e.g., on the main control screen or on a window, the main control screen or the window is selected as the target in-vehicle device.
The device control unit controls the target in-vehicle device, based on the detected eye action, to perform the operation matching that eye action.
In this embodiment, the device control unit includes a focus detection subunit, an action judgment subunit, and a control execution subunit.
The focus detection subunit detects the focus position at which the current gaze direction falls on the device, i.e., the coordinates of the gaze point on the device. For the main control screen, the focus position identifies which button is being looked at; for a window, it identifies whether the gaze falls on the bottom, middle, or top of the window.
The action judgment subunit checks the eye action to determine whether it is a predefined action. For example, if blinking is defined as the trigger action, only a blink counts as a valid action for starting the subsequent operation.
The control execution subunit controls the target in-vehicle device to perform the operation corresponding to the focus position when the eye action is found to be valid.
例如,对于主控屏来说,如果主控屏处于熄屏状态,当发现当前视线方向位于该主控屏上任意点时,则点亮主控屏;然后,如果进一步发现当前视线方向位于主控屏上的播放按钮时且做了眨眼动作,则驱动该播放按钮执行按动动作,并进一步相应该按动动作执行播放操作。当发现当前视线方向离开主控屏一段时间后,则熄灭该主控屏。For example, for the main control screen, if the main control screen is in the off-screen state, when it is found that the current line of sight is located at any point on the main control screen, the main control screen will be turned on; When the play button on the screen is controlled and a blinking action is performed, the play button is driven to perform a pressing action, and further a playback operation is performed corresponding to the pressing action. When it is found that the current line of sight has left the main control screen for a period of time, the main control screen will be turned off.
For the car window, when the current gaze direction is detected at a certain position on the window and the driver is found to blink, the window is driven to open or close so that the upper edge of the glass moves to the position on the window where the gaze falls, thereby realizing automatic window opening and closing.
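The three subunits above amount to a dispatch from (focus position, eye action) to a device operation. The following is an illustrative sketch only, not the patent's implementation: the `GazeEvent` type, the device names, and the returned operation strings are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical gaze event: which device the gaze lands on, where, and
# whether the predefined trigger action (a blink) was detected.
@dataclass
class GazeEvent:
    device: str     # e.g. "main_screen" or "window"
    focus: tuple    # (x, y) coordinates of the gaze on the device
    blinked: bool   # True if the trigger action was detected

def control_device(event: GazeEvent, screen_is_on: bool) -> str:
    """Mirror the three subunits: focus detection -> action judgment
    -> control execution. Returns the operation to perform."""
    if event.device == "main_screen":
        if not screen_is_on:
            return "turn_screen_on"  # gaze anywhere on a dark screen wakes it
        if event.blinked:
            # blink "presses" whatever control the gaze rests on
            return f"press_button_at_{event.focus}"
        return "no_op"
    if event.device == "window":
        if event.blinked:
            # drive the glass so its upper edge reaches the gazed height
            return f"move_glass_to_y_{event.focus[1]}"
        return "no_op"
    return "no_op"
```

In this sketch the "valid action" check of the action judgment subunit is reduced to the single `blinked` flag; a real system would also debounce involuntary blinks and time out the screen, as the text describes.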
As can be seen from the above technical solution, this embodiment provides a human-vehicle interaction device based on human eye gaze. The device is applied to a vehicle and operates as follows: a face image of the vehicle's driver is acquired; the face image is processed with a neural network algorithm to obtain the driver's current gaze direction and eye action; and an in-vehicle device is operated according to the current gaze direction and the eye action. In this way, the driver does not need to operate the corresponding in-vehicle device by hand, which avoids the adverse impact on safe driving caused by control actions that would require the driver to release the steering wheel.
In addition, in one specific embodiment of the present application, a distraction judgment module 40 is further included, as shown in FIG. 5.

The distraction judgment module is used to detect whether the driver is distracted.

That is, after the driver's current gaze direction is obtained, it is processed with a binary support vector machine to conclude whether the driver is distracted. If the driver is found to be distracted, a warning is issued in time to the driver or to other occupants of the vehicle, either alerting the driver to concentrate on driving or prompting the other occupants to remind the driver, thereby further ensuring safe driving of the vehicle.
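The binary classifier described above can be sketched as a small linear SVM trained by sub-gradient descent on the hinge loss. This is an assumption-laden illustration: the patent does not specify the feature encoding or kernel, and the gaze features (yaw, pitch in degrees) and toy labels below are invented.

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=500):
    """Hinge-loss SGD for a linear SVM; labels y are in {-1, +1}.
    Returns weight vector w and bias b."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # point violates the margin: step along the hinge sub-gradient
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:
                # correctly classified: only the regularizer shrinks w
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

# Toy gaze directions (yaw, pitch) in degrees: small angles labeled
# attentive (-1), large off-road angles labeled distracted (+1).
X = [(0, 0), (5, -2), (-3, 1), (40, 10), (45, 20), (35, -15)]
y = [-1, -1, -1, +1, +1, +1]
w, b = train_linear_svm(X, y)

def is_distracted(gaze):
    """Classify a (yaw, pitch) gaze direction with the trained SVM."""
    return (sum(wj * xj for wj, xj in zip(w, gaze)) + b) > 0
```

A production system would of course train on real labeled gaze data and likely use richer features (gaze dwell time, head pose), but the decision structure is the same: a binary margin classifier over the current gaze direction.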
Correspondingly, an embodiment of the present application further provides a storage medium storing program code suitable for execution by a processor, the program code being used for:

acquiring a face image of the driver of the vehicle;

processing the face image with a neural network algorithm to obtain the driver's current gaze direction and eye action; and

performing an operation on an in-vehicle device in the vehicle according to the current gaze direction and the eye action.
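The three program-code steps can be read as one pipeline. The sketch below is illustrative only: `camera`, `model`, and `dispatch` are hypothetical stand-ins (any gaze-estimation network returning a gaze direction and an eye action could be plugged in), and the operation strings are invented.

```python
def acquire_face_image(camera):
    """Step 1: acquire a face image of the driver (camera is a stub callable)."""
    return camera()

def estimate_gaze_and_action(model, image):
    """Step 2: run the neural network; returns (gaze_direction, eye_action)."""
    return model(image)

def operate_device(gaze_direction, eye_action, dispatch):
    """Step 3: operate the in-vehicle device that the gaze direction maps to,
    but only when the eye action is the valid trigger (here, a blink)."""
    device = dispatch(gaze_direction)
    if device is None or eye_action != "blink":
        return "no_op"
    return f"{device}:execute"

def run_pipeline(camera, model, dispatch):
    image = acquire_face_image(camera)
    gaze, action = estimate_gaze_and_action(model, image)
    return operate_device(gaze, action, dispatch)
```

A usage example with stubbed components: `run_pipeline(lambda: "frame", lambda img: ((0.1, 0.2), "blink"), lambda gaze: "main_screen")` yields the operation for the main control screen, while a non-trigger eye action yields `"no_op"`.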
For the refined and extended functions of the program code, refer to the description above.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be understood with reference to one another.

Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Accordingly, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device create means for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operational steps is performed on the computer or other programmable terminal device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.

Although preferred embodiments of the present invention have been described, those skilled in the art may make additional changes and modifications to these embodiments once the basic inventive concept is known. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present invention.

Finally, it should also be noted that in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprises a ..." does not preclude the presence of additional identical elements in the process, method, article, or terminal device that includes the element.

The technical solutions provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the descriptions of the above embodiments are intended only to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, based on the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (15)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011001138.5A CN112114671A (en) | 2020-09-22 | 2020-09-22 | Human-vehicle interaction method and device based on human eye sight and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011001138.5A CN112114671A (en) | 2020-09-22 | 2020-09-22 | Human-vehicle interaction method and device based on human eye sight and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112114671A true CN112114671A (en) | 2020-12-22 |
Family
ID=73801425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011001138.5A Pending CN112114671A (en) | 2020-09-22 | 2020-09-22 | Human-vehicle interaction method and device based on human eye sight and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112114671A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113335300A (en) * | 2021-07-19 | 2021-09-03 | 中国第一汽车股份有限公司 | Man-vehicle takeover interaction method, device, equipment and storage medium |
CN113561988A (en) * | 2021-07-22 | 2021-10-29 | 上汽通用五菱汽车股份有限公司 | Voice control method based on sight tracking, automobile and readable storage medium |
CN114327051A (en) * | 2021-12-17 | 2022-04-12 | 北京乐驾科技有限公司 | Human-vehicle intelligent interaction method |
CN114876312A (en) * | 2022-05-25 | 2022-08-09 | 重庆长安汽车股份有限公司 | Vehicle window lifting control system and method based on eye movement tracking |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101344919A (en) * | 2008-08-05 | 2009-01-14 | 华南理工大学 | Eye-tracking method and assistive system for the disabled using the method |
CN202121681U (en) * | 2011-05-31 | 2012-01-18 | 德尔福电子(苏州)有限公司 | Vehicle-mounted eye movement control device |
CN102830797A (en) * | 2012-07-26 | 2012-12-19 | 深圳先进技术研究院 | Man-machine interaction method and system based on sight judgment |
US20130187847A1 (en) * | 2012-01-19 | 2013-07-25 | Utechzone Co., Ltd. | In-car eye control method |
CN103259971A (en) * | 2012-02-16 | 2013-08-21 | 由田信息技术(上海)有限公司 | Eye control device in vehicle and method for eye control |
CN104461005A (en) * | 2014-12-15 | 2015-03-25 | 东风汽车公司 | Vehicle-mounted screen switch control method |
CN105739705A (en) * | 2016-02-04 | 2016-07-06 | 重庆邮电大学 | Human-eye control method and apparatus for vehicle-mounted system |
CN108309311A (en) * | 2018-03-27 | 2018-07-24 | 北京华纵科技有限公司 | A kind of real-time doze of train driver sleeps detection device and detection algorithm |
CN108537161A (en) * | 2018-03-30 | 2018-09-14 | 南京理工大学 | A driving distraction detection method based on visual characteristics |
CN109460780A (en) * | 2018-10-17 | 2019-03-12 | 深兰科技(上海)有限公司 | Safe driving of vehicle detection method, device and the storage medium of artificial neural network |
CN109492514A (en) * | 2018-08-28 | 2019-03-19 | 初速度(苏州)科技有限公司 | A kind of method and system in one camera acquisition human eye sight direction |
CN109508679A (en) * | 2018-11-19 | 2019-03-22 | 广东工业大学 | Method, device, device and storage medium for realizing three-dimensional eye tracking |
CN110110662A (en) * | 2019-05-07 | 2019-08-09 | 济南大学 | Driver eye movement behavioral value method, system, medium and equipment under Driving Scene |
CN110765807A (en) * | 2018-07-25 | 2020-02-07 | 阿里巴巴集团控股有限公司 | Driving behavior analysis method, driving behavior processing method, driving behavior analysis device, driving behavior processing device and storage medium |
-
2020
- 2020-09-22 CN CN202011001138.5A patent/CN112114671A/en active Pending
Non-Patent Citations (2)
Title |
---|
董洪义: "深度学习之PYTorch物体检测实战", vol. 2, 31 March 2020, 机械工业出版社, pages: 258 - 263 * |
黄君浩;贺辉等: "基于LSTM的眼动行为识别及人机交互应用", 计算机系统应用, vol. 29, no. 3, 15 March 2020 (2020-03-15), pages 210 - 216 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112114671A (en) | Human-vehicle interaction method and device based on human eye sight and storage medium | |
US10124648B2 (en) | Vehicle operating system using motion capture | |
CN111709264A (en) | Driver attention monitoring method and device and electronic equipment | |
CN103019524B (en) | Vehicle operating input equipment and the control method for vehicle operating input equipment | |
JP6030430B2 (en) | Control device, vehicle and portable terminal | |
KR101490908B1 (en) | System and method for providing a user interface using hand shape trace recognition in a vehicle | |
US9547373B2 (en) | Vehicle operating system using motion capture | |
CN111566612A (en) | Visual data acquisition system based on posture and sight line | |
JP5187517B2 (en) | Information providing apparatus, information providing method, and program | |
CN105829994A (en) | Device and method for navigating within a menu for controlling a vehicle, and selecting a menu entry from the menu | |
CN112249005A (en) | An interactive method and device for automatic parking of vehicles | |
KR20210120398A (en) | Electronic device displaying image by using camera monitoring system and the method for operation the same | |
CN109703554B (en) | Parking space confirmation method and device | |
CN103869970B (en) | Pass through the system and method for 2D camera operation user interfaces | |
JP6822325B2 (en) | Maneuvering support device, maneuvering support method, program | |
US20160046236A1 (en) | Techniques for automated blind spot viewing | |
CN110682921A (en) | Vehicle interaction method and device, vehicle and machine readable medium | |
CN112486205A (en) | Vehicle-based control method and device | |
CN106557043A (en) | Plant control unit, apparatus control method and recording medium | |
JP2016029532A (en) | User interface | |
CN103945184B (en) | A kind for the treatment of method and apparatus of Vehicular video | |
JP2022086263A (en) | Information processing equipment and information processing method | |
JP7046748B2 (en) | Driver status determination device and driver status determination method | |
WO2019167109A1 (en) | Display control device and display control method for vehicle | |
CN116935358A (en) | Driving state detection method, driving state detection device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20201222 |