CN112181149B - Driving environment recognition method and device and simulated driver
Driving environment recognition method and device and simulated driver
- Publication number
- CN112181149B (application CN202011068538.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- target
- eye movement
- movement information
- visual recognition
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Description
Technical Field
The present invention relates to the technical field of visual recognition, and in particular to a driving environment recognition method and apparatus, and a driving simulator.
Background Art
Perceiving the environment around an unmanned vehicle is a key part of the development of autonomous driving technology. Existing environment perception methods are mainly radar-based; however, radar-based methods are expensive to deploy and easily affected by the environment. Related art has therefore proposed machine-learning-based environment perception methods to improve perception accuracy. However, the inventors found through research that machine-learning-based methods typically increase the number of fully connected layers or global average pooling layers in order to recognize multiple types of objects in the environment; because this enlarges the overall machine learning model used for perception, it reduces the accuracy and efficiency of environment perception.
Summary of the Invention
In view of this, an object of the present invention is to provide a driving environment recognition method and apparatus, and a driving simulator, which can substantially improve the accuracy and efficiency of environment perception.
In a first aspect, an embodiment of the present invention provides a driving environment recognition method, comprising: collecting raw eye movement information of a target subject and scene information of a virtual driving scene, and preprocessing the raw eye movement information to obtain target eye movement information; determining, based on the target eye movement information, the importance levels of objects to be visually recognized in the virtual driving scene; determining, based on the target eye movement information and the scene information, the visual recognition order of the objects to be visually recognized in the virtual driving scene; and recognizing the driving environment of the virtual driving scene based on the importance levels and the visual recognition order.
In one embodiment, the target eye movement information includes pupil diameter data and blink reflex data, and the step of determining the importance levels of the objects to be visually recognized in the virtual driving scene based on the target eye movement information comprises: performing multi-scale geometric analysis on the pupil diameter data to select objects of interest from the objects to be visually recognized in the virtual driving scene, obtaining a set of objects of interest; performing harmonic analysis on the blink reflex data to select focus objects from the objects to be visually recognized in the virtual driving scene, obtaining a set of focus objects; and determining the importance levels of the objects to be visually recognized in the virtual driving scene according to a preset interest weight, a preset focus weight, the set of objects of interest, and the set of focus objects.
In one embodiment, the step of performing multi-scale geometric analysis on the pupil diameter data to select objects of interest from the objects to be visually recognized in the virtual driving scene comprises: applying a wavelet transform to the pupil diameter data to obtain a first data set; determining a first interval of the first data set that contains peak points, and identifying the target pupil diameter range corresponding to the first interval; and selecting objects of interest from the objects to be visually recognized in the virtual driving scene based on the degree of interest corresponding to the target pupil diameter range.
In one embodiment, the step of performing harmonic analysis on the blink reflex data to select focus objects from the objects to be visually recognized in the virtual driving scene comprises: applying a Fourier transform to the blink reflex data to obtain a second data set; determining a second interval of the second data set that contains amplitude points, and identifying the target blink reflex range corresponding to the second interval; and selecting focus objects from the objects to be visually recognized in the virtual driving scene based on the degree of concentration corresponding to the target blink reflex range.
In one embodiment, the preset interest weight w1 is set as a function of the pupil diameter data and the per-level standard pupil diameter values, where Ci denotes the i-th pupil diameter datum, S1j denotes the standard pupil diameter value of the j-th importance level, and n denotes the total number of pupil diameter data; the preset focus weight w2 is set as a function of the blink reflex data and the per-level standard blink reflex values, where Dp denotes the p-th blink reflex datum, S2q denotes the standard blink reflex value of the q-th importance level, and m denotes the total number of blink reflex data.
In one embodiment, the step of determining, based on the target eye movement information and the scene information, the visual recognition order of the objects to be visually recognized in the virtual driving scene comprises: obtaining gaze point data based on the target eye movement information and the scene information; determining the gaze point data that satisfy preset conditions as target visual recognition data, the preset conditions including a fixation duration condition and a fixation angle condition; and determining, from the correspondence between recognition times and the target visual recognition data, the visual recognition order of the objects to be visually recognized that correspond to the target visual recognition data.
In one embodiment, the step of preprocessing the raw eye movement information to obtain the target eye movement information comprises: preprocessing the raw eye movement information using a Lagrange interpolation algorithm and empirical mode decomposition (EMD) to obtain the target eye movement information.
In a second aspect, an embodiment of the present invention further provides a driving environment recognition apparatus, comprising: a data acquisition module, configured to collect raw eye movement information of a target subject and scene information of a virtual driving scene, and to preprocess the raw eye movement information to obtain target eye movement information; a level determination module, configured to determine, based on the target eye movement information, the importance levels of objects to be visually recognized in the virtual driving scene; an order determination module, configured to determine, based on the target eye movement information and the scene information, the visual recognition order of the objects to be visually recognized in the virtual driving scene; and an environment recognition module, configured to recognize the driving environment of the virtual driving scene based on the importance levels and the visual recognition order.
In a third aspect, an embodiment of the present invention further provides a driving simulator, comprising a simulator screen, a processor, and a memory; the simulator screen is configured to display the virtual driving scene, and the memory stores a computer program that, when run by the processor, performs the method of any one of the implementations of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium for storing the computer software instructions used by the method of any one of the implementations of the first aspect.
In the driving environment recognition method and apparatus and the driving simulator provided by the embodiments of the present invention, raw eye movement information of a target subject and scene information of a virtual driving scene are first collected, and the raw eye movement information is preprocessed to obtain target eye movement information; the importance levels of the objects to be visually recognized in the virtual driving scene are determined based on the target eye movement information; the visual recognition order of those objects is determined based on the target eye movement information and the scene information; and the driving environment of the virtual driving scene is then recognized based on the importance levels and the visual recognition order. This is a new approach to environment perception: the embodiments determine the importance levels of the objects to be visually recognized from the target eye movement information, determine their visual recognition order from the target eye movement information and the scene information, and recognize the driving environment from the recognition order and the importance grading, which can effectively improve the recognition accuracy and efficiency of the driving environment.
Additional features and advantages of the invention will be set forth in the description that follows, and in part will be apparent from the description or may be learned by practice of the invention. The objects and other advantages of the invention are realized and attained by the structure particularly pointed out in the description, the claims, and the accompanying drawings.
To make the above objects, features, and advantages of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To illustrate the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a driving environment recognition method provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a driving simulator provided by an embodiment of the present invention;
FIG. 3 is a schematic process diagram of a driving environment recognition method provided by an embodiment of the present invention;
FIG. 4 is a structural framework diagram of a visual search strategy model provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a driving environment recognition apparatus provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another driving simulator provided by an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described below clearly and completely with reference to the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
At present, existing environment perception methods suffer from low perception accuracy and low perception efficiency. For example, the related art proposes a vehicle-mounted radar multi-target recognition method based on FMCW (Frequency Modulated Continuous Wave): the vehicle radar continuously transmits a frequency-modulated continuous wave ahead, performs an FFT (Fast Fourier Transform) on the beat signal to extract and match the dominant frequencies, and computes target speed and distance, reducing the false-alarm rate for spurious targets. The related art also proposes an intelligent multi-target comprehensive recognition method based on data mining, which applies data mining technology to the field of target recognition and puts forward two recognition approaches, one based on target feature knowledge and one based on target association knowledge, providing automated and intelligent means for multi-target recognition. The related art further proposes a vehicle surrounding environment perception system and control method, in which a main control unit is connected to a millimeter-wave radar unit and a visual perception module via a CAN bus, as well as a multi-lidar fusion recognition method for intelligent vehicles based on target features, which matches targets by similarity and tracks targets to obtain and correct their motion features, enhancing target recognition capability. All of the above technologies, however, suffer from low perception accuracy or low perception efficiency. To address this, embodiments of the present invention provide a driving environment recognition method and apparatus and a driving simulator that can substantially improve the accuracy and efficiency of environment perception.
To facilitate understanding of this embodiment, a driving environment recognition method disclosed in an embodiment of the present invention is first described in detail. Referring to the schematic flowchart of a driving environment recognition method shown in FIG. 1, the method mainly includes the following steps S102 to S108:
Step S102: collect raw eye movement information of a target subject and scene information of a virtual driving scene, and preprocess the raw eye movement information to obtain target eye movement information. The target subject is the driver, and the raw eye movement information is eye movement information that has not been preprocessed, such as pupil diameter data and blink reflex data; this raw information may contain missing or abnormal points. The scene information may be image data of the virtual driving scene, and the target eye movement information is the eye movement information obtained after preprocessing, which may likewise include pupil diameter data, blink reflex data, and so on. In one embodiment, a camera for collecting the raw eye movement information of the target subject and a camera for collecting the scene information of the virtual driving scene may be provided separately, and the data collected by each camera acquired respectively.
Step S104: determine, based on the target eye movement information, the importance levels of the objects to be visually recognized in the virtual driving scene. The virtual driving scene contains multiple objects to be visually recognized, such as pedestrians, motor vehicles, non-motor vehicles, and lane markings. The importance level characterizes how important an object is; for example, five levels may be used: key targets, important targets, moderately important targets, general targets, and unimportant targets. In one embodiment, multiple importance levels may be defined in advance, and the level of each piece of eye movement information determined from the cognitive-neural and cognitive-psychological responses it reflects.
Step S106: determine, based on the target eye movement information and the scene information, the visual recognition order of the objects to be visually recognized in the virtual driving scene. The visual recognition order characterizes the sequence in which the target subject observes the objects in the virtual driving scene. In one embodiment, the recognition order is determined jointly from the target eye movement information, the scene information, and the recognition times. Optionally, target visual recognition data may be determined from the target eye movement information and the scene information, and by establishing the correspondence between recognition times and the target visual recognition data, the recognition order of the corresponding objects can be determined from the chronological order of the recognition times.
Step S108: recognize the driving environment of the virtual driving scene based on the importance levels and the visual recognition order. In one embodiment, the importance levels and the visual recognition order may be used to characterize the driving environment of the virtual driving scene.
The driving environment recognition method provided by the embodiments of the present invention is a new approach to environment perception: the importance levels of the objects to be visually recognized are determined from the target eye movement information, their visual recognition order is determined from the target eye movement information and the scene information, and the driving environment is recognized from the recognition order and the importance grading, effectively improving the recognition accuracy and efficiency of the driving environment.
In one embodiment, the above driving environment recognition method may be executed by a driving simulator, an experimental apparatus for the virtual driving scene. A number of experienced drivers are selected as target subjects; each subject wears a head-mounted eye tracker and sits in the driving simulation cockpit. Referring to the schematic structural diagram of a driving simulator shown in FIG. 2, the simulator includes a driving simulation cockpit and a simulator screen. The simulator screen can display a virtual driving scene with multiple objects to be visually recognized (which may also be called the driving environment) according to the experimental settings, the cockpit simulates the driving operation environment, and the screen is placed a preset distance (such as 5 to 8 meters) directly in front of the cockpit. In practice, the head-mounted eye tracker includes a scene camera, which collects the scene information, and an eye camera, which collects the raw eye movement information. The target subject is the source of the raw eye movement information. When selecting drivers, the embodiments of the present invention may use a driving experience of 50,000 km of mileage as the standard, with the number of drivers N >= 10, a male-to-female ratio of about 3:1, and all drivers physically and mentally healthy.
Optionally, the driving simulator may be configured in advance to present the virtual driving scene and to synchronously collect the scene information of the virtual driving scene and the raw eye movement information of the target subject. Here the virtual driving scene refers to a driving environment constructed with a single object to be recognized, as well as driving environments with multiple objects to be recognized under different driving tasks.
Since the collected raw eye movement information and scene information may contain abnormal data such as missing or abnormal points, the embodiments of the present invention preprocess the raw eye movement information and scene information. Optionally, the raw eye movement information may be preprocessed (e.g., outlier removal, compensation, and denoising) using the Lagrange interpolation algorithm and empirical mode decomposition to obtain the target eye movement information, and the preprocessed target eye movement information is then used for analysis. The raw eye movement information consists of the gaze point position data, pupil diameter data, and blink reflex data of the target subject at different times, namely gaze point position data (X0,Y0)={(X1,Y1),(X2,Y2),…,(Xi,Yi),…,(Xn,Yn)}, pupil diameter data D0=(D1,D2,…,Di,…,Dn), and blink reflex data B0=(B1,B2,…,Bi,…,Bn).
The collected scene information and raw eye movement information are processed as follows: (1) Determine the missing values and outliers in the data. (2) At the missing and abnormal points, use the Lagrange interpolation formula, in its standard form $L(x)=\sum_{a=1}^{n} y_a \prod_{b=1,\,b\neq a}^{n} \frac{x-x_b}{x_a-x_b}$, to obtain approximate values at the corresponding points, where n denotes the total number of gaze point data, $x_a$ denotes the a-th gaze point datum, and $x_b$ denotes the b-th gaze point datum. (3) Use EMD (Empirical Mode Decomposition) to find the local maxima and minima in the pupil diameter data D0 and the blink reflex data B0, compute the mean of the two, and then denoise the data based on this mean. Here x(t) is the original signal; in the embodiments of the present invention, the original signal may be the gaze point data. After the above preprocessing, the target eye movement information consists of gaze point position data (X1,Y1), pupil diameter data D1, and blink reflex data B1.
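The preprocessing can be illustrated with a short sketch. The Python code below is an assumed implementation, not the patent's own: it fills missing or abnormal samples by local Lagrange interpolation and then applies one EMD-style sifting step, subtracting the mean of the upper and lower extrema envelopes to suppress noise. The outlier threshold, node count, and linear envelopes are illustrative choices.

```python
import numpy as np
from scipy.interpolate import lagrange

def fill_missing(x, bad_mask, window=3):
    """Replace flagged samples with Lagrange-interpolated estimates."""
    x = x.astype(float)
    idx = np.arange(len(x))
    good = idx[~bad_mask]
    for i in idx[bad_mask]:
        # use the 2*window nearest valid samples as interpolation nodes
        nodes = good[np.argsort(np.abs(good - i))][: 2 * window]
        x[i] = lagrange(nodes, x[nodes])(i)
    return x

def emd_denoise_step(x):
    """One EMD sifting step: subtract the mean of the extrema envelopes."""
    t = np.arange(len(x))
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return x                                  # too few extrema for envelopes
    upper = np.interp(t, maxima, x[maxima])       # upper envelope (linear)
    lower = np.interp(t, minima, x[minima])       # lower envelope (linear)
    return x - (upper + lower) / 2.0

# usage: D0 holds raw pupil diameters (mm); NaN marks missing samples
D0 = np.array([3.1, 3.2, np.nan, 3.4, 9.9, 3.3, 3.2, 3.1, 3.0, 3.2])
bad = ~np.isfinite(D0) | (np.nan_to_num(D0) > 8.0)  # missing or abnormal
D1 = emd_denoise_step(fill_missing(D0, bad))
```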
In practice, the target eye movement information includes pupil diameter data and blink reflex data. On this basis, an embodiment of the present invention provides a specific implementation for determining the importance levels of the objects to be visually recognized in the virtual driving scene based on the target eye movement information; see steps 1 to 3 below:
Step 1: perform multi-scale geometric analysis on the pupil diameter data to select objects of interest from the objects to be visually recognized in the virtual driving scene, obtaining a set of objects of interest. In one embodiment, the objects of interest may be determined as in steps 1.1 to 1.3 below:
Step 1.1: apply a wavelet transform to the pupil diameter data to obtain a first data set. In a specific implementation, the sequential index of each pupil diameter in the data D1 is used as the label of each sampling point, giving the set of all sampling points U=(1,2,…,i,…,n); the pupil diameter data D1 are then wavelet-transformed to obtain the first data set Q=(Q1,Q2,…,Qi,…,Qn), in which the sampling-point data indices are L=(L1,L2,…,Li,…,Ln). The wavelet transform takes its standard continuous form, $X(\alpha,\tau)=\frac{1}{\sqrt{\alpha}}\int_{-\infty}^{\infty} x(t)\,\psi\!\left(\frac{t-\tau}{\alpha}\right)dt$, where α controls the scale, τ controls the position, α and τ are arbitrary real numbers, and ψ is the mother wavelet. The first data set may also be called the initial pupil diameter data to be matched.
Step 1.2: determine a first interval of the first data set that contains peak points, and identify the target pupil diameter range corresponding to the first interval. In practice, the first interval containing the peak points can be determined from the first data set, and the pupil diameter range corresponding to this first interval is taken as the target pupil diameter range.
Step 1.3: select objects of interest from the objects to be visually recognized in the virtual driving scene based on the degree of interest corresponding to the target pupil diameter range. In one embodiment, the peak point of each pupil-diameter wave crest in the first data set Q is identified, and the set of sampling-point data indices in Q of the target pupil diameters corresponding to the peak points is denoted I=(I1,I2,…,Ii,…,In). The embodiments of the present invention obtain the variation range of the pupil diameter from the set I, and treat the objects viewed when the pupil diameter is large as objects of interest. In an optional implementation, the range of the pupil diameter data in the target eye movement information may be determined and divided into multiple intervals (such as five intervals), each interval corresponding to a different degree of interest, so that objects of interest are selected from the objects to be visually recognized in descending order of the degree of interest.
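As a concrete illustration of steps 1.1 to 1.3, the sketch below applies a continuous wavelet transform to the cleaned pupil series, locates peak points, and grades the pupil diameter at each peak into the five interest classes quoted later in this description. The Morlet mother wavelet, the scale set, and the peak detector are assumptions; the patent does not fix them.

```python
import numpy as np
import pywt                       # PyWavelets
from scipy.signal import find_peaks

def interest_level(d_mm):
    """Five interest classes from pupil diameter (mm), per the description."""
    if d_mm < 2.5: return 0       # no interest
    if d_mm < 4.0: return 1       # general interest
    if d_mm < 5.5: return 2       # moderate interest
    if d_mm < 7.0: return 3       # important interest
    return 4                      # extreme interest

def interest_peaks(D1, scales=np.arange(1, 33)):
    coef, _ = pywt.cwt(D1, scales, 'morl')   # first data set Q
    energy = np.abs(coef).sum(axis=0)        # aggregate over scales
    peaks, _ = find_peaks(energy)            # candidate peak indices (set I)
    return [(int(i), interest_level(D1[i])) for i in peaks]

# sampling instants graded 3 or 4 point to objects of interest at those times
```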
Step 2: perform harmonic analysis on the blink reflex data to select focus objects from the objects to be visually recognized in the virtual driving scene, obtaining a set of focus objects. In one embodiment, the focus objects may be determined as in steps 2.1 to 2.3 below:
Step 2.1: apply a Fourier transform to the blink reflex data to obtain a second data set. In a specific implementation, the sequential index of each blink reflex value in the data B1 is used as the label of each sampling point, giving the set of all sampling points F=(1,2,…,i,…,n); the blink reflex data B1 are then Fourier-transformed to obtain the second data set E=(E1,E2,…,Ei,…,En), in which the sampling-point data indices are P=(P1,P2,…,Pi,…,Pn). In the Fourier series form, $x(t)=\frac{a_0}{2}+\sum_{n=1}^{\infty}\left(a_n\cos\frac{2\pi n t}{T}+b_n\sin\frac{2\pi n t}{T}\right)$, where $a_n$ and $b_n$ are the amplitudes of the real frequency components, T is the period of the function, and n is an integer with n > 0. The second data set may also be called the initial blink reflex data set to be matched.
Step 2.2: determine a second interval of the second data set that contains amplitude points, and identify the target blink reflex range corresponding to the second interval. In practice, the second interval containing the amplitude points can be determined from the second data set, and the blink reflex range corresponding to this second interval is taken as the target blink reflex range.
Step 2.3: select focus objects from the objects to be visually recognized in the virtual driving scene based on the degree of concentration corresponding to the target blink reflex range. The amplitude points of the blink reflex data in the second data set are identified, and the set of their sampling-point data indices is denoted R=(R1,R2,…,Ri,…,Rn). The embodiments of the present invention obtain the amplitude variation range of the blink reflex from the set R, and treat the objects corresponding to small blink reflex values as focus objects. In an optional implementation, the range of the blink reflex data in the target eye movement information may be determined and divided into multiple intervals (such as five intervals), each interval corresponding to a different degree of concentration, so that focus objects are selected from the objects to be visually recognized in descending order of the degree of concentration.
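Analogously, a sketch of steps 2.1 to 2.3 under the same caveats: the real FFT of the blink series yields the amplitude spectrum (the second data set E), and blink rates are graded into the five concentration classes quoted later, with lower rates meaning stronger concentration. The sampling setup is an assumption for illustration.

```python
import numpy as np

def concentration_level(blinks_per_min):
    """Five concentration classes from blink rate, per the description."""
    if blinks_per_min > 16: return 0   # unfocused
    if blinks_per_min > 14: return 1   # generally focused
    if blinks_per_min > 12: return 2   # moderately focused
    if blinks_per_min > 10: return 3   # highly focused
    return 4                           # extremely focused

def blink_spectrum(B1, fs=100.0):
    """Amplitude spectrum of the blink series (harmonic analysis)."""
    spec = np.fft.rfft(B1 - np.mean(B1))
    freqs = np.fft.rfftfreq(len(B1), d=1.0 / fs)
    amps = 2.0 * np.abs(spec) / len(B1)   # amplitude per harmonic component
    return freqs, amps

# usage: grade a short series of blink-rate estimates (blinks/min)
B1 = np.array([15.0, 13.0, 12.0, 11.0, 9.0, 10.0, 12.0, 14.0])
levels = [concentration_level(b) for b in B1]
freqs, amps = blink_spectrum(B1)          # fs here is the tracker rate
```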
Step 3: determine the importance levels of the objects to be visually recognized in the virtual driving scene according to the preset interest weight, the preset focus weight, the set of objects of interest, and the set of focus objects. Optionally, the larger the pupil diameter, the more likely the corresponding object is an object of interest; the smaller the blink reflex value, the more likely the corresponding object is a focus object; and an object that the target subject is both interested in and focused on is considered an object of higher importance. To facilitate understanding of step 3, an embodiment of the present invention provides an exemplary implementation for determining the importance levels. The pupil diameter data and the corresponding psychological states are divided into five classes: a pupil diameter < 2.5 mm indicates an object of no interest, 2.5-4 mm an object of general interest, 4-5.5 mm an object of moderate interest, 5.5-7 mm an object of important interest, and > 7 mm an object of extreme interest. The blink reflex data, i.e., blinking movements, and the corresponding cognitive-neural states are likewise divided into five classes: a blink rate > 16 blinks/min indicates an unfocused state, 14-16 blinks/min a generally focused state, 12-14 blinks/min a moderately focused state, 10-12 blinks/min a highly focused state, and < 10 blinks/min an extremely focused state. The clustering weight method is then used to compute weights from the data of sets Q and E together with the level standards: let the measured pupil diameter value be Qi and the measured blink reflex value be Ei, and let the standard value of the j-th level of the i-th indicator be Sij, with i = 1, 2 and j = 1, 2, 3, 4, 5. Let Wij be the weight of the i-th indicator at the j-th level; then the preset interest weight w1 (that is, the pupil diameter weight) is expressed in terms of Ci, the i-th pupil diameter datum, S1j, the standard pupil diameter value of the j-th importance level, and n, the total number of pupil diameter data; and the preset focus weight w2 (that is, the blink reflex weight) is expressed in terms of Dp, the p-th blink reflex datum, S2q, the standard blink reflex value of the q-th importance level, and m, the total number of blink reflex data. The importance levels of the objects to be visually recognized in multi-object environments under different driving tasks are thereby divided into five levels: key targets, important targets, moderately important targets, general targets, and unimportant targets.
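The w1 and w2 formulas appear in the original only as figures that are not reproduced here, so the sketch below is a hypothetical rendering of the clustering weight method named above: each measured value is weighted toward the level standards by reciprocal distance, normalized over the five levels, and averaged over samples. The standard values S1 and S2 are illustrative band midpoints, not values from the patent.

```python
import numpy as np

def clustering_weights(measured, standards, eps=1e-9):
    """Hypothetical clustering-weight form: weight proportional to the
    reciprocal distance between each measurement and each level standard."""
    d = np.abs(measured[:, None] - standards[None, :]) + eps  # shape (n, 5)
    w = 1.0 / d
    w /= w.sum(axis=1, keepdims=True)   # normalize over the five levels
    return w.mean(axis=0)               # aggregate over all samples

# assumed per-level standards: midpoints of the five bands quoted above
S1 = np.array([2.0, 3.25, 4.75, 6.25, 7.5])    # pupil diameter (mm)
S2 = np.array([17.0, 15.0, 13.0, 11.0, 9.0])   # blink rate (blinks/min)

pupil = np.array([3.1, 5.6, 7.2, 4.4])         # measured C_i (example)
blink = np.array([15.0, 11.5, 9.0, 13.0])      # measured D_p (example)
w1 = clustering_weights(pupil, S1)             # preset interest weights
w2 = clustering_weights(blink, S2)             # preset focus weights
```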
For step S106 above, the visual recognition order of the objects to be visually recognized in the virtual driving scene may be determined from the target eye movement information and the scene information as in steps a to c below:
Step a: obtain gaze point data based on the target eye movement information and the scene information.
Step b: determine the gaze point data that satisfy preset conditions as target visual recognition data. The preset conditions include a fixation duration condition and a fixation angle condition. In one embodiment, the fixation duration condition may be that the fixation duration exceeds a preset time threshold, and the fixation angle condition may be that the fixation angle lies within a preset angle range. The target visual recognition data must first be distinguished from ordinary saccadic gaze points: the target visual recognition data are the gaze point positions obtained when the eye movement speed is below 5 deg/s, the visual angle deviation is within a*a with a <= 0.41°, and the fixation duration exceeds 100 ms. Here the fixation duration is the time during which the center of the visual axis remains unchanged, that is, the time used to extract information from the fixated object; the fixation angle is the angle of eyeball rotation in the horizontal and vertical directions relative to the head. Taking (0,0) as the origin, namely the intersection of the perpendicular from the eyeball to the vertical plane with that plane, the angle a between the vertical plane and the projection onto the horizontal plane of the line connecting the eyeball and the fixation point is the angle of the gaze point position in the horizontal plane.
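A sketch of the fixation filter in step b, using the thresholds quoted above (angular velocity below 5 deg/s, visual angle deviation a <= 0.41°, duration above 100 ms) and assuming the experiments' 100 Hz sampling rate noted in step c. Treating the deviation as half the peak-to-peak spread of the gaze angles within a candidate span is an interpretation, not the patent's stated definition.

```python
import numpy as np

FS = 100.0                       # eye tracker sampling rate (Hz)
MIN_SAMPLES = int(0.100 * FS)    # fixation duration > 100 ms

def fixation_spans(az, el):
    """az, el: horizontal/vertical gaze angles (deg) per sample.
    Returns (start, end) index spans of target visual recognition data."""
    vel = np.hypot(np.diff(az), np.diff(el)) * FS   # angular speed, deg/s
    slow = np.r_[vel < 5.0, False]                  # pad to original length
    spans, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i                               # span opens
        elif not s and start is not None:
            seg = slice(start, i)
            dev = max(np.ptp(az[seg]), np.ptp(el[seg])) / 2.0
            if (i - start) > MIN_SAMPLES and dev <= 0.41:
                spans.append((start, i))            # keep as fixation
            start = None
    return spans
```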
Step c: determine, from the correspondence between the recognition times and the target visual recognition data, the visual recognition order of the objects corresponding to the target visual recognition data. Since the experiments sample at a frequency of 100 Hz, in complex environments with different driving tasks and multiple objects to be recognized, MATLAB image processing is used to extract the set of target visual recognition data (X2,Y2) from the set of all gaze points (X1,Y1); the position set of the target visual recognition data in the time domain is (Xt,Yt). A correspondence matrix between the recognition times and the positions of the target gaze points is established, and the corresponding target recognition order for the actual target gaze point positions is then obtained from the time entries of this matrix.
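Finally, a sketch of step c under an assumed data layout: once each fixation carries a timestamp and the identity of the object it fell on (obtained by intersecting the gaze position with the scene image), sorting by time and keeping each object's first visit reproduces the recognition order that the time correspondence matrix encodes.

```python
def recognition_order(timed_fixations):
    """timed_fixations: iterable of (t_seconds, object_id) pairs."""
    seen, order = set(), []
    for t, obj in sorted(timed_fixations):
        if obj not in seen:           # keep the first visit to each object
            seen.add(obj)
            order.append(obj)
    return order

# e.g. [(0.42, 'pedestrian'), (0.95, 'vehicle'), (1.30, 'pedestrian')]
# -> ['pedestrian', 'vehicle']
```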
To facilitate understanding of the recognition method provided by the above embodiments, an embodiment of the present invention provides another driving environment recognition method; see the process diagram of a driving environment recognition method shown in FIG. 3. Multi-scale geometric analysis is performed on the pupil diameter data and harmonic analysis on the blink reflex data, and the two types of indicators, as reflected in cognitive neuroscience and cognitive psychology, are combined to obtain the importance levels of the objects to be visually recognized. In addition, the gaze point data are partitioned to determine the target visual recognition data at different times and the objects corresponding to them, and analyzing those objects yields their visual recognition order. Finally, a visual search strategy model is built using the Petri net discrete modeling method; this model recognizes the driving environment based on the method provided by the above embodiments, taking the driving task as input and outputting the importance levels and the visual recognition order. An embodiment of the present invention further provides an implementation for building the visual search strategy model with the Petri net discrete modeling method: specifically, for driving environments with a single object to be recognized and multi-object driving environments under different driving tasks, the processed pupil diameter data Q and blink reflex data E, together with the correspondence between time and the positions of the target visual recognition data, are used to construct a visual search strategy model with the driving task as input and the importance levels and recognition order as output.
To facilitate understanding of the visual search strategy model, an embodiment of the present invention provides a structural framework diagram of the model, as shown in FIG. 4. Let X={D0,B0,(X0,Y0),D1,B1,(X1,Y1),Q,E,(X2,Y2),I,R,(Xt,Yt),C,S,(Xi,Yi)}, where X denotes all the data used, including the raw eye movement information, the scene information, and the processed data: D0 is the pupil diameter data before preprocessing, B0 the blink reflex data before preprocessing, (X0,Y0) the gaze point position data before preprocessing, D1 the preprocessed pupil diameter data, B1 the preprocessed blink reflex data, (X1,Y1) the preprocessed gaze point position data, Q the first data set, E the second data set, (X2,Y2) the set of target visual recognition data, I the set of sampling-point data indices in the first data set Q of the target pupil diameters corresponding to the peak points, R the set of sampling-point data indices of the amplitude points, (Xt,Yt) the position set of the target visual recognition data in the time domain, C the importance level, and S the visual recognition order.
The set of process transitions represented in FIG. 4 is Σ={T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13}, where T1 is the data acquisition process (collecting the eye movement information and the scene information); T2 is the preprocessing of the pupil diameter data (including outlier removal, compensation, and denoising); T3 is the preprocessing of the blink reflex data; T4 is the preprocessing of the gaze point data; T5 is the multi-scale transformation of the pupil diameter data; T6 is the harmonic analysis of the blink reflex data; T7 is the MATLAB data extraction process; T8 is the extraction of the peak points and the search for the intervals containing them; T9 is the extraction of the amplitude points and the search for the intervals containing them; T10 is the position combination process in the time domain; T11 is the computation of the weight of the pupil diameter indicator; T12 is the computation of the weight of the blink reflex indicator; and T13 is the construction of the correspondence matrix of target recognition positions in the time domain.
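For readers unfamiliar with Petri net execution, the toy runner below illustrates the firing semantics such a model relies on: places hold tokens for available data items, and a transition fires when all of its input places are marked, consuming those tokens and marking its outputs. The net fragment, place names, and one-shot firing rule are simplifications for illustration; the full net of FIG. 4 is not reproduced here.

```python
def run_petri(marking, transitions):
    """marking: set of marked places; transitions: list of (name, ins, outs).
    Fires each enabled transition once and returns the firing order."""
    fired, changed = [], True
    while changed:
        changed = False
        for name, ins, outs in transitions:
            if name not in fired and set(ins) <= marking:
                marking = (marking - set(ins)) | set(outs)
                fired.append(name)
                changed = True
    return fired, marking

# a fragment of the flow described above (names follow the text)
transitions = [
    ("T1",  ["task"], ["D0", "B0", "XY0"]),   # data acquisition
    ("T2",  ["D0"],   ["D1"]),                # pupil preprocessing
    ("T3",  ["B0"],   ["B1"]),                # blink preprocessing
    ("T4",  ["XY0"],  ["XY1"]),               # gaze preprocessing
    ("T5",  ["D1"],   ["Q"]),                 # wavelet transform
    ("T6",  ["B1"],   ["E"]),                 # harmonic analysis
    ("T11", ["Q"],    ["w1"]),                # pupil-diameter weight
    ("T12", ["E"],    ["w2"]),                # blink-reflex weight
]
order, final_marking = run_petri({"task"}, transitions)
```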
In summary, the driving environment recognition method provided by the embodiments of the present invention addresses complex driving environments with multiple driving tasks and multiple objects to be visually recognized. The psychological state reflected by the driver's eye movement information serves as the basis for grading the importance of recognition targets, in line with traffic safety principles grounded in traffic psychology, and the recognition order is determined by processing the gaze point positions at different times. A visual search strategy model for complex environments is built with the Petri net system modeling method, overcoming the low perception accuracy and efficiency of other intelligent vehicle environment perception methods, improving the selectivity of multi-target search in complex environments, and reducing the number of targets that must be continuously tracked, thereby shortening the perception time required in complex environments and making intelligent vehicles safer and more reliable.
For the driving environment recognition method provided by the above embodiments, an embodiment of the present invention further provides a driving environment recognition apparatus. Referring to the schematic structural diagram of a driving environment recognition apparatus shown in FIG. 5, the apparatus mainly includes the following parts:
a data acquisition module 502, configured to collect raw eye movement information of a target subject and scene information of a virtual driving scene, and to preprocess the raw eye movement information to obtain target eye movement information;
a level determination module 504, configured to determine, based on the target eye movement information, the importance levels of the objects to be visually recognized in the virtual driving scene;
an order determination module 506, configured to determine, based on the target eye movement information and the scene information, the visual recognition order of the objects to be visually recognized in the virtual driving scene;
an environment recognition module 508, configured to recognize the driving environment of the virtual driving scene based on the importance levels and the visual recognition order.
The driving environment recognition apparatus provided by the embodiments of the present invention embodies a new approach to environment perception: the importance levels of the objects to be visually recognized are determined from the eye movement information, their visual recognition order is determined from the eye movement information and the scene information, and the driving environment is recognized from the recognition order and the importance grading, effectively improving the recognition accuracy and efficiency of the driving environment.
In one embodiment, the target eye movement information includes pupil diameter data and blink reflex data, and the level determination module 504 is further configured to: perform multi-scale geometric analysis on the pupil diameter data to select objects of interest from the objects to be visually recognized in the virtual driving scene, obtaining a set of objects of interest; perform harmonic analysis on the blink reflex data to select focus objects from the objects to be visually recognized, obtaining a set of focus objects; and determine the importance levels of the objects to be visually recognized in the virtual driving scene from a preset interest weight, a preset focus weight, the set of objects of interest, and the set of focus objects.
In one embodiment, the level determination module 504 is further configured to: apply a wavelet transform to the pupil diameter data to obtain a first data set; determine a first interval in the first data set that contains a peak point, and identify the target pupil diameter range corresponding to the first interval; and select objects of interest from the objects to be visually recognized in the virtual driving scene according to the degree of interest corresponding to the target pupil diameter range.
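A minimal sketch of this wavelet step, assuming the PyWavelets and NumPy libraries; the wavelet family, decomposition level, and window size around the peak are illustrative assumptions, since the patent does not fix these parameters.

```python
import numpy as np
import pywt

def target_pupil_range(pupil_diameters, wavelet="db4", level=3, half_window=25):
    """Wavelet-smooth the pupil diameter samples, locate the interval
    around the peak, and return the pupil diameter range it spans."""
    coeffs = pywt.wavedec(pupil_diameters, wavelet, level=level)
    # Keep only the approximation coefficients to suppress measurement noise.
    smooth = pywt.waverec(
        [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], wavelet
    )[: len(pupil_diameters)]
    peak = int(np.argmax(smooth))
    lo = max(0, peak - half_window)
    hi = min(len(smooth), peak + half_window)
    return smooth[lo:hi].min(), smooth[lo:hi].max()
```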
In one embodiment, the level determination module 504 is further configured to: apply a Fourier transform to the blink reflex data to obtain a second data set; determine a second interval in the second data set that contains an amplitude peak, and identify the target blink reflex range corresponding to the second interval; and select focus objects from the objects to be visually recognized in the virtual driving scene according to the degree of concentration corresponding to the target blink reflex range.
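A minimal sketch of such a harmonic analysis, assuming NumPy only; the sampling rate and the decision to skip the DC bin are illustrative assumptions.

```python
import numpy as np

def dominant_blink_band(blink_signal, fs=60.0):
    """Return the frequency and amplitude of the dominant harmonic
    in the blink-reflex signal."""
    spectrum = np.fft.rfft(blink_signal)
    freqs = np.fft.rfftfreq(len(blink_signal), d=1.0 / fs)
    amplitudes = np.abs(spectrum) / len(blink_signal)
    k = int(np.argmax(amplitudes[1:])) + 1  # skip the DC component
    return freqs[k], amplitudes[k]
```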
In one embodiment, the preset interest weight w1 is set by a formula over the pupil diameter data, in which Ci represents the i-th pupil diameter sample, S1j represents the standard pupil diameter value of the j-th importance level, and n represents the total number of pupil diameter samples; the preset focus weight w2 is set by a formula over the blink reflex data, in which Dp represents the p-th blink reflex sample, S2q represents the standard blink reflex value of the q-th importance level, and m represents the total number of blink reflex samples.
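Since the weight formulas themselves are given as figures in the original publication, the combination step below is only a plausible reading of how the two weighted sets might yield importance levels; the linear scoring rule is an assumption, not the patent's formula.

```python
# Illustrative only: a weighted vote over the two candidate sets,
# assuming w1 and w2 have already been computed.
def importance_levels(interest_set, focus_set, w1, w2):
    """Rank candidate objects by weighted set membership (rank 1 = most important)."""
    candidates = interest_set | focus_set
    scores = {
        obj: w1 * (obj in interest_set) + w2 * (obj in focus_set)
        for obj in candidates
    }
    ranked = sorted(candidates, key=lambda o: scores[o], reverse=True)
    return {obj: rank for rank, obj in enumerate(ranked, start=1)}
```

Under this reading, with for example w1 = 0.6 and w2 = 0.4, an object appearing in both sets scores 1.0 and outranks any object appearing in only one.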
In one embodiment, the sequence determination module 506 is further configured to: obtain gaze point data from the target eye movement information and the scene information; determine the gaze point data that satisfies preset conditions as target visual recognition data, where the preset conditions include a gaze time condition and a gaze angle condition; and determine the visual recognition order of the objects corresponding to the target visual recognition data according to the relationship between visual recognition time and the target visual recognition data.
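A sketch of how this filtering and ordering might be implemented; the dwell-time and gaze-angle thresholds, and the record fields t, dwell, angle, and object, are hypothetical, as the patent does not specify concrete values.

```python
MIN_FIXATION_S = 0.1   # hypothetical gaze-time condition
MAX_ANGLE_DEG = 30.0   # hypothetical gaze-angle condition

def recognition_order(gaze_points):
    """Keep gaze points meeting both preset conditions, then list the
    fixated objects in order of first fixation time."""
    valid = [
        p for p in gaze_points
        if p["dwell"] >= MIN_FIXATION_S and abs(p["angle"]) <= MAX_ANGLE_DEG
    ]
    seen, order = set(), []
    for p in sorted(valid, key=lambda p: p["t"]):
        if p["object"] not in seen:
            seen.add(p["object"])
            order.append(p["object"])
    return order
```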
In one embodiment, the above device further includes a preprocessing module configured to preprocess the raw eye movement information using the Lagrange interpolation algorithm and the empirical mode decomposition method to obtain the target eye movement information.
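A sketch of this preprocessing chain, assuming SciPy for the Lagrange polynomial and the PyEMD package for empirical mode decomposition; the choice of four neighbouring samples and of discarding only the first intrinsic mode function are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import lagrange
from PyEMD import EMD

def preprocess_eye_signal(t, x):
    """Fill dropped samples (NaN) in the eye movement signal x (NumPy array)
    by local Lagrange interpolation, then denoise by empirical mode
    decomposition, removing the fastest (noisiest) component."""
    x = x.copy()
    good = ~np.isnan(x)
    for i in np.flatnonzero(~good):
        # Build a low-order polynomial from the nearest valid neighbours;
        # keeping the order low avoids Runge oscillation.
        idx = np.flatnonzero(good)
        near = idx[np.argsort(np.abs(idx - i))[:4]]
        x[i] = lagrange(t[near], x[near])(t[i])
    imfs = EMD()(x)      # decompose into intrinsic mode functions
    return x - imfs[0]   # subtract the highest-frequency IMF
```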
The implementation principle and technical effects of the device provided by the embodiments of the present invention are the same as those of the foregoing method embodiments. For brevity, where the device embodiments are silent, reference may be made to the corresponding content of the foregoing method embodiments.
An embodiment of the present invention provides a simulated driver. Specifically, the simulated driver includes a simulator screen, a processor, and a storage device; the simulator screen is used to display the virtual driving scene, and the storage device stores a computer program which, when run by the processor, performs the method of any one of the embodiments described above.
Figure 6 is a schematic structural diagram of another simulated driver provided by an embodiment of the present invention. The simulated driver 100 includes a processor 60, a memory 61, a bus 62, and a communication interface 63; the processor 60, the communication interface 63, and the memory 61 are connected via the bus 62. The processor 60 is used to execute executable modules, such as a computer program, stored in the memory 61.
The computer program product of the readable storage medium provided by the embodiments of the present invention includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the methods described in the foregoing method embodiments. For specific implementation, reference may be made to the foregoing method embodiments, and details are not repeated here.
Finally, it should be noted that the above embodiments are merely specific implementations of the present invention, intended to illustrate rather than limit its technical solutions, and the scope of protection of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the technical field may still, within the technical scope disclosed by the present invention, modify the technical solutions described in the foregoing embodiments, readily conceive of variations, or make equivalent substitutions for some of their technical features; such modifications, variations, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be as defined by the claims.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011068538.8A CN112181149B (en) | 2020-09-30 | 2020-09-30 | Driving environment recognition method and device and simulated driver |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011068538.8A CN112181149B (en) | 2020-09-30 | 2020-09-30 | Driving environment recognition method and device and simulated driver |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112181149A CN112181149A (en) | 2021-01-05 |
CN112181149B true CN112181149B (en) | 2022-12-20 |
Family
ID=73947750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011068538.8A Active CN112181149B (en) | 2020-09-30 | 2020-09-30 | Driving environment recognition method and device and simulated driver |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112181149B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1705454A (en) * | 2002-10-15 | 2005-12-07 | 沃尔沃技术公司 | Method and arrangement for interpreting a subjects head and eye activity |
CN105205443A (en) * | 2015-08-13 | 2015-12-30 | 吉林大学 | Traffic conflict identification method based on eye movement characteristic of driver |
CN107545754A (en) * | 2017-07-18 | 2018-01-05 | 北京工业大学 | A kind of acquisition methods and device of road signs information threshold value |
CN108068821A (en) * | 2016-11-08 | 2018-05-25 | 现代自动车株式会社 | For determining the device of the focus of driver, there are its system and method |
CN108369780A (en) * | 2015-12-17 | 2018-08-03 | 马自达汽车株式会社 | Visual cognition helps system and the detecting system depending on recognizing object |
CN109637261A (en) * | 2019-01-16 | 2019-04-16 | 吉林大学 | Driver's response ability training system in automatic-manual driving right switching scenario |
CN109726426A (en) * | 2018-11-12 | 2019-05-07 | 初速度(苏州)科技有限公司 | A kind of Vehicular automatic driving virtual environment building method |
CN111667568A (en) * | 2020-05-28 | 2020-09-15 | 北京工业大学 | An evaluation method of variable information board information release effect based on driving simulation technology |
- 2020-09-30: CN application CN202011068538.8A granted as patent CN112181149B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN112181149A (en) | 2021-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113743471B (en) | A driving evaluation method and system thereof | |
US10740658B2 (en) | Object recognition and classification using multiple sensor modalities | |
JP2023526329A (en) | Scenario Identification for Validation and Training of Machine Learning Based Models for Autonomous Vehicles | |
CN109444912B (en) | A driving environment perception system and method based on collaborative control and deep learning | |
Borghi et al. | Embedded recurrent network for head pose estimation in car | |
Chen et al. | Vehicles driving behavior recognition based on transfer learning | |
Pech et al. | Head tracking based glance area estimation for driver behaviour modelling during lane change execution | |
Rezaei et al. | Simultaneous analysis of driver behaviour and road condition for driver distraction detection | |
CN117698762B (en) | Intelligent driving assistance system and method based on environment perception and behavior prediction | |
CN116331221A (en) | Assisted driving method, device, electronic device and storage medium | |
US12067471B2 (en) | Searching an autonomous vehicle sensor data repository based on context embedding | |
Lu et al. | Pose-guided model for driving behavior recognition using keypoint action learning | |
Yang et al. | Comprehensive assessment of artificial intelligence tools for driver monitoring and analyzing safety critical events in vehicles | |
CN112181149B (en) | Driving environment recognition method and device and simulated driver | |
Chen et al. | Situation awareness in ai-based technologies and multimodal systems: Architectures, challenges and applications | |
CN112347851B (en) | Construction method of multi-target detection network, multi-target detection method and device | |
Shichkina et al. | Analysis of driving style using self-organizing maps to analyze driver behavior. | |
Pech et al. | Real time recognition of non-driving related tasks in the context of highly automated driving | |
Xu et al. | Multi-sensor Decision-level Fusion Network Based on Attention Mechanism for Object Detection | |
CN116823884A (en) | Multi-target tracking method, system, computer equipment and storage medium | |
Yang et al. | Using Artificial Intelligence/Machine Learning Tools to Analyze Safety, Road Scene, Near-Misses and Crashes | |
CN117312935A (en) | Action category identification method, device, computer equipment and storage medium | |
Tanu et al. | A comparative study of recent practices and technologies in advanced driver assistance systems | |
Wang et al. | Visual physiological characteristics recognition method of road traffic safety driving behavior. | |
CN119004048B (en) | Intelligent sensing method for vehicle passenger access behavior based on sensor fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||