CN116594858A - A human-computer interaction evaluation method and system for an intelligent cockpit - Google Patents
- Publication number
- CN116594858A (application CN202211726814.4A)
- Authority
- CN
- China
- Prior art keywords
- data
- interaction
- score
- human
- cockpit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Prevention of errors by analysis, debugging or testing of software
- G06F11/3604—Analysis of software for verifying properties of programs
- G06F11/3612—Analysis of software for verifying properties of programs by runtime analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Prevention of errors by analysis, debugging or testing of software
- G06F11/3698—Environments for analysis, debugging or testing of software
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Abstract
The invention provides a human-computer interaction evaluation method and system for an intelligent cockpit. An XR head-mounted device provides a virtual driving scene, and a driving simulator collects multimodal data to support interaction modes such as eye control and brain control, allowing operators to interact with a simulation of real driving scenarios and yielding test data close to real conditions while guaranteeing safety. Multimodal human-factors data are introduced: based on the vital-sign data and operator behavior data generated during interaction, a multi-dimensional score comprising a cognitive load score, a comfort score, and a behavioral efficiency score is computed, forming a comprehensive evaluation scheme for intelligent cockpit design. Evaluation is fully automatic, which improves efficiency and reduces the influence of individual subjective factors; the quantitative assessment of multimodal data makes the result more reliable.
Description
Technical Field
The invention relates to the technical field of intelligent driving, and in particular to a human-computer interaction evaluation method and system for an intelligent cockpit.
Background Art
With the continuous innovation of intelligent driving technology, the human-computer interaction functions of the intelligent cockpit have become increasingly rich, enabling monitoring of occupant status and safe-driving assistance during driving. At the same time, the innovative application of AI, big data, 5G, and other technologies has made the multimodal forms of human-computer interaction in the intelligent cockpit increasingly diverse and the interaction scenarios increasingly refined. One example is the driver monitoring system (DMS) of a smart car: a DMS can monitor the driver's fatigue, distraction, and other dangerous behaviors (such as making phone calls or eating) in real time and issue timely warnings for abnormal states.
On this basis, the demand for human-computer interaction evaluation of intelligent cockpit designs is also rising. Such evaluation has traditionally relied on the experience of experts, engineers, or designers and has been mainly subjective, and the test environment is relatively complex: on the one hand safety cannot be guaranteed, and on the other hand there is a lack of objective, systematic evaluation techniques and of a complete, automated evaluation scheme. Efficiency is therefore low, which also affects the safety of later products and the operator experience.
Summary of the Invention
In view of this, embodiments of the present invention provide a human-computer interaction evaluation method and system for an intelligent cockpit, so as to eliminate or mitigate one or more defects of the prior art and to address its inability to provide an automated evaluation scheme for the intelligent cockpit that simulates real scenarios.
The technical scheme of the present invention is as follows:
In one aspect, the present invention provides a human-computer interaction evaluation method for an intelligent cockpit. The method is executed on a cockpit human-computer interaction design evaluation and recommendation subsystem, which is connected to a driving simulator module; the driving simulator module is connected to an XR cockpit interaction module, and an XR head-mounted device of the driving simulator module provides a virtual driving scene for the operator to interact with. The method includes:
acquiring interaction data, the interaction data being collected from the operator in the cockpit by an interaction data acquisition component of the driving simulator module during the interaction process, wherein the XR cockpit interaction module makes interaction decisions according to the interaction data on the basis of a preset interaction control scheme and feeds them back to the driving simulator module to achieve continuous interaction; and
computing grade scores for the human-computer interaction behavior in multiple scoring dimensions on the basis of the multimodal human-factors data collected by the interaction data acquisition component, and computing a weighted average to obtain a strategy score for the interaction control scheme, the scoring dimensions including a cognitive load score, a comfort score, and a behavioral efficiency score, wherein the multimodal human-factors data include EEG data, eye-movement data, physiological data, near-infrared data, and operator behavior data.
In some embodiments, the EEG data include time-domain, frequency-domain, and nonlinear indicators of the EEG signal; the eye-movement data include pupil diameter, fixation-point coordinates, and the corresponding fixation durations; the physiological data include skin temperature data, electrodermal data, and respiration data; the near-infrared data include blood-oxygen data, total hemoglobin concentration, and heart rate variability data; the respiration data include respiratory rate, tidal volume, and vital capacity; and the operator behavior data include the execution completion rate of driving behaviors and the stimulus response time.
In some embodiments, the cognitive load score is obtained by evaluating a first type of interaction data, which includes the time-domain, frequency-domain, and nonlinear indicators of the EEG signal, the pupil diameter, the blood-oxygen data, and the total hemoglobin concentration;
the comfort score is obtained by evaluating a second type of interaction data, which includes the heart rate variability data, the skin temperature data, and the electrodermal data; and
the behavioral efficiency score is obtained by evaluating a third type of interaction data, which includes the execution completion rate and the stimulus response time.
In some embodiments, the method includes:
mapping the first type of interaction data to the cognitive load score with a pre-trained first neural network, the cognitive load score comprising a first set number of grade levels;
mapping the second type of interaction data to the comfort score with a pre-trained second neural network, the comfort score comprising a second set number of grade levels; and
mapping the third type of interaction data to the behavioral efficiency score with a pre-trained third neural network, the behavioral efficiency score comprising a third set number of grade levels.
In some embodiments, the first neural network, the second neural network, and the third neural network are decision trees or back-propagation (BP) neural networks.
In some embodiments, the first neural network, the second neural network, and the third neural network each comprise a convolutional neural network.
In some embodiments, before the weighted average is computed to obtain the strategy score of the interaction control scheme, the method further includes:
normalizing the grade scores of each scoring dimension.
In some embodiments, the method further includes: computing the strategy scores corresponding to multiple interaction control schemes, and selecting the interaction control scheme with the highest strategy score as the optimal interaction control scheme.
In another aspect, the present invention further provides a human-computer interaction evaluation system for an intelligent cockpit, comprising:
a driving simulator module, comprising a simulated cockpit and an interaction data acquisition component, the cockpit being used to provide the hardware environment for simulated driving, and the interaction data acquisition component comprising an XR head-mounted device for brain-controlled and eye-controlled interaction, a skin-temperature/electrodermal sensor and a near-infrared sensor arranged on the steering wheel, a respiration sensor arranged on the seat belt, and an interaction behavior sensor arranged in the cockpit, wherein the XR head-mounted device collects EEG data and eye-movement data, the skin-temperature/electrodermal sensor collects skin temperature data and electrodermal data, the near-infrared sensor collects near-infrared data, the respiration sensor collects respiration data, and the interaction behavior sensor collects operator behavior data;
an XR cockpit interaction module, configured to control the XR head-mounted device to present the virtual driving scene, to make interaction decisions on the EEG data, the eye-movement data, the skin temperature data, the electrodermal data, the near-infrared data, the respiration data, and the operator behavior data on the basis of the preset interaction control scheme, and to feed the interaction decisions back to the driving simulator module for execution; and
a cockpit human-computer interaction design evaluation and recommendation subsystem, configured to execute the human-computer interaction evaluation method for an intelligent cockpit described above to obtain the strategy score of the interaction control scheme.
In some embodiments, the system is further connected to a cloud server in a wired or wireless manner and uploads the strategy score of the interaction control scheme.
The beneficial effects of the present invention include at least the following:
The human-computer interaction evaluation method and system for an intelligent cockpit according to the present invention provide a virtual driving scene through an XR head-mounted device and collect multimodal data through a driving simulator to support interaction modes such as eye control and brain control, allowing operators to interact with a simulation of real driving scenarios and yielding test data close to real conditions while guaranteeing safety. Multimodal human-factors data are introduced: based on the vital-sign data and operator behavior data generated during interaction, a multi-dimensional score comprising a cognitive load score, a comfort score, and a behavioral efficiency score is computed, forming a comprehensive, fully automatic evaluation scheme for intelligent cockpit design that improves evaluation efficiency, eliminates the influence of human subjective factors, and is therefore more credible.
Additional advantages, objects, and features of the present invention will be set forth in part in the following description, and in part will become apparent to those of ordinary skill in the art upon study of the following, or may be learned from the practice of the present invention. The objects and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description, the claims, and the accompanying drawings.
Those skilled in the art will understand that the objects and advantages achievable by the present invention are not limited to those specifically described above, and that the above and other achievable objects will be more clearly understood from the following detailed description.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and constitute a part of the application; they do not limit the present invention. In the drawings:
Fig. 1 is a schematic structural diagram of the human-computer interaction evaluation system for an intelligent cockpit according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the calculation structure for the cockpit human-computer interaction design scheme score in the evaluation method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the evaluation logic applied to an interaction control scheme by the evaluation method according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the embodiments and the accompanying drawings. The exemplary embodiments of the present invention and their descriptions are used here to explain, not to limit, the present invention.
It should also be noted that, to avoid obscuring the present invention with unnecessary detail, only the structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, and other details of little relevance to the invention are omitted.
It should be emphasized that the term "comprising/including", as used herein, refers to the presence of a feature, element, step, or component, but does not exclude the presence or addition of one or more other features, elements, steps, or components.
The intelligent cockpit can be defined as an intelligent service system that proactively perceives and understands the operator's needs and is able to satisfy them. From the perspective of end-consumer needs and application scenarios, passengers not only need not worry about driving and travel but can also enjoy a comfortable experience in the intelligent cockpit. A car built around an intelligent cockpit will no longer be merely a travel tool; the goal is intelligent interaction between the cockpit and people, vehicles, and roads, making this form of cockpit more intelligent and humanized than today's cockpits. The intelligent cockpit provides a complete interaction control scheme and can generate and execute interaction decisions based on the physiological data and behavior data produced by the operator while driving, including but not limited to driving assistance such as emergency braking and steering, warnings about driver fatigue and dangerous driving behaviors, and intelligent regulation of the in-vehicle driving environment. For example, a smart car DMS can detect driver fatigue, distraction, and other dangerous behaviors, and the in-vehicle air-conditioning temperature can be regulated according to changes in the driver's body temperature and heart rate. However, whether the interaction control scheme provided by the intelligent cockpit suits the scenario requirements, and whether it achieves the expected effect, must be accurately evaluated.
In one aspect, the present invention provides a human-computer interaction evaluation method for an intelligent cockpit. The method is executed on the cockpit human-computer interaction design evaluation and recommendation subsystem, which is connected to the driving simulator module; the driving simulator module is connected to the XR cockpit interaction module, and the XR head-mounted device of the driving simulator module provides a virtual driving scene for the operator to interact with. The method includes steps S101 to S102:
Step S101: acquire interaction data, the interaction data being collected from the operator in the cockpit by the interaction data acquisition component of the driving simulator module during the interaction process; the XR cockpit interaction module makes interaction decisions according to the interaction data on the basis of the preset interaction control scheme and feeds them back to the driving simulator module to achieve continuous interaction.
Step S102: based on the multimodal human-factors data collected by the interaction data acquisition component, compute grade scores for the human-computer interaction behavior in multiple scoring dimensions, and compute a weighted average to obtain the strategy score of the interaction control scheme. The scoring dimensions include a cognitive load score, a comfort score, and a behavioral efficiency score; the multimodal human-factors data include EEG data, eye-movement data, physiological data, near-infrared data, and operator behavior data.
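Step S102 can be sketched as follows. The dimension names, weights, and five-grade scale in this sketch are illustrative assumptions; the patent specifies only that a weighted average of the per-dimension grade scores is taken.

```python
# Sketch of step S102: combine per-dimension grade scores into a strategy
# score via a weighted average. Weights and grade values are illustrative.

def strategy_score(grade_scores, weights):
    """grade_scores and weights are dicts keyed by scoring dimension."""
    total_weight = sum(weights.values())
    return sum(grade_scores[d] * weights[d] for d in grade_scores) / total_weight

# Example: grades on a 1-5 scale for the three dimensions named in the patent.
grades = {"cognitive_load": 4, "comfort": 5, "behavioral_efficiency": 3}
weights = {"cognitive_load": 0.4, "comfort": 0.3, "behavioral_efficiency": 0.3}
print(strategy_score(grades, weights))  # weighted average of the three grades
```

With equal weights this reduces to the plain mean; unequal weights let the evaluator emphasize, say, cognitive load over comfort.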
In this embodiment, the driving simulator module includes a cockpit and an interaction data acquisition component. The cockpit provides the driving environment and the required hardware, such as the seat, steering wheel, air-conditioning system, audio-visual system, central control unit, and other control keys. The interaction data acquisition component includes an XR head-mounted device for brain-controlled and eye-controlled interaction, a skin-temperature/electrodermal sensor and a near-infrared sensor arranged on the steering wheel, a respiration sensor arranged on the seat belt, and an interaction behavior sensor arranged in the cockpit. The XR head-mounted device collects EEG data and eye-movement data, the skin-temperature/electrodermal sensor collects skin temperature data and electrodermal data, the near-infrared sensor collects near-infrared data, the respiration sensor collects respiration data, and the interaction behavior sensor collects operator behavior data.
The XR cockpit interaction module reacts, according to the preset interaction control scheme, to the physiological data collected by the interaction data acquisition component and to the operator's behavior, forms an interaction strategy that is provided to the operator, and continuously adjusts the strategy during the interaction, giving the interaction continuity.
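This continuous acquire-decide-feed-back cycle can be sketched as follows; the module interfaces, the fatigue indicator, and the threshold rule are hypothetical placeholders, since the patent does not fix an API or decision logic.

```python
# Sketch of the continuous interaction loop between the driving simulator
# module and the XR cockpit interaction module. All names are hypothetical.

def interaction_loop(collect_interaction_data, interaction_control_scheme,
                     apply_decision, steps):
    """Run a fixed number of acquire -> decide -> feed-back cycles."""
    log = []
    for _ in range(steps):
        data = collect_interaction_data()            # simulator -> XR module
        decision = interaction_control_scheme(data)  # XR module decides
        apply_decision(decision)                     # fed back to the simulator
        log.append(decision)
    return log

# Toy example: warn whenever a fatigue indicator crosses a threshold.
samples = iter([0.2, 0.7, 0.9])
decisions = interaction_loop(
    collect_interaction_data=lambda: {"fatigue": next(samples)},
    interaction_control_scheme=lambda d: "warn" if d["fatigue"] > 0.5 else "none",
    apply_decision=lambda decision: None,
    steps=3,
)
print(decisions)  # ['none', 'warn', 'warn']
```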
In step S102, the cockpit human-computer interaction design evaluation and recommendation subsystem acquires the physiological data and operator behavior data collected by the interaction data acquisition component and evaluates the interaction control scheme in multiple scoring dimensions, covering the operator's cognitive load, comfort, and behavioral efficiency. The evaluation of cognitive load can reflect whether certain intelligent interaction schemes are readily accepted by customers, whether they present comprehension barriers, and whether they impose a learning cost. The evaluation of comfort reflects the operator's experience, for example whether the provided temperature-control and in-vehicle lighting strategies are appropriate. The evaluation of behavioral efficiency mainly assesses the reaction time and accuracy with which the operator performs the corresponding actions given the driving assistance provided, for example the accuracy and reaction time of the driving maneuvers the operator performs at each position given voice navigation prompts. The present invention computes a weighted average of the scores on these evaluation dimensions to obtain the strategy score of the interaction control scheme.
It should be noted that the present invention relies on a virtual driving scene provided by the XR head-mounted device for the operator to interact with; combined with the hardware of the cockpit, real road conditions can be simulated in the virtual scene. No road test is required, which guarantees the safety of the testing process; in particular, testing of intelligent driving-assistance functions can be guaranteed to be accident-free.
In some embodiments, the EEG data include time-domain, frequency-domain, and nonlinear indicators of the EEG signal; the eye-movement data include pupil diameter, fixation-point coordinates, and the corresponding fixation durations; the physiological data include skin temperature data, electrodermal data, and respiration data; the near-infrared data include blood-oxygen data, total hemoglobin concentration, and heart rate variability data; the respiration data include respiratory rate, tidal volume, and vital capacity; and the operator behavior data include the execution completion rate of driving behaviors and the stimulus response time. Those skilled in the art will understand that the parameters usable by the present invention are not limited to the above; any other physiological data and operator interaction data may serve as parameters for evaluating cognitive load, comfort, and behavioral efficiency.
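The feature groups enumerated above can be organized as a simple record. The field names and the grouping into a single sample are illustrative assumptions chosen to mirror the patent's enumeration, not a data model fixed by the invention.

```python
# Illustrative grouping of the multimodal human-factors data enumerated in
# the text. Field names are assumptions for illustration only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MultimodalSample:
    eeg_indicators: List[float]            # time-domain, frequency-domain, nonlinear
    pupil_diameter_mm: float               # eye-movement data
    fixations: List[Tuple[float, float, float]]   # (x, y, fixation duration)
    skin_temperature_c: float              # physiological data
    electrodermal_us: float
    respiration: Tuple[float, float, float]       # rate, tidal volume, vital capacity
    blood_oxygen_pct: float                # near-infrared data
    total_hemoglobin: float
    heart_rate_variability_ms: float
    completion_rate: float                 # operator behavior data
    response_time_s: float

    def first_type(self):
        """First type of interaction data (feeds the cognitive load score)."""
        return (self.eeg_indicators, self.pupil_diameter_mm,
                self.blood_oxygen_pct, self.total_hemoglobin)

sample = MultimodalSample(
    eeg_indicators=[0.1, 0.2, 0.3], pupil_diameter_mm=3.5,
    fixations=[(0.4, 0.6, 0.25)], skin_temperature_c=33.2,
    electrodermal_us=4.1, respiration=(14.0, 0.5, 3.8),
    blood_oxygen_pct=98.0, total_hemoglobin=14.2,
    heart_rate_variability_ms=52.0, completion_rate=0.9, response_time_s=0.8)
print(sample.first_type()[1])  # pupil diameter feeds the cognitive load model
```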
In some embodiments, the cognitive load score is obtained by evaluating the first type of interaction data, which includes the time-domain, frequency-domain, and nonlinear indicators of the EEG signal, the pupil diameter, the blood-oxygen data, and the total hemoglobin concentration.
The comfort score is obtained by evaluating the second type of interaction data, which includes the heart rate variability data, the skin temperature data, and the electrodermal data.
The behavioral efficiency score is obtained by evaluating the third type of interaction data, which includes the execution completion rate and the stimulus response time.
In some embodiments, the method includes:
mapping the first type of interaction data to the cognitive load score with the pre-trained first neural network, the cognitive load score comprising a first set number of grade levels;
mapping the second type of interaction data to the comfort score with the pre-trained second neural network, the comfort score comprising a second set number of grade levels; and
mapping the third type of interaction data to the behavioral efficiency score with the pre-trained third neural network, the behavioral efficiency score comprising a third set number of grade levels.
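As a minimal stand-in for one of these pre-trained models (the text below allows decision trees or BP neural networks), the sketch uses a hand-written threshold rule in place of a trained network; the thresholds, the feature combination, and the 0-1 "raw output" are invented purely for illustration.

```python
# Stand-in for a "pre-trained" grade classifier: a decision-stump-style rule
# mapping a raw model output in [0, 1] to one of five grade levels
# (5 = best, 1 = worst). Thresholds are assumed; a real system would train
# a decision tree or BP neural network on labeled data instead.

def grade_from_score(raw):
    """Bucket a raw model output in [0, 1] into five grade levels."""
    thresholds = [0.2, 0.4, 0.6, 0.8]  # grade boundaries (assumed)
    return 1 + sum(raw > t for t in thresholds)

def cognitive_load_grade(pupil_diameter_mm, blood_oxygen_pct):
    # Toy "model": larger pupils and lower SpO2 are taken as higher load,
    # which lowers the (goodness) grade. Coefficients are invented.
    raw = 1.0 - (0.5 * min(pupil_diameter_mm / 8.0, 1.0)
                 + 0.5 * (1.0 - blood_oxygen_pct / 100.0))
    return grade_from_score(raw)

print(grade_from_score(0.95))  # 5
print(cognitive_load_grade(pupil_diameter_mm=3.0, blood_oxygen_pct=98.0))
```

The same bucketing applies to the comfort and behavioral efficiency models; only the input features and the (trained) mapping to the raw output differ.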
Specifically, in this embodiment, the first, second, and third neural networks must first be pre-trained on existing data. Using data from existing databases, labels are added manually: each group of data is given scores for cognitive load, comfort, and behavioral efficiency. The scores can be divided into multiple grades; for example, five grades from high to low may be used: excellent (5 points), good (4 points), average (3 points), relatively poor (2 points), and poor (1 point).
In some embodiments, the first, second, and third neural networks are decision trees or BP neural networks. In other embodiments, the first, second, and third neural networks each comprise a convolutional neural network.
In some embodiments, before the weighted average is computed to obtain the strategy score of the interaction control scheme, the method further includes: normalizing the grade scores of each scoring dimension.
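The normalization method is not specified here; one common choice, shown below as an assumption, is min-max scaling of each grade onto [0, 1] before the weighted average is taken.

```python
# Min-max normalization of grade scores (an assumed choice; the text does
# not fix the normalization method). Maps each grade on a 1..5 scale to [0, 1].

def normalize_grades(grades, lo=1, hi=5):
    return {dim: (g - lo) / (hi - lo) for dim, g in grades.items()}

grades = {"cognitive_load": 4, "comfort": 5, "behavioral_efficiency": 3}
print(normalize_grades(grades))
# {'cognitive_load': 0.75, 'comfort': 1.0, 'behavioral_efficiency': 0.5}
```

Normalizing first ensures that dimensions using different grade ranges contribute on a common scale to the weighted average.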
In some embodiments, the method further includes: computing the strategy scores corresponding to multiple interaction control schemes, and selecting the interaction control scheme with the highest strategy score as the optimal interaction control scheme.
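Selecting the optimal interaction control scheme then reduces to an argmax over the strategy scores; the scheme names and scores below are illustrative.

```python
# Choose the interaction control scheme with the highest strategy score.
# Scheme names and score values are illustrative.

def best_scheme(strategy_scores):
    return max(strategy_scores, key=strategy_scores.get)

scores = {"scheme_A": 3.8, "scheme_B": 4.2, "scheme_C": 4.0}
print(best_scheme(scores))  # scheme_B
```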
In another aspect, the present invention further provides a human-computer interaction evaluation system for an intelligent cockpit, which, referring to Fig. 1, comprises:
a driving simulator module, comprising a simulated cockpit and an interaction data acquisition component. The cockpit provides the hardware environment for simulated driving, and the interaction data acquisition component comprises an XR head-mounted device for brain-controlled and eye-controlled interaction, a skin-temperature/electrodermal sensor and a near-infrared sensor arranged on the steering wheel, a respiration sensor arranged on the seat belt, and an interaction behavior sensor arranged in the cockpit. The XR head-mounted device collects EEG data and eye-movement data, the skin-temperature/electrodermal sensor collects skin temperature data and electrodermal data, the near-infrared sensor collects near-infrared data, the respiration sensor collects respiration data, and the interaction behavior sensor collects operator behavior data.
XR座舱交互模块,用于控制XR头戴设备呈现虚拟驾驶场景,并基于预设的交互控制方案对脑电数据、眼动数据、皮温数据、皮电数据、近红外数据、呼吸数据和操作人员行为数据做出交互决策,将交互决策反馈至模拟价值模块执行。The XR cockpit interaction module is used to control the XR head-mounted device to present virtual driving scenes, and based on the preset interactive control scheme to analyze EEG data, eye movement data, skin temperature data, skin electricity data, near-infrared data, respiratory data and operations Human behavior data makes interactive decisions, and the interactive decisions are fed back to the simulation value module for execution.
座舱人机交互设计评价推荐子系统,用于执行如上述步骤S101~S102所述的智能座舱人机交互测评方法,得到交互控制方案的策略评分。The cockpit human-computer interaction design evaluation and recommendation subsystem is used to execute the intelligent cockpit human-computer interaction evaluation method described in the above steps S101-S102, and obtain the strategy score of the interactive control scheme.
在一些实施例中,所述系统还通过有线或无线的形式连接云服务器,并上传交互控制方案的策略评分。In some embodiments, the system is also wired or wirelessly connected to the cloud server, and uploads the policy score of the interactive control scheme.
A specific embodiment is described below:
Referring to Figure 1, this embodiment provides an XR cockpit interaction control system consisting of an XR cockpit interaction module and a driving-simulator module. The system is used to design the cockpit human-computer interaction interface, implement the interaction control methods, present driving scenes, and collect and record the driver's driving behavior data and physiological data.
The XR cockpit interaction module presents the cockpit interaction interface design scheme and the interaction control method design, and renders the driving scene. It is equipped with a variety of intelligent, natural interaction control methods, including but not limited to brain-controlled, eye-controlled, electrodermal, skin-temperature, respiration, electromyographic, and gesture interaction. The EEG data include time-domain, frequency-domain, and nonlinear indicators of the EEG signal; the eye-movement data include pupil diameter, fixation coordinates, and the corresponding fixation durations; the near-infrared data include blood-oxygen data, total hemoglobin concentration, and heart-rate-variability data; the respiration data include respiratory rate, tidal volume, and vital capacity; the operator behavior data include the completion rate of driving maneuvers and the stimulus response time. Those skilled in the art should understand that the parameters usable in the present invention are not limited to the above; any other physiological characteristic data or operator interaction data can serve as parameters for evaluating cognitive load, comfort, and behavioral efficiency. Interaction commands are fed back to the simulator to drive its operation.
The interaction control methods are customized by the operator according to the design scheme. Each control command can be implemented in at least the following two ways:
First, a custom threshold: the operator sets the trigger threshold of a control command as needed. For example, in eye-controlled interaction, a command can be issued by setting the fixation-duration threshold to 2 s.
Second, an algorithm-adapted threshold: the algorithm adjusts the threshold according to the operator's habits, the driving scene, and the vehicle state. For example, in eye-controlled interaction, when the vehicle is recognized to be driving at high speed, the fixation-duration threshold can be lowered adaptively; likewise, when the driving scene is recognized as complex and demands more of the driver's attention and cognitive resources, the fixation-duration threshold for interface control can be lowered adaptively.
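The two threshold modes above can be sketched as follows (a minimal illustration; the speed cutoff and the scaling factors are assumptions for the example, not values from the patent):

```python
def gaze_threshold(base=2.0, speed_kmh=0.0, scene_complex=False, adaptive=True):
    """Return the fixation-duration threshold (seconds) that triggers a command.

    Without adaptation, this is the operator-defined fixed threshold (e.g. 2 s).
    With adaptation, the threshold is lowered at high speed or in a complex
    scene, so the interface reacts faster when attention is scarce.
    """
    if not adaptive:
        return base            # mode 1: custom fixed threshold
    t = base                   # mode 2: algorithm-adapted threshold
    if speed_kmh > 100:        # high-speed driving: require a shorter dwell
        t *= 0.75
    if scene_complex:          # complex scene: reduce dwell time further
        t *= 0.75
    return t

print(gaze_threshold(adaptive=False))                      # 2.0
print(gaze_threshold(speed_kmh=120, scene_complex=True))   # 1.125
```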
Driving-simulator module: used to execute the interaction strategy and to collect and record the driver's human-computer interaction data and physiological data. Acquisition methods include but are not limited to: brain-control and eye-control interaction data are collected by the XR headset, skin-temperature and electrodermal data by the steering wheel of the driving simulator, respiration data by the seat belt of the driving simulator, and gesture interaction data by sensors built into the driving simulator.
Cockpit human-computer interaction design evaluation and recommendation subsystem: this subsystem takes multi-channel driver physiological data, human-computer interaction data, and similar data as input and grades the cockpit human-computer interaction design along the scoring dimensions on a five-level scale (excellent 5, good 4, medium 3, relatively poor 2, poor 1).
The evaluation dimensions cover the driver's state or performance when completing the driving task under a specific driving scene, interaction interface, and interaction method; each state or performance is rated on the five-level scale. The dimensions include but are not limited to cognitive load, comfort, and efficiency.
Referring to Figure 2, the grade of a cockpit human-computer interaction design scheme is calculated as the weighted average of the grade scores of the evaluation dimensions.
Here S is the score of the cockpit human-computer interaction design scheme, W is the weight of each dimension, X is the rating of each dimension, and n is the number of evaluation dimensions. S is rounded to the nearest integer and taken as the grade of the cockpit scheme.
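The formula itself appears only in Figure 2 and is not reproduced in this text; from the symbol definitions above, it presumably takes the form of a weighted average (the division by the weight sum is unnecessary if the weights are already normalized to sum to 1):

```latex
S = \frac{\sum_{i=1}^{n} W_i X_i}{\sum_{i=1}^{n} W_i},
\qquad \text{grade} = \operatorname{round}(S)
```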
The correspondence between the evaluation dimensions and their sub-indicators includes but is not limited to: building a model with a machine-learning or deep-learning algorithm (e.g., a decision tree or a neural network) that maps a specific operator's physiological indicators and behavior data to the cognitive-load score, the comfort score, and the behavioral-efficiency score, by pairing the operator's subjective ratings (1–5) on each dimension with the corresponding objective physiological indicators.
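One way to realize such a mapping can be sketched as follows, with a stdlib-only 1-nearest-neighbour predictor standing in for the decision tree or neural network named in the text (the feature values and ratings below are invented for illustration):

```python
import math

def fit_rating_model(samples):
    """samples: list of (feature_vector, subjective_rating 1-5) pairs.

    Returns a predictor that maps a new physiological feature vector to
    the subjective rating of the closest training sample
    (1-nearest-neighbour, a simple stand-in for a decision tree or NN)."""
    def predict(features):
        closest = min(samples, key=lambda s: math.dist(s[0], features))
        return closest[1]
    return predict

# Hypothetical pairs: (theta/beta ratio, pupil diameter in mm)
# -> subjective cognitive-load rating collected from the operator
train = [((0.8, 3.1), 5), ((1.5, 4.0), 3), ((2.3, 5.2), 1)]
predict_load = fit_rating_model(train)
print(predict_load((1.4, 3.9)))  # 3
```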
After the sub-indicator scores are normalized (to the range 0–1), their weighted average is taken (also in 0–1); the 0–1 interval is divided into quintiles and the average is mapped to a grade score (1–5). The weights are defined by experts.
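The normalization, weighted-average, and quintile-mapping steps above can be sketched as follows (a minimal illustration; the function names are ours, min–max normalization is assumed, and the weights are the expert-defined values the text mentions):

```python
def normalize(x, lo, hi):
    """Min-max normalize a raw sub-indicator value into [0, 1]."""
    return (x - lo) / (hi - lo)

def dimension_grade(sub_scores, weights):
    """Weighted average of normalized sub-indicator scores (each in 0-1),
    mapped to a 1-5 grade by quintile:
    [0, 0.2) -> 1, [0.2, 0.4) -> 2, ..., [0.8, 1.0] -> 5."""
    avg = sum(w * s for w, s in zip(weights, sub_scores)) / sum(weights)
    return min(5, int(avg * 5) + 1)

# Three normalized sub-indicators with hypothetical expert weights
print(dimension_grade([0.9, 0.7, 0.8], [0.5, 0.3, 0.2]))  # weighted avg 0.82 -> grade 5
```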
Taking the above three evaluation dimensions as an example, the features selected for rating cognitive load (L) include but are not limited to: time-domain, frequency-domain, and nonlinear EEG indicators during task completion (theta/beta ratio), near-infrared indicators (oxygenated, deoxygenated, and total hemoglobin concentrations), and eye-movement indicators (pupil diameter). The features selected for rating comfort (H) include but are not limited to: HRV frequency-domain indicators (LF/HF ratio) and the electrodermal indicator (skin conductance, SC). The features selected for rating efficiency (Q) include but are not limited to: driving behavior indicators (task-completion accuracy and reaction time). Referring to Figure 3, the cockpit interaction interface, the cockpit interaction method, and the driving scene together constitute a complete interaction control scheme: cognitive load is evaluated from EEG and eye-movement indicators, comfort from physiological indicators, and efficiency from driving behavior indicators.
This embodiment provides an automated evaluation method for cockpit human-computer interaction design based on the combination of a cockpit interaction interface and a cockpit interaction method under a specific driving scene, which helps select the optimal human-computer interaction scheme for different driving conditions.
In summary, the intelligent cockpit human-computer interaction evaluation method and system of the present invention provide virtual driving scenes through an XR headset and collect multi-modal data through a driving simulator to realize interaction methods such as eye control and brain control, allowing the operator to interact with a simulated driving scene and to obtain test data close to a real scene while ensuring safety. Human-factor multi-modal data are introduced: from the vital-sign data and operator behavior data generated during the interaction, a multi-dimensional score comprising the cognitive-load, comfort, and behavioral-efficiency scores is computed. This constitutes a comprehensive, fully automated evaluation scheme for intelligent cockpit design that improves evaluation efficiency, eliminates the influence of human subjectivity, and yields higher credibility.
Those of ordinary skill in the art should understand that the exemplary components, systems, and methods described in connection with the embodiments disclosed herein can be implemented in hardware, software, or a combination of the two. Whether a function is performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be regarded as exceeding the scope of the present invention. When implemented in hardware, the implementation may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, or a function card. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The programs or code segments can be stored in a machine-readable medium, or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transmit information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, and radio-frequency (RF) links. Code segments may be downloaded via a computer network such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in the present invention describe some methods or systems on the basis of a series of steps or devices. However, the present invention is not limited to the order of the steps above; that is, the steps may be performed in the order mentioned in the embodiments, in a different order, or several steps may be performed simultaneously.
In the present invention, a feature described and/or illustrated for one embodiment may be used in the same or a similar way in one or more other embodiments, and/or may be combined with, or substituted for, features of other embodiments.
The above are merely preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, various modifications and changes may be made to the embodiments of the present invention. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211726814.4A CN116594858B (en) | 2022-12-30 | 2022-12-30 | Intelligent cabin man-machine interaction evaluation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116594858A true CN116594858A (en) | 2023-08-15 |
CN116594858B CN116594858B (en) | 2024-08-27 |
Family
ID=87603212
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211726814.4A Active CN116594858B (en) | 2022-12-30 | 2022-12-30 | Intelligent cabin man-machine interaction evaluation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116594858B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117992333A (en) * | 2023-12-22 | 2024-05-07 | 北京津发科技股份有限公司 | Evaluation method and device for man-machine interaction system of vehicle and evaluation system |
CN118021308A (en) * | 2023-12-29 | 2024-05-14 | 北京津发科技股份有限公司 | Human factor intelligent cockpit driving human body inspection and evaluation system and method |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855410A (en) * | 2012-09-20 | 2013-01-02 | 上海品铭机械工程有限公司 | Method and system for evaluation of man-machine work efficiency of cabin simulation test bed |
JP2014035649A (en) * | 2012-08-08 | 2014-02-24 | Toyota Motor Corp | Vehicle driving evaluation apparatus and vehicle driving evaluation method |
CN103761581A (en) * | 2013-12-31 | 2014-04-30 | 西北工业大学 | Method for civil aircraft flight deck human-computer interface comprehensive evaluation |
US20180286269A1 (en) * | 2017-03-29 | 2018-10-04 | The Boeing Company | Systems and methods for an immersive simulator |
CN109740936A (en) * | 2019-01-03 | 2019-05-10 | 中国商用飞机有限责任公司 | A system for assessing the usability of civil aircraft cockpit |
US20200241525A1 (en) * | 2019-01-27 | 2020-07-30 | Human Autonomous Solutions LLC | Computer-based apparatus system for assessing, predicting, correcting, recovering, and reducing risk arising from an operator's deficient situation awareness |
CN111767611A (en) * | 2020-06-30 | 2020-10-13 | 南京航空航天大学 | A Load Balance-Based Man-Machine Function Allocation Method in Aircraft Cockpit |
CN111783355A (en) * | 2020-06-17 | 2020-10-16 | 南京航空航天大学 | A risk assessment method for human-computer interaction under a multi-agent architecture |
CN113420952A (en) * | 2021-05-17 | 2021-09-21 | 同济大学 | Automobile human-computer interaction testing and evaluating system based on simulated driving |
CN113962022A (en) * | 2021-09-30 | 2022-01-21 | 西南交通大学 | A smart cockpit comfort evaluation method based on occupant experience |
CN114298469A (en) * | 2021-11-24 | 2022-04-08 | 重庆大学 | User experience test and evaluation method of automotive intelligent cockpit |
WO2022095985A1 (en) * | 2020-11-09 | 2022-05-12 | 清华大学 | Method and system for evaluating comfort of passenger of intelligent driving vehicle |
CN217008449U (en) * | 2022-02-21 | 2022-07-19 | 苏州壹心汽车科技有限公司 | New generation automobile driving simulator with man-machine interaction intelligent cabin |
CN115489402A (en) * | 2022-09-27 | 2022-12-20 | 上汽通用五菱汽车股份有限公司 | Vehicle cabin adjusting method and device, electronic equipment and readable storage medium |
Non-Patent Citations (4)
Title |
---|
Zhang Jie; Fang Yu; Pang Jiazhen; Ji Baoning: "Evaluation of control-device layout in a virtual-reality flight deck based on flight operation procedures", Computer Integrated Manufacturing Systems (计算机集成制造系统), no. 03, 15 March 2020 (2020-03-15) *
Qiu Guohua: "Intelligent Interactive Interior and Exterior Trim Design for Automobiles" (汽车智能交互内外饰设计), 31 December 2021, Beijing: China Machine Press, pages 195-196 *
Hao Yunfei; Yang Jiguo; Cao Yonggang; Bing Yanghai: "Application of physiological monitoring equipment in cockpit ergonomics measurement", Aircraft Design (飞机设计), no. 04, 15 August 2017 (2017-08-15) *
Huang Diqing; Xu Lin; Chen Cheng: "Research on automobile seat comfort based on neural networks", Automobile Parts (汽车零部件), no. 07, 28 July 2020 (2020-07-28) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||