CN112183386A - Intelligent cockpit test evaluation method for gaze time


Info

Publication number
CN112183386A
CN112183386A
Authority
CN
China
Prior art keywords
test
time
test subject
gaze time
evaluation method
Prior art date
Legal status
Granted
Application number
CN202011056809.8A
Other languages
Chinese (zh)
Other versions
CN112183386B (en)
Inventor
黄阳
张强
李朝斌
陈媛媛
Current Assignee
China Automotive Engineering Research Institute Co Ltd
Original Assignee
China Automotive Engineering Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by China Automotive Engineering Research Institute Co Ltd filed Critical China Automotive Engineering Research Institute Co Ltd
Priority to CN202011056809.8A
Publication of CN112183386A
Application granted
Publication of CN112183386B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/10 - Office automation; Time management
    • G06Q10/103 - Workflow collaboration or project management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G06V20/653 - Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to the technical field of automobile cockpits, and in particular to an intelligent cockpit test evaluation method for gaze time, comprising the following steps: S1, select a target customer group and determine the test subjects within it; S2, design a test procedure, test the subjects, and record the test data; S3, analyze the recorded test data, obtain each subject's gaze time on specific objects, and evaluate it; S4, conduct a post-test interview with each subject to obtain a subjective evaluation; and S5, issue a test report based primarily on the objective gaze times measured in the test and supported by the subjective evaluations. Because the evaluation rests primarily on objective gaze time, with subjective evaluation as a supplement, the subjective evaluation corresponds to objective data, which solves the technical problem that the results of existing test methods cannot reflect the real situation.

Description

Intelligent cockpit test evaluation method for gaze time
Technical Field
The invention relates to the technical field of automobile cockpits, and in particular to an intelligent cockpit test evaluation method for gaze time.
Background
With the development of science and technology, intelligent cockpits featuring multi-screen linkage, autonomous driving, and emotion engines are rapidly being adopted in automobiles. At present, vehicles usually rely on touch-sensitive liquid crystal displays for interaction. When operating a touch screen, the driver must look at it for extended periods because touch alone does not provide enough feedback for judgment; using the touch screen therefore distracts the driver's vision and raises driving risk. In this regard, document CN109947256A discloses a method for reducing the time a driver looks at a touch screen, comprising: S1, detecting the driver's gesture through the touch screen, and executing S2 when the gesture matches a preset trigger instruction; S2, recognizing the touch position of the gesture on the touch screen; S3, setting the interface element at the touch position as a preselected focus element; and S4, playing the text content corresponding to the preselected focus element through the vehicle-mounted loudspeaker.
At present, intelligent cockpit development is still at an early stage, and an effective test method is needed to guide product development. On one hand, existing test evaluation methods for intelligent cockpit products are limited, and each module is tested in isolation; on the other hand, the intelligent cockpit is still evolving, new products iterate continuously, no systematic test method has been developed, and objective, measurable parameters for evaluation are hard to find. In short, existing intelligent cockpit testing relies mainly on subjective human evaluation, so the test results cannot reflect the real situation.
Disclosure of Invention
The invention provides an intelligent cockpit test evaluation method for gaze time, solving the technical problem that the results of existing intelligent cockpit test methods cannot reflect the real situation.
The basic scheme provided by the invention is as follows: an intelligent cockpit test evaluation method for gaze time, comprising the following steps:
S1, selecting a target customer group and determining the test subjects within it;
S2, designing a test procedure, testing the subjects, and recording the test data;
S3, analyzing the recorded test data, obtaining each subject's gaze time on specific objects, and evaluating it;
S4, conducting a post-test interview with each subject to obtain a subjective evaluation;
S5, issuing a test report based primarily on the objective gaze times measured in the test and supported by the subjective evaluations.
The working principle and advantages of the invention are as follows. A target customer group is selected according to the vehicle model under test, and the test subjects are screened from that group. A test procedure is designed and the subjects are tested; once the time a subject spends gazing at specific objects has been obtained, an objective evaluation is made. A post-test interview then yields the subject's subjective evaluation. In this way the test report rests primarily on the objective gaze times measured in the test and secondarily on the subjective evaluation; analyzing the objective parameters together with the subjective opinions evaluates the intelligent cockpit with both, ties the subjective evaluation to objective data, and thus ensures that the test result truly reflects the actual situation.
Because the evaluation rests primarily on objective gaze time with subjective evaluation as a supplement, the subjective evaluation corresponds to objective data, which solves the technical problem that the results of existing test methods cannot reflect the real situation.
Further, in S1, the target customer group indexes include age, height, and weight.
Beneficial effects: people of different ages differ in attention and reaction speed, and because the cockpit is cramped, height and weight affect how a subject moves; considering age, height, and weight therefore helps keep these factors from interfering with the test result. The number of subjects is determined by the actual situation, which gives the method some flexibility.
Further, in S2, the test procedure begins when the subject gets into the vehicle and ends when the subject gets out of the vehicle.
Beneficial effects: with this design the whole test spans from the moment the subject enters the vehicle to the moment they leave it, matching the real process of using a vehicle and keeping the test result as close to reality as possible.
Further, in S3, the specific objects include the road surface, the screen, and the knobs.
Beneficial effects: actual driving shows that a driver's visual attention is concentrated mainly on the road surface, the screen, and the knobs; focusing the test on the gaze times for these objects improves efficiency while preserving the realism of the test.
Further, in S3, the evaluation method is the 3-second rule, specifically as follows:
if the screen gaze time is between 0 and 1 second: an off-road time (time with the eyes away from the road surface) between 0 and 1 second is judged to meet the requirement; between 1 and 2 seconds, acceptable; between 2 and 3 seconds, not acceptable; and over 3 seconds, completely unacceptable;
if the screen gaze time exceeds 1 second, it is directly judged unacceptable.
Beneficial effects: according to ergonomics, the 1-second and 3-second thresholds match real conditions, so judging against this standard can guide product development and improvement in a targeted way; such an evaluation is also relatively intuitive.
Further, in S5, the specific objects and their gaze times are presented in the test report in table form.
Beneficial effects: presented this way, the specific objects and their gaze times are more intuitive, making the report easy to read and the data convenient to compare and process.
Further, the subject's facial expression during the test is obtained and whether the subject is in a tense state is judged: if the subject is tense at a given moment, the test data for that moment are rejected.
Beneficial effects: when the subject is tense, attention slows and the gaze time no longer reflects the actual situation accurately; rejecting such data avoids misjudgment.
Further, the subject's movements during the test are obtained and whether the subject makes an overly large movement is judged: if the subject's movement amplitude is too large at a given moment, the test data for that moment are rejected.
Beneficial effects: when some factor, such as scratching an itch, suddenly makes the subject's movement amplitude too large, the gaze time is strongly affected and no longer reflects the actual situation; rejecting such data avoids misjudgment.
Further, face images of the subject are captured at a preset frequency, whether the eyes in each face image gaze at the specific object is judged, the gaze time on the specific object is calculated from the images, and whether the calculated gaze time is consistent with the measured gaze time is judged: if not, the measured gaze time is updated with the calculated gaze time.
Beneficial effects: in this way the measured data can be corrected, improving the accuracy of the test result.
Further, the face images of the subject are captured from different angles.
Beneficial effects: during the test the subject's head inevitably moves to some extent, so shooting from different angles helps prevent gaze events from being missed.
Drawings
Fig. 1 is a flowchart of an intelligent cockpit test evaluation method for gaze time according to an embodiment of the present invention.
Detailed Description
The following provides further detail through specific embodiments:
Example 1
An example of the intelligent cockpit test evaluation method for gaze time is shown in Fig. 1 and comprises the following steps:
S1, selecting a target customer group and determining the test subjects within it;
S2, designing a test procedure, testing the subjects, and recording the test data;
S3, analyzing the recorded test data, obtaining each subject's gaze time on specific objects, and evaluating it;
S4, conducting a post-test interview with each subject to obtain a subjective evaluation;
S5, issuing a test report based primarily on the objective gaze times measured in the test and supported by the subjective evaluations.
The specific implementation process is as follows:
and S1, selecting a target customer group, and determining the tested person in the target customer group.
And selecting a corresponding appropriate target customer group according to the tested vehicle type, wherein for example, male customers may prefer cross-country vehicles, and female customers may prefer saloon cars. Meanwhile, the attention and the reaction of people of different ages are different, the space of the cabin is narrow, and the height and the weight also influence the action of the tested person, so that the age, the height and the weight are also considered when a target client group is selected, and the factors are favorably prevented from interfering with a test result. Specifically, for a certain type of off-road vehicle, the target customer group should not be too old (e.g., 20-35 years old), too low in height (e.g., 160-180 cm), and too heavy (e.g., 55-70 kg). Then, the number of the tested persons is determined according to the actual situation, for example, the number of the target client groups meeting the conditions of age, height and weight is large, so that more tested persons can be selected, and the test result can be ensured to have statistical significance.
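As a concrete illustration of this screening step, the short Python sketch below filters a candidate pool by the age, height, and weight windows quoted above; the Candidate record, its field names, and the default bounds are illustrative assumptions, not part of the claimed method.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        age: int          # years
        height_cm: float
        weight_kg: float

    def screen(pool, age=(20, 35), height=(160, 180), weight=(55, 70)):
        """Keep candidates whose age, height, and weight all fall in the target windows."""
        return [c for c in pool
                if age[0] <= c.age <= age[1]
                and height[0] <= c.height_cm <= height[1]
                and weight[0] <= c.weight_kg <= weight[1]]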
S2, design the test procedure, test the subjects, and record the test data.
Test equipment such as an eye tracker and data-acquisition devices is prepared, and test cases, that is, the specific procedure and steps of the test, are designed for the vehicle model under test. Before testing, the subject puts on the equipment. A test lasts about 15-30 minutes, beginning when the subject gets into the vehicle and ending when the subject gets out, so it covers the whole vehicle-use cycle of boarding, driving the test route, and alighting, with the test data recorded in real time throughout. This matches the real process of using a vehicle and keeps the test result as close to reality as possible.
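The patent does not fix a storage format for the recorded data. One minimal representation, assumed here and reused in the sketches below, is a timestamped sample per eye-tracker frame naming what the eyes are on; the target labels are assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class GazeSample:
        t: float      # seconds since the subject got into the vehicle
        target: str   # "road", "screen", "knob", "mirror", or "other" (assumed labels)

    # A recorded session is the ordered list of samples delivered by the
    # eye tracker at its native frame rate.
    Session = List[GazeSample]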
S3, analyze the recorded test data, obtain each subject's gaze time on the specific objects, and evaluate it.
After the test, the recorded test data are analyzed to obtain the time the subject spends gazing at each specific object; the specific objects comprise the road surface, the screen, and the knobs. That is, the times the subject gazes at the road surface, the screen, and the knobs are extracted and entered, in seconds, into a form prepared in advance.
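Given the GazeSample stream assumed above, the per-object totals entered into the form can be accumulated as in this sketch; a fixed sampling interval is an assumption made for simplicity.

    from collections import defaultdict

    def gaze_seconds(session, frame_dt):
        """Total gaze time per object, in seconds, assuming one sample every frame_dt s."""
        totals = defaultdict(float)
        for sample in session:
            totals[sample.target] += frame_dt
        return dict(totals)

    # e.g. gaze_seconds(session, frame_dt=1 / 60) for a 60 Hz eye tracker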
Next the off-road time, that is, the length of time the eyes are away from the road surface, is evaluated. Specifically, the evaluation method is the 3-second rule and comprises two steps. First, judge whether the screen gaze time exceeds 1 second; if it does, the glance is directly judged unacceptable. If the screen gaze time is between 0 and 1 second, make the next judgment: an off-road time between 0 and 1 second meets the requirement; between 1 and 2 seconds is acceptable; between 2 and 3 seconds is not acceptable; and over 3 seconds is completely unacceptable.
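The two-step rule maps directly onto a decision function. This is a minimal sketch of the stated thresholds; following the definition in the preceding paragraph, the second argument is the off-road time, i.e. how long the eyes were away from the road surface.

    def judge_glance(screen_time_s: float, off_road_time_s: float) -> str:
        """Apply the 3-second rule to one glance episode."""
        if screen_time_s > 1.0:
            return "unacceptable"            # step 1: screen gaze over 1 s fails outright
        if off_road_time_s <= 1.0:
            return "meets requirement"
        if off_road_time_s <= 2.0:
            return "acceptable"
        if off_road_time_s <= 3.0:
            return "not acceptable"
        return "completely unacceptable"     # eyes off the road for more than 3 s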
S4, conduct a post-test interview with the subject and obtain the subjective evaluation.
After the test, an interview is conducted with the subject to obtain their subjective evaluation. For example, the subject is asked whether the operations were convenient and whether they took too long; the questions and the subject's answers are collated into a written report.
S5, issue a test report based primarily on the objective gaze times measured in the test and supported by the subjective evaluation.
Finally, the test data and the subject's subjective evaluation are considered together: the former reflects the common characteristics of the target customer group, the latter the subject's personal opinion, and the two are combined into a test report. The specific objects and their gaze times are presented in the report in table form, which makes them easy to read and compare.
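A sketch of the tabular presentation described here; the column layout and the sample values in the usage comment are illustrative assumptions, since the patent does not fix a report template.

    def format_report(gaze_totals: dict) -> str:
        """Render object/gaze-time pairs as a fixed-width text table."""
        lines = [f"{'Object':<12}{'Gaze time (s)':>14}"]
        for obj, secs in sorted(gaze_totals.items()):
            lines.append(f"{obj:<12}{secs:>14.1f}")
        return "\n".join(lines)

    # e.g. print(format_report({"road": 812.4, "screen": 41.7, "knob": 6.3}))  # made-up values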
Example 2
The difference from Example 1 is as follows. When a subject is tense, attention slows and the gaze time cannot accurately reflect the actual situation, so a camera captures the subject's facial expression in real time during the test, FaceReader software judges whether the subject is in a tense state, and if the subject is tense at a given moment the test data for that moment are rejected. Likewise, the camera captures the subject's movements during the test, motion-recognition software (such as Hand Mocap) judges whether a movement is overly large, and if the subject's movement amplitude is too large at a given moment the test data for that moment are rejected.
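This filtering step can be sketched as a mask over the recorded samples. The is_tense and motion_amplitude callbacks stand in for the outputs of the expression-recognition and motion-recognition software named above; their real APIs are not specified by the patent.

    def reject_invalid(session, is_tense, motion_amplitude, max_amplitude):
        """Drop samples taken while the subject was tense or moving too much.

        is_tense(t) -> bool and motion_amplitude(t) -> float are assumed
        callbacks backed by the expression and motion recognition software.
        """
        return [s for s in session
                if not is_tense(s.t) and motion_amplitude(s.t) <= max_amplitude]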
Example 3
The difference from Example 2 is that a 3D camera captures face images of the subject from different angles at a preset frequency, and a 3D face-recognition model judges whether the eyes in each face image gaze at a specific object (the road surface, the screen, or a knob). The time during which the eyes in the face images continuously fixate the specific object is then calculated. Finally, whether the calculated gaze time is consistent with the measured gaze time is judged; if not, the measured gaze time is updated or replaced with the calculated one.
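The cross-check amounts to recomputing each gaze duration from the camera frames and overwriting the eye-tracker value when the two disagree. In the sketch below, gazes_at is a placeholder for the 3D face-recognition model's per-frame verdict, and the tolerance is an assumption.

    def camera_gaze_seconds(frames, gazes_at, target, fps):
        """Gaze time on target recomputed from face images captured at fps Hz."""
        return sum(1 for f in frames if gazes_at(f, target)) / fps

    def reconcile(measured_s, computed_s, tol_s=0.1):
        """Keep the eye-tracker value if the camera agrees; otherwise replace it."""
        return measured_s if abs(measured_s - computed_s) <= tol_s else computed_s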
Example 4
The difference from Example 3 is only that the sound inside the intelligent cockpit is also picked up by a microphone. When the subject gazes at the rear-view mirror for more than 3 seconds, say 4 seconds, applying the 3-second rule directly would necessarily yield a verdict of completely unacceptable. But the following case is possible: the subject is looking at the rear-view mirror because they are changing lanes or overtaking, not because of anything in the intelligent cockpit. In that case, the sound captured by the microphone is extracted and judged as to whether it is the subject pressing the horn; at the same time, the recorded video is extracted and image recognition judges whether a turn signal is flashing. If the captured sound is a horn and the turn signal is flashing, it can be concluded that the subject is changing lanes or overtaking, so the long mirror gaze is not caused by the intelligent cockpit; such data must be deleted and excluded from the judgment to avoid misjudgment.
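The exclusion rule combines the two cues. The sketch below assumes boolean detector outputs for the horn sound and the flashing turn signal over the interval of the long mirror glance, since the audio and image pipelines are not detailed in the patent.

    def keep_mirror_glance(duration_s, horn_heard, turn_signal_on):
        """Decide whether a long rear-view-mirror glance counts as test data.

        horn_heard / turn_signal_on are assumed detector outputs for the
        interval covering the glance.
        """
        if duration_s > 3.0 and horn_heard and turn_signal_on:
            return False   # lane change or overtake, not cockpit-induced: delete
        return True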
The foregoing is merely an embodiment of the present invention. Common general knowledge, such as well-known specific structures and characteristics, is not described here in detail; a person skilled in the art, knowing the techniques available before the filing or priority date and having ordinary experimental ability, could combine that knowledge with the teachings of this application to implement the invention, and certain typical known structures or methods pose no obstacle to doing so. It should be noted that a person skilled in the art may make several changes and modifications without departing from the structure of the invention; these fall within the protection scope of the invention and do not affect its effect or the practicability of the patent. The scope of protection is determined by the claims, and the description and its embodiments may be used to interpret the content of the claims.

Claims (10)

1. An intelligent cockpit test evaluation method for gaze time, characterized by comprising the following steps:
S1, selecting a target customer group and determining the test subjects within it;
S2, designing a test procedure, testing the subjects, and recording the test data;
S3, analyzing the recorded test data, obtaining each subject's gaze time on specific objects, and evaluating it;
S4, conducting a post-test interview with each subject to obtain a subjective evaluation;
S5, issuing a test report based primarily on the objective gaze times measured in the test and supported by the subjective evaluations.
2. The intelligent cockpit test evaluation method for gaze time of claim 1, wherein in S1 the target customer group indexes include age, height, and weight.
3. The intelligent cockpit test evaluation method for gaze time of claim 2, wherein in S2 the test procedure begins when the subject gets into the vehicle and ends when the subject gets out of the vehicle.
4. The intelligent cockpit test evaluation method for gaze time of claim 3, wherein in S3 the specific objects include the road surface, the screen, and the knobs.
5. The intelligent cockpit test evaluation method for gaze time of claim 4, wherein in S3 the evaluation method is the 3-second rule, specifically as follows:
if the screen gaze time is between 0 and 1 second: an off-road time (time with the eyes away from the road surface) between 0 and 1 second is judged to meet the requirement; between 1 and 2 seconds, acceptable; between 2 and 3 seconds, not acceptable; and over 3 seconds, completely unacceptable;
if the screen gaze time exceeds 1 second, it is directly judged unacceptable.
6. The intelligent cockpit test evaluation method for gaze time of claim 5, wherein in S5 the specific objects and their gaze times are presented in the test report in table form.
7. The intelligent cockpit test evaluation method for gaze time of claim 6, wherein the subject's facial expression during the test is obtained and whether the subject is in a tense state is judged: if the subject is tense at a given moment, the test data for that moment are rejected.
8. The intelligent cockpit test evaluation method for gaze time of claim 7, wherein the subject's movements during the test are obtained and whether the subject makes an overly large movement is judged: if the subject's movement amplitude is too large at a given moment, the test data for that moment are rejected.
9. The intelligent cockpit test evaluation method for gaze time of claim 8, wherein face images of the subject are captured at a preset frequency, whether the eyes in each face image gaze at the specific object is judged, the gaze time on the specific object is calculated from the images, and whether the calculated gaze time is consistent with the measured gaze time is judged: if not, the measured gaze time is updated with the calculated gaze time.
10. The intelligent cockpit test evaluation method for gaze time of claim 9, wherein the face images of the subject are captured from different angles.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011056809.8A CN112183386B 2020-09-30 2020-09-30 Intelligent cockpit test evaluation method for gaze time

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011056809.8A CN112183386B 2020-09-30 2020-09-30 Intelligent cockpit test evaluation method for gaze time

Publications (2)

Publication Number Publication Date
CN112183386A 2021-01-05
CN112183386B 2024-03-01

Family

ID=73946118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011056809.8A Active CN112183386B 2020-09-30 2020-09-30 Intelligent cockpit test evaluation method for gaze time

Country Status (1)

Country Link
CN: CN112183386B


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050014605A (en) * 2003-07-31 2005-02-07 김승한 Dynamic Ride-Comfort Analysis System
JP2007213324A (en) * 2006-02-09 2007-08-23 Nissan Motor Co Ltd Driving evaluation support system and method for calculating driving evaluation data
CN109145782A (en) * 2018-08-03 2019-01-04 贵州大学 Visual cognition Research on differences method based on interface task
CN111709264A (en) * 2019-03-18 2020-09-25 北京市商汤科技开发有限公司 Driver attention monitoring method and device and electronic equipment
WO2020122986A1 (en) * 2019-06-10 2020-06-18 Huawei Technologies Co.Ltd. Driver attention detection using heat maps
CN110530662A (en) * 2019-09-05 2019-12-03 中南大学 A kind of train seat Comfort Evaluation method and system based on multi-source physiological signal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘普辉 et al., "Subjective and objective test evaluation and correlation analysis of automobile driving quality", 中国工程机械学报 (Chinese Journal of Construction Machinery), 15 October 2015 (2015-10-15) *

Also Published As

Publication number Publication date
CN112183386B 2024-03-01

Similar Documents

Publication Publication Date Title
TWI741512B Method, device and electronic equipment for monitoring driver's attention
Martin et al. Drive&act: A multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles
KR101772987B1 (en) Method for providing results of psychological tests using scanned image
Vetturi et al. Use of eye tracking device to evaluate the driver’s behaviour and the infrastructures quality in relation to road safety
CN114209324B (en) Psychological assessment data acquisition method based on image visual cognition and VR system
Lim et al. Investigation of driver performance with night vision and pedestrian detection systems—Part I: Empirical study on visual clutter and glance behavior
Jansen et al. Does agreement mean accuracy? Evaluating glance annotation in naturalistic driving data
Reimer et al. Detecting eye movements in dynamic environments
CN112183386A (en) Intelligent cockpit test evaluation method about fixation time
KR100995972B1 (en) A system and method measuring 3D display-induced subjective fatigue quantitatively
CN113658697A (en) Psychological assessment system based on video fixation difference
You et al. Using eye-tracking to help design HUD-based safety indicators for lane changes
JP2021089480A (en) Driving analyzer and driving analyzing method
WO2017032562A1 (en) System for calibrating the detection of line of sight
WO2020031949A1 (en) Information processing device, information processing system, information processing method, and computer program
Schwerd et al. Evaluating Blink Rate as a Dynamic Indicator of Mental Workload in a Flight Simulator.
Nabatilan Factors that influence visual attention and their effects on safety in driving: an eye movement tracking approach
Ablassmeier et al. Evaluating the potential of head-up displays for a multimodal interaction concept in the automotive environment
RU2819843C2 (en) Method for determining the level of formation of the skill of identifying a potentially dangerous situation and the skill of reacting to an event
DE102015015136A1 (en) Adaptive user interface system
Novotny et al. Advanced methodology for evaluation of driver’s actual state with use of technical driving data
Crescenti et al. An Eye Tracking Based Evaluation Protocol and Method for In-Vehicle Infotainment Systems
CN115683564B (en) Verification test method and device for AR-HUD system
CN116923425B (en) New energy automobile intelligent cabin with intelligent sensing system and control method thereof
US20240050002A1 (en) Application to detect dyslexia using Support Vector Machine and Discrete Fourier Transformation technique

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant