WO2020244060A1 - Human stress response test method, system and computer-readable storage medium - Google Patents

Human stress response test method, system and computer-readable storage medium

Info

Publication number
WO2020244060A1
WO2020244060A1 (PCT/CN2019/101880)
Authority
WO
WIPO (PCT)
Prior art keywords
subject
test
virtual
stress response
road traffic
Prior art date
Application number
PCT/CN2019/101880
Other languages
English (en)
French (fr)
Inventor
聂冰冰
李泉
周青
Original Assignee
清华大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学
Priority to US17/006,419 (US11751785B2)
Publication of WO2020244060A1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B9/052 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles characterised by provision for recording or measuring trainee's performance
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/1036 Measuring load distribution, e.g. podologic studies
    • A61B5/1038 Measuring plantar pressure during gait
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1113 Local tracking of patients, e.g. in a hospital or private home
    • A61B5/1114 Tracking parts of the body
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/389 Electromyography [EMG]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4884 Other medical applications inducing physiological or psychological stress, e.g. applications for stress testing
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B9/05 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles the view from a vehicle being simulated
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/20 Workers
    • A61B2503/22 Motor vehicles operators, e.g. drivers, pilots, captains
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • This application relates to the technical field of road traffic safety and automobile safety development, and in particular to a method, system, and computer-readable storage medium for testing human stress response.
  • Road traffic safety is a major public health issue worldwide.
  • Pedestrians are a vulnerable group in the road traffic environment and account for a high proportion of traffic accident fatalities.
  • The human stress response of pedestrians in traffic accidents directly affects their risk of injury.
  • The human stress response refers to a type of non-specific response caused by various stressful stimuli (stressors), which can cause physiological, psychological, and behavioral changes in the human body.
  • A benign human stress response helps the body fight or flee in an emergency.
  • A maladaptive human stress response can cause pathological changes in the body and even death.
  • The study of neurophysiological responses and biomechanical behavior patterns related to the human stress response is of great significance to the survival and evolution of organisms, and research on the human stress response has important practical significance in the field of road traffic safety.
  • However, traditional human stress response test methods have a major limitation: they cannot capture the subject's true stress response at the moment a road traffic accident occurs.
  • In experimental research on human stimulus response in neuroscience and psychology, the stimulus signal is simple and cannot reproduce a three-dimensional scene close to the real world to stimulate the subject, and complex stimulus-generating conditions are difficult to design, so the human stress response mechanism under multiple conditions and in multiple scenarios cannot be studied in depth.
  • Road traffic safety research and product development require road traffic accident investigations; because safety is difficult to guarantee and tests on living pedestrian subjects are difficult to carry out, accident investigations cannot obtain information on a pedestrian's stress response immediately before an accident.
  • a human body stress response test method is provided.
  • This application provides a human stress response test method, including:
  • after the virtual reality environment module establishes the virtual road traffic scene, acquiring position information and field-of-view information of the subject in the virtual road traffic scene;
  • after determining that the subject is in the test area, controlling the virtual reality environment module to establish a virtual stress event in the test area and applying a stimulus to the subject to cause the subject to produce a stress response.
  • This application also provides a human stress response test system, including:
  • Virtual reality environment module used to establish virtual road traffic scenes
  • a virtual scene display device, worn on the head of the subject and used to present the virtual road traffic scene to the subject;
  • a virtual scene control module, connected to the virtual reality environment module and the virtual scene display device respectively and including a memory and one or more processors.
  • The memory stores computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps:
  • after the virtual reality environment module establishes the virtual road traffic scene, acquiring position information and field-of-view information of the subject in the virtual road traffic scene;
  • after determining that the subject is in the test area, controlling the virtual reality environment module to establish a virtual stress event in the test area and applying a stimulus to the subject to cause the subject to produce a stress response.
  • the present application also provides a computer-readable storage medium having computer-readable instructions stored thereon, and when the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps:
  • after the virtual reality environment module establishes the virtual road traffic scene, acquiring position information and field-of-view information of the subject in the virtual road traffic scene;
  • after determining that the subject is in the test area, controlling the virtual reality environment module to establish a virtual stress event in the test area and applying a stimulus to the subject to cause the subject to produce a stress response.
  • FIG. 1 is a schematic flowchart of a human stress response test method provided by an embodiment of the application
  • FIG. 2 is a schematic flowchart of a method for testing human stress response provided by another embodiment of the application.
  • FIG. 3 is a schematic structural diagram of a virtual road traffic scene in a human stress response test method provided by an embodiment of the application;
  • FIG. 4 is a schematic structural diagram of a virtual road traffic scene in a human stress response test method provided by another embodiment of the application;
  • FIG. 5 is a schematic structural diagram of a human stress response test system provided by an embodiment of the application.
  • FIG. 6 is a schematic structural diagram of a human stress response test system provided by another embodiment of the application.
  • This application provides a method, system and computer-readable storage medium for testing human stress response.
  • The human stress response test method, system, and computer-readable storage medium provided in this application are not limited in their fields of application or application scenarios.
  • Optionally, the human stress response test method, system, and computer-readable storage medium provided in this application are applied in the field of road traffic safety.
  • This application provides a human stress response test method.
  • The execution subject of the human stress response test method provided in this application is not limited.
  • the human stress response test method is implemented by using virtual reality technology.
  • the execution subject of the human stress response test method may be the virtual scene control module 30 in the human stress response test system.
  • the execution subject may be one or more processors in the virtual scene control module 30.
  • the human stress response test method includes the following steps S100 to S700:
  • S100 Send a request for establishing a virtual road traffic scene 100 to the virtual reality environment module 10.
  • the virtual scene control module 30 sends a request for establishing a virtual road traffic scene 100 to the virtual reality environment module 10.
  • the virtual reality environment module 10 establishes a virtual road traffic scene 100 according to the request for establishing a virtual road traffic scene 100.
  • the virtual road traffic scene 100 may include one or more of virtual buildings, flowing virtual vehicles, virtual pedestrians, and traffic lanes.
  • the virtual road traffic scene 100 is presented on the virtual scene display device 20 worn by the subject.
  • the virtual scene display device 20 may be a virtual reality head-mounted display device (VR head-mounted display device). When the subject wears the virtual scene display device 20, the subject can be placed in the virtual road traffic scene 100, giving the subject a feeling of being in a real road traffic environment.
  • the virtual road traffic scene 100 corresponds to the actual active area where the subject is located.
  • The virtual road traffic scene 100 includes an area to be tested 110 and a test area 120.
  • The real activity area includes an activity area to be tested and a test activity area.
  • The area to be tested 110 corresponds to the activity area to be tested.
  • The test area 120 corresponds to the test activity area.
  • When the subject moves from the activity area to be tested to the test activity area, in the virtual road traffic scene 100 perceived by the subject, the subject moves from the area to be tested 110 to the test area 120.
  • the subject's body is provided with a location feature collection module 70 for collecting location information and visual field information of the subject in the virtual road traffic scene 100.
  • the position feature collection module 70 may include a locator.
  • the locator is connected to the virtual scene control module 30. The locator can obtain the position information of the subject in the virtual road traffic scene 100 in real time and send it to the virtual scene control module 30.
  • the position feature collection module 70 may include a visual field detection device.
  • the visual field detection device may be installed on the head of the subject to obtain real-time visual field information of the subject in the virtual road traffic scene 100.
  • the visual field information may include the visual field direction and/or visual field range of the subject.
  • the visual field detection device can acquire the visual field information in various ways.
  • the visual field detection device may collect the pupil movement track of the subject to determine the visual field information.
  • S300: Determine, based on the position information, whether the subject is in the area to be tested 110 in the virtual road traffic scene 100.
  • The area to be tested 110 is used for preparation before the human stress response test.
  • the virtual scene control module 30 determines whether the position 161 of the subject is in the area to be tested 110. If the subject is in the area to be tested 110 in the virtual road traffic scene 100, the virtual scene control module 30 determines that the subject is ready, and the subsequent test steps can be performed. If the subject is not in the area to be tested 110 in the virtual road traffic scene 100, the virtual scene control module 30 determines that the subject deviates from the real activity area. It can be understood that the subject may deviate from the range of the virtual road traffic scene 100 in the subsequent test process.
  • S400: If the subject is in the area to be tested 110 in the virtual road traffic scene 100, determine, according to the field-of-view information, whether the subject's field-of-view direction faces the test area 120 in the virtual road traffic scene 100.
  • If the subject is within the area to be tested 110, the virtual scene control module 30 determines that the subject is in the area to be tested 110.
  • The virtual stress event that actually causes the subject to produce a stress response is established in the test area 120, not in the area to be tested 110. Therefore, it is necessary to confirm that the subject's field-of-view direction faces the test area 120 before guiding the subject to move to the test area 120 and trigger the virtual stress event. It can be understood that, further, the virtual scene control module 30 determines, according to the field-of-view information, whether the subject's field-of-view direction faces the test area 120.
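  • As an illustration of the two checks described above (whether the subject is inside the area to be tested 110, and whether the field-of-view direction faces the test area 120), the following sketch assumes the locator reports a 2D position and the visual field detection device reports a unit gaze vector; the function names, the rectangular area model, and the angular tolerance are illustrative assumptions, not details taken from the patent.

```python
import math

def inside_rect(pos, rect):
    """True if a 2D position lies inside an axis-aligned rectangular area.

    pos  -- (x, y) position reported by the locator
    rect -- (x_min, y_min, x_max, y_max) bounds of, e.g., the area to be tested 110
    """
    x, y = pos
    x_min, y_min, x_max, y_max = rect
    return x_min <= x <= x_max and y_min <= y <= y_max

def facing_area(pos, gaze_dir, area_center, max_angle_deg=30.0):
    """True if the gaze direction points toward the center of an area.

    gaze_dir is assumed to be a unit vector from the visual field detection
    device; max_angle_deg is an illustrative tolerance, not a value from the text.
    """
    to_area = (area_center[0] - pos[0], area_center[1] - pos[1])
    norm = math.hypot(to_area[0], to_area[1])
    if norm == 0.0:
        return True  # subject is already at the area center
    cos_angle = (gaze_dir[0] * to_area[0] + gaze_dir[1] * to_area[1]) / norm
    return cos_angle >= math.cos(math.radians(max_angle_deg))
```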
  • If the subject is not in the area to be tested 110, the virtual scene control module 30 may control the voice instruction module 80 to issue an instruction voice to guide the subject into the area to be tested 110.
  • Alternatively, the tester may guide the subject into the area to be tested 110 through physical contact with the subject.
  • S500: If the subject's field-of-view direction faces the test area 120 in the virtual road traffic scene 100, guide the subject into the test area 120 and, at the same time, start acquiring the subject's stress response data.
  • If the subject's field-of-view direction faces the test area 120, the virtual scene control module 30 determines that the subject is fully prepared for the test and guides the subject into the test area 120.
  • a data acquisition device is installed on the subject's body.
  • the virtual scene control module 30 sends an instruction to start acquiring stress response data to the data collection device.
  • the data collection device opens the data collection interface according to the above instructions, and starts collecting various stress response data of the subject.
  • Between step S500 and step S600, a step of determining whether the subject is in the test area 120 may further be included: if the subject is in the test area 120, step S600 is performed.
  • The virtual stress event is an unexpected event that the subject does not anticipate.
  • The virtual stress event can take various forms.
  • the preset time period is set by the tester.
  • the human stress response test method sends a request for establishing a virtual road traffic scene 100 to the virtual reality environment module 10 to build the virtual road traffic scene 100 so that the subject has an immersive sense of reality.
  • Based on the position information and field-of-view information of the subject in the virtual road traffic scene 100, it is determined whether the subject is in the virtual road traffic scene 100.
  • The human stress response test method provided in this application, on the premise of ensuring the subject's personal safety, tests the subject's true stress response at the moment a road traffic accident occurs, and the obtained stress response data are reliable and effective.
  • the virtual road traffic scene 100 includes a first lane 130, a second lane 140, a test area 120, and traffic lights 121.
  • the first lane 130 extends in the first direction.
  • the second lane 140 extends in the second direction.
  • the first direction is perpendicular to the second direction.
  • the intersection of the first lane 130 and the second lane 140 forms an intersection 150.
  • the test area 120 is set in the first lane 130.
  • the test area 120 extends along the second direction.
  • the test area 120 runs through the first lane 130.
  • the traffic signal lamp 121 is installed in the test area 120.
  • the first lane 130 may include a plurality of sub-lanes 131 arranged parallel to each other.
  • the sub-lanes 131 all extend along the first direction.
  • the test area 120 may be arranged adjacent to the intersection 150.
  • the traffic signal lamp 121 is used to prompt the subject of the traffic state of the test area 120.
  • the first lane 130 and the second lane 140 form a common intersection-type lane environment.
  • the test area 120 is set in the first lane 130 and runs through the first lane 130.
  • the test area 120 is similar to a sidewalk with zebra crossings at an intersection.
  • the traffic signal lamp 121 is installed in the test area 120.
  • the number of traffic signal lights 121 may be one or multiple.
  • the traffic signal lamp 121 may be a traffic light.
  • The area to be tested 110 may be arranged adjacent to the test area 120.
  • the direction of the subject's field of view is the second direction, facing the test area 120.
  • the subject starts from the test area 110 and slowly moves to the test area 120 along the second direction. During this process, the position 161 of the subject is constantly changing.
  • The tester may issue a target instruction to the subject through the voice instruction module 80, such as "please cross the road", to guide the subject through the area to be tested 110.
  • A virtual road traffic scene 100 close to reality is created, so that the subject's sense of reality is greatly enhanced and the real road traffic environment is effectively simulated.
  • the virtual road traffic scene 100 in this embodiment provides an environmental basis for the subsequent test steps of human stress response.
  • Step S500 includes the following steps S510 to S590:
  • S510 If the subject's field of view is facing the test area 120 in the virtual road traffic scene 100, send an instruction to the virtual reality environment module 10.
  • the instruction is used to control the traffic signal lamp 121 to be displayed in an impassable state.
  • the auxiliary vehicle 162 is controlled to appear and travels on the first sub-lane 132 of the first lane 130 at the first preset traveling speed.
  • the traffic signal light 121 may be a traffic light.
  • the control of the traffic signal light 121 to display an impassable state may be specifically a control of a traffic light to change to a red light to remind the subject that the test area 120 is in an impassable state at this time. At this time, the subject stands still on the edge of the test area 120.
  • the first preset traveling speed is preset by the tester.
  • S530 Acquire the position of the auxiliary vehicle 162. According to the position of the auxiliary vehicle 162, the linear distance from the position of the auxiliary vehicle 162 to the test area 120 is calculated along the first direction.
  • setting the auxiliary vehicle 162 to stop driving when it reaches the edge of the test area 120 can make the virtual road traffic scene 100 simulate a real driving environment.
  • the straight-line distance from the position of the auxiliary vehicle 162 to the test area 120 is the driving distance of the auxiliary vehicle 162 from the current position to the edge of the test area 120.
  • the position of the auxiliary vehicle 162 changes in real time, and the position of the test area 120 is fixed. Therefore, the linear distance from the position of the auxiliary vehicle 162 to the test area 120 can be calculated in real time.
  • S550 Determine whether the linear distance from the position of the auxiliary vehicle 162 to the test area 120 is greater than a preset distance.
  • the preset distance is the braking safety distance of the auxiliary vehicle 162.
  • the braking safety distance of the auxiliary vehicle 162 is preset by the tester.
  • the preset distance is the travel distance of the auxiliary vehicle 162 when the auxiliary vehicle 162 is decelerating at a preset braking deceleration to a speed of zero. It can be understood that if the linear distance from the position of the auxiliary vehicle 162 to the test area 120 is greater than the preset distance, the auxiliary vehicle 162 is in a safe state, and the auxiliary vehicle 162 continues to drive normally. Conversely, if the linear distance from the position of the auxiliary vehicle 162 to the test area 120 is not greater than the preset distance, the auxiliary vehicle 162 needs to be controlled to stop running, that is, the auxiliary vehicle 162 is controlled to "brake".
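  • For concreteness, the distance check described above can be sketched as follows; the braking-distance expression follows from uniform deceleration, and the function names and return values are illustrative assumptions rather than the patent's interface.

```python
def braking_distance(speed, deceleration):
    """Distance covered while decelerating from `speed` to zero at a constant rate."""
    return speed ** 2 / (2.0 * deceleration)

def auxiliary_vehicle_action(distance_to_test_area, speed, deceleration):
    """Keep driving while the straight-line distance to the test area 120 is still
    greater than the preset (braking safety) distance; otherwise brake."""
    if distance_to_test_area > braking_distance(speed, deceleration):
        return "drive"
    return "brake"
```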
  • the traffic signal light 121 may be a traffic light.
  • the virtual scene control module 30 controls the traffic lights to be displayed as green lights.
  • Steps S510 to S570 create a scene event in which the auxiliary vehicle 162 changes from a driving state to a stopped state, which deepens the realism of the virtual road traffic scene 100.
  • S590 Send an instruction to the voice instruction module 80 to control the voice instruction module 80 to issue an instruction voice.
  • the instruction voice is used to guide the subject through the test area 120 at a uniform speed.
  • the tester may issue an instruction voice through the voice instruction module 80, such as "please cross the road", to guide the subject through the test area 110.
  • the step S600 includes the following steps S610 to S630:
  • S610 Set a target location point 122 in the test area 120.
  • the target location point 122 is one of a plurality of intersection points 167.
  • the intersection 167 is formed by the intersection of the sub-lane 131 and the test area 120.
  • each sub-lane 131 intersects the test area 120 to form an intersection 167.
  • the number of intersections 167 is equal to the number of sub-lanes 131.
  • the virtual stress event is that the test vehicle 163 appears on any sub-lane 131, travels in the first direction, and touches the subject to form a virtual stress event.
  • the target location point 122 is a location where the test vehicle 163 touches the subject. Setting the target location point 122 as one of the multiple intersections 167 can ensure that the intersection of the travel trajectory of the test vehicle 163 and the travel trajectory of the subject is the target location point 122.
  • S620 Obtain the location information of the subject and the walking speed of the subject, and calculate the time for the subject to reach the target location point 122.
  • the walking speed of the subject is unchanged.
  • the location information of the subject is the current location of the subject.
  • the linear distance from the subject to the target location point 122 can be calculated.
  • the quotient of the linear distance from the subject to the target location point 122 and the walking speed of the subject is obtained, and the time for the subject to reach the target location point 122 can be obtained.
  • S630 Control the test vehicle 163 to appear at the initial position of the test vehicle according to the time when the subject reaches the target location point 122, and drive on the second sub-lane 133 at a constant speed at a second preset speed.
  • the initial position of the test vehicle is set in the second sub-lane 133.
  • the intersection of the second sub-lane 133 and the test area 120 forms the target location point 122.
  • the linear distance from the initial position of the test vehicle to the target location point 122 satisfies the following formula, so that the test vehicle 163 and the subject touch at the target location point 122:
  • X is the linear distance from the initial position of the test vehicle to the target position point 122.
  • X1 is the travel distance of the test vehicle 163 before braking.
  • X2 is the driving distance of the test vehicle 163 after braking.
  • V0 is the second preset speed.
  • V1 is the speed when the test vehicle 163 reaches the target position point 122.
  • t1 is the running time of the test vehicle 163 before braking.
  • t2 is the braking time of the test vehicle 163.
  • a is the braking deceleration of the test vehicle 163.
  • t is the time when the subject reaches the target location point 122.
  • In Formula 1, the linear distance X from the initial position of the test vehicle to the target position point 122 is the unknown quantity to be determined.
  • The braking deceleration a of the test vehicle 163 is a known quantity.
  • The second preset speed V0 is a known quantity.
  • The speed V1 at which the test vehicle 163 reaches the target position point 122 is a known quantity.
  • The time t for the subject to reach the target location point 122 is a known quantity.
  • The running time t1 of the test vehicle 163 before braking is not given directly, but it can be obtained from t and the braking time t2, since t = t1 + t2 and t2 is determined by V0, V1, and a. Therefore, the linear distance X from the initial position of the test vehicle to the target position point 122 can be obtained according to Formula 1.
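  • The following sketch shows one way to compute X from the quantities listed above. Because Formula 1 itself appears only as an image in the published document, the kinematic relations used here (constant speed V0 before braking, constant deceleration a down to V1, and t = t1 + t2) are a reconstruction from the surrounding definitions, and the numeric example is purely illustrative.

```python
def test_vehicle_start_distance(t, v0, v1, a):
    """Straight-line distance X from the test vehicle's initial position to the
    target location point 122, chosen so the vehicle reaches that point at speed
    v1 exactly when the subject arrives at time t.

    t  -- time for the subject to reach the target location point (s)
    v0 -- second preset (cruising) speed of the test vehicle (m/s)
    v1 -- speed of the test vehicle at the target location point (m/s)
    a  -- braking deceleration of the test vehicle (m/s^2)
    """
    t2 = (v0 - v1) / a                     # braking time
    t1 = t - t2                            # travel time before braking
    if t1 < 0:
        raise ValueError("t is too short for the vehicle to brake from v0 to v1")
    x1 = v0 * t1                           # distance covered before braking (X1)
    x2 = (v0 ** 2 - v1 ** 2) / (2.0 * a)   # distance covered while braking (X2)
    return x1 + x2

# Illustrative numbers only: 40 km/h cruise, 6 m/s^2 braking down to 10 km/h,
# subject needs 6 s to reach the target location point.
print(round(test_vehicle_start_distance(6.0, 40 / 3.6, 10 / 3.6, 6.0), 2))
```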
  • a virtual stress event in which the test vehicle 163 touches the subject is created in the test area 120, and stimulation is applied to the subject.
  • the first sub-lane 132 is a sub-lane close to the subject.
  • the second sub-lane 133 is a sub-lane far away from the subject, so that the auxiliary vehicle 162 can block the test vehicle 163 when the test vehicle 163 is running.
  • the auxiliary vehicle 162 functions to block the line of sight of the subject.
  • the auxiliary vehicle 162 runs in the first sub-lane 132.
  • the test vehicle 163 runs in the second sub-lane 133. It can be known from the foregoing that the first sub-lane 132 and the second sub-lane 133 are parallel to each other.
  • the first sub-lane 132 is close to the subject, and the second sub-lane 133 is far away from the subject, so that when the test vehicle 163 is running, the auxiliary vehicle 162 can block the test vehicle 163,
  • so that the subject cannot observe the test vehicle 163. Further, this increases the unexpectedness of the virtual stress event in which the test vehicle 163 touches the subject, so that the obtained stress response data are more authentic.
  • the auxiliary vehicle 162 may be one or more. When there is one auxiliary vehicle 162, the length of the body of the auxiliary vehicle 162 is not less than the length of the driving path of the test vehicle 163. When there are multiple auxiliary vehicles 162, the multiple auxiliary vehicles 162 are connected end to end and are arranged in the first sub-lane 132. The plurality of auxiliary vehicles 162 move at the same time and stop at the same time.
  • In this embodiment, the distance between the test vehicle 163 and the subject is increased, and the unexpectedness of the virtual stress event is increased, which makes the obtained stress response data more authentic.
  • step S600 further includes the following steps:
  • S640 Control the interfering vehicle 164 and/or the virtual pedestrian 165 to appear within the field of view 166 of the subject, so as to attract the subject's attention when the test vehicle 163 is running.
  • the interfering vehicle 164 may appear in the field of view 166 of the subject.
  • the interfering vehicle 164 turns right from the second lane 140 into the first lane 130, it enters the field of view 166 of the subject, attracting the subject's attention.
  • the virtual pedestrian 165 appears in the test area 110, and gradually moves to the test area 120, enters the subject's field of view 166, and attracts the subject's attention.
  • the interfering vehicle 164 and the virtual pedestrian 165 may appear at the same time, or only the interfering vehicle 164 may appear, or only the virtual pedestrian 165 may appear.
  • By controlling the interfering vehicle 164 and/or the virtual pedestrian 165 to appear within the subject's field of view 166, the interfering vehicle 164 and/or the virtual pedestrian 165 can effectively attract the subject's attention while the test vehicle 163 is traveling. Further, together with the occlusion provided by the auxiliary vehicle 162, the interfering vehicle 164 and/or the virtual pedestrian 165 can keep the subject from noticing the test vehicle 163, which increases the unexpectedness of the virtual stress event and makes the obtained stress response data more authentic.
  • the stress response data includes one or more of motion characteristic data, physiological electrical signals, and plantar pressure data.
  • the motion characteristic data includes one or more of the speed of the subject, the acceleration of the movement of the subject, and the displacement of the subject relative to the ground.
  • the physiological electrical signals include one or more of brain electrical signals and muscle surface electrical signals.
  • the plantar pressure data includes plantar pressure distribution data.
  • step S500 further includes the following steps S520 to S540:
  • the time when the virtual scene control module 30 sends the data collection start instruction to each module is not limited, as long as it is before the subject is stimulated.
  • S540 Acquire in real time the movement characteristic data sent by the motion capture module 40, the physiological electrical signals sent by the electrical signal acquisition module, and the plantar pressure data sent by the plantar pressure testing module 60.
  • the virtual scene control module may also periodically acquire the motion characteristic data, the physiological electrical signal, and the plantar pressure data after each preset collection time period.
  • In this embodiment, before the stimulus is applied to the subject, the motion characteristic data acquisition start instruction is sent to the motion capture module 40,
  • the physiological electrical signal acquisition start instruction is sent to the physiological electrical signal acquisition module 50,
  • and the plantar pressure data acquisition start instruction is sent to the plantar pressure test module 60, so that the stress response data contain both the response data of the subject under stimulation and the response data of the subject in a normal state. This creates a comparison and is beneficial to the analysis of the human stress response.
  • the step S700 includes the following steps S710 to S720:
  • S710: Start timing when the test vehicle 163 touches the subject. After a preset period of time, a motion characteristic data acquisition suspension instruction is sent to the motion capture module 40, a physiological electrical signal acquisition suspension instruction is sent to the physiological electrical signal acquisition module 50, and a plantar pressure data acquisition stop instruction is sent to the plantar pressure test module 60.
  • the preset time period is set by the tester.
  • S720 Stop receiving the movement characteristic data sent by the motion capture module 40, the physiological electrical signal sent by the physiological electrical signal acquisition module 50, and the plantar pressure data sent by the plantar pressure testing module 60.
  • the testing step ends.
  • After the subject has been stimulated for the preset period of time, the motion characteristic data acquisition suspension instruction is sent to the motion capture module 40, the physiological electrical signal acquisition suspension instruction is sent to the physiological electrical signal acquisition module 50, and the plantar pressure data acquisition stop instruction is sent to the plantar pressure test module 60, so that the stress response data contain both the response data of the subject under stimulation and the response data of the subject in a normal state. This creates a comparison and is beneficial to the analysis of the human stress response.
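  • A minimal sketch of this start/stop sequencing is given below; the module objects and their start()/stop() methods are hypothetical stand-ins for the motion capture module 40, the physiological electrical signal acquisition module 50, and the plantar pressure test module 60, since the text does not specify the instruction format.

```python
import time

def run_acquisition_window(modules, apply_stimulus, post_stimulus_duration_s):
    """Start acquisition before the stimulus and stop it a preset time afterwards,
    so the recorded data cover both the normal state and the stimulated state."""
    for m in modules:
        m.start()                          # begin collecting baseline data
    apply_stimulus()                       # e.g. trigger the virtual stress event
    time.sleep(post_stimulus_duration_s)   # preset time period set by the tester
    for m in modules:
        m.stop()                           # suspend acquisition after the window
```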
  • the application also provides a human stress response test system.
  • The human stress response test system includes a virtual reality environment module, a virtual scene display device 20, a virtual scene control module 30, a motion capture module 40, a physiological electrical signal acquisition module 50, a plantar pressure test module 60, and a voice instruction module 80.
  • the virtual scene control module 30 is connected to the virtual reality environment module 10 and the virtual scene display device 20 respectively.
  • the motion capture module 40 is connected to the virtual scene control module 30.
  • the physiological electrical signal acquisition module 50 is connected to the virtual scene control module 30.
  • the plantar pressure test module 60 is connected to the virtual scene control module 30.
  • the voice instruction module 80 is connected to the virtual scene control module 30.
  • the virtual reality environment module 10 is used to establish a virtual road traffic scene 100.
  • the virtual scene display device 20 is worn on the head of the subject.
  • the virtual scene display device 20 is used to present the virtual road traffic scene 100 in the mind of the subject.
  • The virtual scene control module 30 is used to execute the human stress response test method described above, control the virtual reality environment module 10 to establish a virtual stress event, and apply a stimulus to the subject so that the subject produces a stress response.
  • the motion capture module 40 is arranged on the limbs and/or torso of the subject.
  • the motion capture module 40 is configured to collect and send the motion characteristic data of the testee to the virtual scene control module 30 during the stress reaction of the testee.
  • the physiological electrical signal acquisition module 50 is arranged on the skin surface of the subject.
  • the physiological electrical signal acquisition module 50 is used to collect the physiological electrical signal of the subject and send it to the virtual scene control module 30 when the subject has a stress response.
  • the plantar pressure test module 60 is attached to the sole of the subject.
  • the plantar pressure test module 60 is used to collect plantar pressure data of the testee and send it to the virtual scene control module 30 when the testee has a stress response.
  • the voice instruction module 80 is configured to issue a prompt voice under the control of the virtual scene control module 30 to guide the subject to perform actions.
  • the virtual scene control module 30 may include a human-computer interaction interface.
  • the tester can manipulate the virtual scene control module 30 through the human-computer interaction interface, so as to send control instructions to other modules in the human stress response test system.
  • the motion capture module 40 includes a plurality of markers (marker points) provided with special reflective materials on the surface.
  • the multiple markers are attached to the surface of the limbs and/or torso of the subject.
  • the multiple markers can collect the motion characteristic data of the subject at the moment when the stress response is generated at an information collection frequency higher than 100 Hz.
  • the physiological electrical signal acquisition module 50 includes a brain electrical signal acquisition unit and a myoelectric epidermal signal acquisition unit.
  • the brain electrical signal acquisition unit includes electrode sheets and electrode paste.
  • the electrode sheet is fixed on the head of the subject.
  • The electrode sheets are connected to the scalp through the electrode paste, and the EEG signals generated by the subject during stress perception and stress decision-making are collected.
  • the EMG signal acquisition unit includes electrode sheets fixed on the surface of the main muscle group of the subject. When the subject is stimulated by stress, the subject's muscles exert force, and the myoelectric epidermal signal acquisition unit can collect electrical signals of the muscle epidermis at a frequency higher than 2000 Hz.
  • the EMG signal acquisition unit is used to determine the activation level of the main muscle groups of the subject under the stress response.
  • the plantar pressure test module 60 may include a plurality of membrane pressure sensors.
  • the plurality of membrane pressure sensors may be installed on the insole of the subject. When the subject has a stress response, the multiple membrane pressure sensors can collect in real time the pressure distribution of the subject's feet against the ground during the movement.
  • the body of the subject is provided with a location feature collection module 70.
  • the position feature collection module 70 may include a locator.
  • the locator is connected to the virtual scene control module 30.
  • the locator can obtain the subject's position information in the virtual road traffic scene 100 in real time and send it to the virtual scene control module 30.
  • the position feature collection module 70 may include a visual field detection device.
  • the visual field detection device may be installed on the head of the subject to obtain real-time visual field information of the subject in the virtual road traffic scene 100.
  • the visual field information may include the visual field direction and/or visual field range of the subject.
  • the visual field detection device can acquire the visual field information in various ways.
  • the visual field detection device may collect the pupil movement track of the subject to determine the visual field information.
  • the human stress response test system may also include a real test site.
  • The real test site may include a real activity area.
  • The real activity area may include an activity area to be tested and a test activity area.
  • The activity area to be tested corresponds to the area to be tested 110.
  • The test activity area corresponds to the test area 120.
  • The test activity area may be a 15 m × 5 m test field.
  • a plurality of optical motion capture cameras can be arranged on the edge of the test activity area. The multiple optical motion capture cameras are used in conjunction with the motion capture module 40 to collect motion characteristic data of the subject.
  • By providing the virtual reality environment module 10, the virtual road traffic scene 100 is established, so that the subject can subconsciously produce a real stress response in an emergency.
  • By providing the virtual scene display device 20, the virtual road traffic scene 100 is presented to the subject.
  • By providing the virtual scene control module 30, transformation of the virtual road traffic scene 100 is realized and a virtual stress event is established to apply a stimulus to the subject, so that the subject produces a stress response.
  • By providing the motion capture module 40, the motion characteristic data of the subject can be collected while the subject produces a stress response.
  • By providing the physiological electrical signal acquisition module 50, the physiological electrical signals of the subject can be collected while the subject produces a stress response.
  • By providing the plantar pressure test module 60, the plantar pressure data of the subject can be collected while the subject produces a stress response.
  • The virtual reality based human stress response test system provided in this application can apply realistic stimulation to the subject, so that the subject makes a real stress response.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Social Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Educational Administration (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Dermatology (AREA)
  • Physiology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Optics & Photonics (AREA)
  • Traffic Control Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A human stress response test method, system, and computer-readable storage medium. In the human stress response test method, after a virtual road traffic scene (100) is established, position information and field-of-view information of a subject in the virtual road traffic scene (100) are acquired; when the subject is in an area to be tested (110) and the subject's field-of-view direction faces a test area (120), the subject is guided into the test area (120), and acquisition of the subject's stress response data starts at the same time; after it is determined that the subject is in the test area (120), a virtual reality environment module (10) is controlled to establish a virtual stress event in the test area (120) and apply a stimulus to the subject, so that the subject produces a stress response.

Description

Human stress response test method, system and computer-readable storage medium
Related Application
This application claims priority to Chinese Patent Application No. 2019104865291, filed on June 5, 2019 and entitled "Human stress response test method and system", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of road traffic safety and automobile safety development, and in particular to a human stress response test method, system, and computer-readable storage medium.
Background
Road traffic safety is a major public health issue worldwide. Pedestrians are a vulnerable group in the road traffic environment and account for a high proportion of traffic accident fatalities. The human stress response of pedestrians in traffic accidents directly affects their risk of injury. The human stress response refers to a type of non-specific response caused by various stressful stimuli (stressors), which causes physiological, psychological, and behavioral changes in the human body. A benign human stress response helps the body fight or flee in an emergency, whereas a maladaptive human stress response can cause pathological changes in the body and even death. Research on the neurophysiological responses and biomechanical behavior patterns related to the human stress response is of great significance to the survival and evolution of organisms, and research on the human stress response has important practical significance in the field of road traffic safety.
However, current human stress response test methods offer very limited ways to test the stimuli that arise in real hazardous situations. Traditional human stimulus response tests generally use stimulus research methods from neuroscience and psychology and rely on simple visual or contact stimuli.
Traditional human stress response test methods therefore have a major shortcoming: they cannot capture the subject's true stress response at the moment a road traffic accident occurs. In experimental research on human stimulus response in neuroscience and psychology, the stimulus signal is simple and cannot reproduce a three-dimensional scene close to the real world to stimulate the subject, and complex stimulus-generating conditions are difficult to design, so the human stress response mechanism under multiple conditions and in multiple scenarios cannot be studied in depth. Road traffic safety research and product development require road traffic accident investigations; because safety is difficult to guarantee and tests on living pedestrian subjects are difficult to carry out, accident investigations also cannot obtain information on a pedestrian's stress response in the instant before an accident.
Summary
Based on this, according to various embodiments of the present application, a human stress response test method, system, and computer-readable storage medium are provided.
This application provides a human stress response test method, including:
sending a request to a virtual reality environment module to establish a virtual road traffic scene;
after the virtual reality environment module establishes the virtual road traffic scene, acquiring position information and field-of-view information of a subject in the virtual road traffic scene;
when the subject is in an area to be tested in the virtual road traffic scene and the subject's field-of-view direction faces a test area in the virtual road traffic scene, guiding the subject into the test area and, at the same time, starting to acquire stress response data of the subject;
after determining that the subject is in the test area, controlling the virtual reality environment module to establish a virtual stress event in the test area and applying a stimulus to the subject, so that the subject produces a stress response.
This application also provides a human stress response test system, including:
a virtual reality environment module, configured to establish a virtual road traffic scene;
a virtual scene display device, worn on the head of a subject and configured to present the virtual road traffic scene to the subject;
a virtual scene control module, connected to the virtual reality environment module and the virtual scene display device respectively and including a memory and one or more processors, where the memory stores computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps:
sending a request to the virtual reality environment module to establish the virtual road traffic scene;
after the virtual reality environment module establishes the virtual road traffic scene, acquiring position information and field-of-view information of the subject in the virtual road traffic scene;
when the subject is in an area to be tested in the virtual road traffic scene and the subject's field-of-view direction faces a test area in the virtual road traffic scene, guiding the subject into the test area and, at the same time, starting to acquire stress response data of the subject;
after determining that the subject is in the test area, controlling the virtual reality environment module to establish a virtual stress event in the test area and applying a stimulus to the subject, so that the subject produces a stress response.
This application also provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps:
sending a request to the virtual reality environment module to establish the virtual road traffic scene;
after the virtual reality environment module establishes the virtual road traffic scene, acquiring position information and field-of-view information of the subject in the virtual road traffic scene;
when the subject is in an area to be tested in the virtual road traffic scene and the subject's field-of-view direction faces a test area in the virtual road traffic scene, guiding the subject into the test area and, at the same time, starting to acquire stress response data of the subject;
after determining that the subject is in the test area, controlling the virtual reality environment module to establish a virtual stress event in the test area and applying a stimulus to the subject, so that the subject produces a stress response.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of this application, and a person of ordinary skill in the art can obtain other drawings from the disclosed drawings without creative effort.
FIG. 1 is a schematic flowchart of a human stress response test method provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a human stress response test method provided by another embodiment of this application;
FIG. 3 is a schematic structural diagram of a virtual road traffic scene in a human stress response test method provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of a virtual road traffic scene in a human stress response test method provided by another embodiment of this application;
FIG. 5 is a schematic structural diagram of a human stress response test system provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of a human stress response test system provided by another embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
This application provides a human stress response test method, system, and computer-readable storage medium.
It should be noted that the human stress response test method, system, and computer-readable storage medium provided in this application are not limited in their fields of application or application scenarios. Optionally, the human stress response test method, system, and computer-readable storage medium provided in this application are applied in the field of road traffic safety.
This application provides a human stress response test method. The execution subject of the human stress response test method provided in this application is not limited. Optionally, the human stress response test method is implemented using virtual reality technology. Optionally, the execution subject of the human stress response test method may be the virtual scene control module 30 in the human stress response test system. Specifically, the execution subject may be one or more processors in the virtual scene control module 30.
As shown in FIG. 1 and FIG. 2, in an embodiment of this application, the human stress response test method includes the following steps S100 to S700:
S100: Send a request to the virtual reality environment module 10 to establish a virtual road traffic scene 100.
Specifically, the virtual scene control module 30 sends a request to the virtual reality environment module 10 to establish the virtual road traffic scene 100. The virtual reality environment module 10 establishes the virtual road traffic scene 100 according to the request. The virtual road traffic scene 100 may include one or more of virtual buildings, moving virtual vehicles, virtual pedestrians, and traffic lanes. The virtual road traffic scene 100 is presented on a virtual scene display device 20 worn by the subject. The virtual scene display device 20 may be a virtual reality head-mounted display device (VR headset). When the subject wears the virtual scene display device 20, the subject can be placed in the virtual road traffic scene 100, giving the subject the feeling of being in a real road traffic environment.
S200: After the virtual reality environment module 10 establishes the virtual road traffic scene 100, acquire position information and field-of-view information of the subject in the virtual road traffic scene 100.
Specifically, the virtual road traffic scene 100 corresponds to the real activity area in which the subject is located. As shown in FIG. 3, the virtual road traffic scene 100 includes an area to be tested 110 and a test area 120. The real activity area includes an activity area to be tested and a test activity area. The area to be tested 110 corresponds to the activity area to be tested. The test area 120 corresponds to the test activity area. For example, when the subject stands in the activity area to be tested, in the virtual road traffic scene 100 perceived by the subject, the subject is located in the area to be tested 110. When the subject moves from the activity area to be tested to the test activity area, in the virtual road traffic scene 100 perceived by the subject, the subject moves from the area to be tested 110 to the test area 120.
Optionally, a position feature collection module 70 is provided on the body of the subject and is used to collect the position information and field-of-view information of the subject in the virtual road traffic scene 100. Specifically, the position feature collection module 70 may include a locator. The locator is connected to the virtual scene control module 30. The locator can acquire the position information of the subject in the virtual road traffic scene 100 in real time and send it to the virtual scene control module 30.
The position feature collection module 70 may include a visual field detection device. The visual field detection device may be installed on the head of the subject and is used to acquire, in real time, the field-of-view information of the subject in the virtual road traffic scene 100. The field-of-view information may include the field-of-view direction and/or field-of-view range of the subject. The visual field detection device may acquire the field-of-view information in various ways. Optionally, the visual field detection device may collect the pupil movement trajectory of the subject to determine the field-of-view information.
S300: Determine, based on the position information, whether the subject is in the area to be tested 110 in the virtual road traffic scene 100.
Specifically, the area to be tested 110 is used for preparation before the human stress response test. Specifically, as shown in FIG. 3 and FIG. 4, the virtual scene control module 30 determines whether the position 161 of the subject is within the area to be tested 110. If the subject is in the area to be tested 110 in the virtual road traffic scene 100, the virtual scene control module 30 determines that the subject is ready and the subsequent test steps can be performed. If the subject is not in the area to be tested 110 in the virtual road traffic scene 100, the virtual scene control module 30 determines that the subject has deviated from the real activity area. It can be understood that the subject may deviate from the range of the virtual road traffic scene 100 during the subsequent test.
S400: If the subject is in the area to be tested 110 in the virtual road traffic scene 100, determine, according to the field-of-view information, whether the subject's field-of-view direction faces the test area 120 in the virtual road traffic scene 100.
Specifically, as mentioned above, if the subject is within the area to be tested 110, the virtual scene control module 30 determines that the subject is in the area to be tested 110. However, in the virtual road traffic scene 100, the virtual stress event that actually causes the subject to produce a stress response is established in the test area 120, not in the area to be tested 110. Therefore, it is necessary to confirm that the subject's field-of-view direction faces the test area 120 before the subject can be guided to move to the test area 120 and trigger the virtual stress event. It can be understood that, further, the virtual scene control module 30 determines, according to the field-of-view information, whether the subject's field-of-view direction faces the test area 120.
In addition, if the subject is not in the area to be tested 110, the virtual scene control module 30 may control the voice instruction module 80 to issue an instruction voice that guides the subject into the area to be tested 110. Of course, the tester may also pull the subject into the area to be tested 110 through physical contact with the subject.
S500: If the subject's field-of-view direction faces the test area 120 in the virtual road traffic scene 100, guide the subject into the test area 120 and, at the same time, start acquiring the subject's stress response data.
Specifically, if the subject's field-of-view direction faces the test area 120, the virtual scene control module 30 determines that the subject is fully prepared for the test and guides the subject into the test area 120. Data acquisition devices are installed on the subject's body. The virtual scene control module 30 sends an instruction to the data acquisition devices to start acquiring stress response data. According to this instruction, the data acquisition devices open their data acquisition interfaces and begin to collect the various stress response data of the subject.
S600: After determining that the subject is in the test area 120, control the virtual reality environment module 10 to establish a virtual stress event in the test area 120 and apply a stimulus to the subject, so that the subject produces a stress response.
Specifically, between step S500 and step S600, a step of determining whether the subject is in the test area 120 may further be included:
determine whether the subject is in the test area 120; if the subject is in the test area 120, perform step S600. The virtual stress event is an unexpected event that the subject does not anticipate. The virtual stress event can take various forms.
S700: After the subject has been stimulated for a preset period of time, suspend acquisition of the stress response data.
Specifically, the preset time period is set by the tester.
In this embodiment, the human stress response test method sends a request to the virtual reality environment module 10 to establish the virtual road traffic scene 100, and builds the virtual road traffic scene 100 so that the subject has an immersive sense of reality. Based on the position information and field-of-view information of the subject in the virtual road traffic scene 100, it is determined whether the subject is in the virtual road traffic scene 100. While the personal safety of the subject is guaranteed, the subject is stimulated to produce a stress response. By acquiring the subject's stress response data in the face of a dangerous situation in real time, monitoring and recording of the true stress response data of a human body during a traffic accident are realized. The human stress response test method provided in this application, on the premise of ensuring the subject's personal safety, tests the subject's true stress response at the moment a road traffic accident occurs, and the obtained stress response data are reliable and effective.
As shown in FIG. 3 and FIG. 4, in an embodiment of this application, the virtual road traffic scene 100 includes a first lane 130, a second lane 140, the test area 120, and a traffic signal light 121. The first lane 130 extends in a first direction. The second lane 140 extends in a second direction. The first direction is perpendicular to the second direction. The first lane 130 and the second lane 140 intersect to form an intersection 150. The test area 120 is arranged in the first lane 130. The test area 120 extends along the second direction. The test area 120 runs through the first lane 130. The traffic signal light 121 is arranged in the test area 120.
The first lane 130 may include a plurality of sub-lanes 131 arranged parallel to each other. The sub-lanes 131 all extend along the first direction. The test area 120 may be arranged adjacent to the intersection 150. The traffic signal light 121 is used to indicate the traffic state of the test area 120 to the subject.
Specifically, as shown in FIG. 3 and FIG. 4, the first lane 130 and the second lane 140 form a common intersection-type lane environment. The test area 120 is arranged in the first lane 130 and runs through the first lane 130. The test area 120 is similar to a crosswalk with zebra crossings at an intersection. The traffic signal light 121 is arranged in the test area 120. There may be one or more traffic signal lights 121. Optionally, the traffic signal light 121 may be a traffic light. In this embodiment, the area to be tested 110 may be arranged adjacent to the test area 120. The subject's field-of-view direction is the second direction, facing the test area 120. The subject starts from the area to be tested 110 and slowly moves along the second direction toward the test area 120. During this process, the position 161 of the subject changes continuously. The tester may issue a target instruction to the subject through the voice instruction module 80, such as "please cross the road", to guide the subject through the area to be tested 110.
In this embodiment, by arranging the first lane 130, the second lane 140, the test area 120, and the traffic signal light 121, a virtual road traffic scene 100 close to reality is created, so that the subject's sense of reality is greatly enhanced and the real road traffic environment is effectively simulated. The virtual road traffic scene 100 in this embodiment provides the environmental basis for the subsequent human stress response test steps.
In an embodiment of this application, step S500 includes the following steps S510 to S590:
S510: If the subject's field-of-view direction faces the test area 120 in the virtual road traffic scene 100, send an instruction to the virtual reality environment module 10. The instruction is used to control the traffic signal light 121 to display an impassable state, and to make an auxiliary vehicle 162 appear and travel at a first preset traveling speed on a first sub-lane 132 of the first lane 130.
Specifically, the traffic signal light 121 may be a traffic light. Controlling the traffic signal light 121 to display an impassable state may specifically be controlling the traffic light to turn red, to indicate to the subject that the test area 120 is impassable at this moment. At this point the subject stands still at the edge of the test area 120. The first preset traveling speed is preset by the tester.
S530: Acquire the position of the auxiliary vehicle 162. Based on the position of the auxiliary vehicle 162, calculate, along the first direction, the straight-line distance from the position of the auxiliary vehicle 162 to the test area 120.
Specifically, setting the auxiliary vehicle 162 to stop when it reaches the edge of the test area 120 allows the virtual road traffic scene 100 to simulate a real driving environment. It can be understood that the straight-line distance from the position of the auxiliary vehicle 162 to the test area 120 is the distance the auxiliary vehicle 162 travels from its current position to the edge of the test area 120. In this embodiment, the position of the auxiliary vehicle 162 changes in real time, while the position of the test area 120 is fixed. Therefore, the straight-line distance from the position of the auxiliary vehicle 162 to the test area 120 can be calculated in real time.
S550: Determine whether the straight-line distance from the position of the auxiliary vehicle 162 to the test area 120 is greater than a preset distance.
Specifically, the preset distance is the braking safety distance of the auxiliary vehicle 162. The braking safety distance of the auxiliary vehicle 162 is preset by the tester. The preset distance is the distance traveled by the auxiliary vehicle 162 while decelerating to zero at a preset braking deceleration. It can be understood that if the straight-line distance from the position of the auxiliary vehicle 162 to the test area 120 is greater than the preset distance, the auxiliary vehicle 162 is in a safe state and continues to drive normally. Conversely, if the straight-line distance from the position of the auxiliary vehicle 162 to the test area 120 is not greater than the preset distance, the auxiliary vehicle 162 needs to be controlled to stop, that is, the auxiliary vehicle 162 is controlled to "brake".
S570: If the straight-line distance from the position of the auxiliary vehicle 162 to the test area 120 is not greater than the preset distance, control the auxiliary vehicle 162 to stop and control the traffic signal light 121 to display a passable state.
Specifically, the traffic signal light 121 may be a traffic light. At this point, the virtual scene control module 30 controls the traffic light to display green. Steps S510 to S570 create a scene event in which the auxiliary vehicle 162 changes from a driving state to a stopped state, which deepens the realism of the virtual road traffic scene 100.
S590: Send an instruction to the voice instruction module 80 to control the voice instruction module 80 to issue an instruction voice. The instruction voice is used to guide the subject to cross the test area 120 at a uniform speed.
Specifically, the tester may issue an instruction voice through the voice instruction module 80, such as "please cross the road", to guide the subject through the area to be tested 110.
In this embodiment, by creating a scene event in which the auxiliary vehicle 162 changes from a driving state to a stopped state, the realism of the virtual road traffic scene 100 is strengthened.
In an embodiment of this application, step S600 includes the following steps S610 to S630:
S610: Set a target location point 122 in the test area 120. The target location point 122 is one of a plurality of intersection points 167. The intersection points 167 are formed by the intersection of the sub-lanes 131 and the test area 120.
Specifically, each sub-lane 131 intersects the test area 120 to form an intersection point 167. The number of intersection points 167 is equal to the number of sub-lanes 131. In this embodiment, the virtual stress event is that a test vehicle 163 appears on one of the sub-lanes 131, travels in the first direction, and touches the subject, forming the virtual stress event. The target location point 122 is the position where the test vehicle 163 touches the subject. Setting the target location point 122 as one of the plurality of intersection points 167 ensures that the intersection of the travel trajectory of the test vehicle 163 and the walking trajectory of the subject is the target location point 122.
S620: Acquire the position information of the subject and the walking speed of the subject, and calculate the time for the subject to reach the target location point 122.
Specifically, in this embodiment, the walking speed of the subject is assumed by default to be constant. The position information of the subject is the subject's current position. Based on the subject's current position, the straight-line distance from the subject to the target location point 122 can be calculated. Dividing the straight-line distance from the subject to the target location point 122 by the subject's walking speed gives the time for the subject to reach the target location point 122.
Alternatively, a walking test experiment may be performed on the subject before the test, multiple walking samples may be extracted, and a walking speed estimation model may be built to estimate the subject's walking speed. In this way, the subject's walking speed is known, and during the test only the subject's position information needs to be acquired to calculate the time for the subject to reach the target location point 122.
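A minimal sketch of such a walking speed estimation model is given below; the mean-speed model and the sample format are assumptions made for illustration, since the text does not specify the form of the model.

```python
def estimate_walking_speed(samples):
    """Estimate the subject's walking speed from pre-test walking trials.

    samples -- list of (distance_m, duration_s) pairs from the walking test;
               a simple mean-speed model is assumed here.
    """
    speeds = [d / t for d, t in samples if t > 0]
    if not speeds:
        raise ValueError("no valid walking samples")
    return sum(speeds) / len(speeds)

def time_to_target(distance_to_target_m, samples):
    """Time for the subject to reach the target location point 122, assuming the
    estimated walking speed stays constant during the test."""
    return distance_to_target_m / estimate_walking_speed(samples)
```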
S630: Based on the time when the subject reaches the target location point 122, control the test vehicle 163 to appear at a test vehicle initial position and travel at a constant second preset speed on a second sub-lane 133. The test vehicle initial position is set in the second sub-lane 133. The second sub-lane 133 intersects the test area 120 to form the target location point 122.
The straight-line distance from the test vehicle initial position to the target location point 122 satisfies the following formula, so that the test vehicle 163 and the subject touch at the target location point 122:
X = X1 + X2, where X1 = V0·t1, X2 = V0·t2 − (1/2)·a·t2², t2 = (V0 − V1)/a, and t1 = t − t2 (Formula 1)
where X is the straight-line distance from the test vehicle initial position to the target location point 122; X1 is the distance traveled by the test vehicle 163 before braking; X2 is the distance traveled by the test vehicle 163 after braking; V0 is the second preset speed; V1 is the speed of the test vehicle 163 when it reaches the target location point 122; t1 is the travel time of the test vehicle 163 before braking; t2 is the braking time of the test vehicle 163; a is the braking deceleration of the test vehicle 163; and t is the time when the subject reaches the target location point 122.
Specifically, in Formula 1, the straight-line distance X from the test vehicle initial position to the target location point 122 is the unknown quantity to be determined. The braking deceleration a of the test vehicle 163 is a known quantity. The second preset speed V0 is a known quantity. The speed V1 of the test vehicle 163 when it reaches the target location point 122 is a known quantity. The time t for the subject to reach the target location point 122 is a known quantity. The travel time t1 of the test vehicle 163 before braking is not given directly, but it can be obtained from t and the braking time t2. Therefore, the straight-line distance X from the test vehicle initial position to the target location point 122 can be obtained according to Formula 1.
In this embodiment, through the above steps S610 to S630, a virtual stress event in which the test vehicle 163 touches the subject is created in the test area 120, and a stimulus is applied to the subject. In this way, the personal safety of the subject is guaranteed while stress response data in the face of a dangerous situation are obtained.
In an embodiment of this application, the first sub-lane 132 is the sub-lane close to the subject. The second sub-lane 133 is the sub-lane far from the subject, so that when the test vehicle 163 is traveling, the auxiliary vehicle 162 can block the test vehicle 163.
Specifically, the auxiliary vehicle 162 serves to block the subject's line of sight. The auxiliary vehicle 162 travels in the first sub-lane 132. The test vehicle 163 travels in the second sub-lane 133. As described above, the first sub-lane 132 and the second sub-lane 133 are parallel to each other. The first sub-lane 132 is close to the subject and the second sub-lane 133 is far from the subject, so that when the test vehicle 163 is traveling, the auxiliary vehicle 162 can block the test vehicle 163 and prevent the subject from observing it. This further increases the unexpectedness of the virtual stress event in which the test vehicle 163 touches the subject, making the obtained stress response data more authentic.
There may be one or more auxiliary vehicles 162. When there is one auxiliary vehicle 162, the body length of the auxiliary vehicle 162 is not less than the length of the travel path of the test vehicle 163. When there are multiple auxiliary vehicles 162, the multiple auxiliary vehicles 162 are connected end to end and arranged in the first sub-lane 132. The multiple auxiliary vehicles 162 move at the same time and stop at the same time.
In this embodiment, by arranging the first sub-lane 132 close to the subject and the second sub-lane 133 away from the subject, the unexpectedness of the virtual stress event in which the test vehicle 163 touches the subject is increased, making the obtained stress response data more authentic.
在本申请的一实施例中,所述步骤S600还包括如下步骤:
S640,在所述受测者的视野范围166内,控制干扰车辆164和/或虚拟行人165出现,以在所述试验车辆163行驶时,吸引所述受测者的注意力。
具体地,如图4所示,所述受测者的视野范围166可以出现所述干扰车辆164。所述干扰车辆164由所述第二车道140右转驶入所述第一车道130时,进入所述受测者的视野范围166,吸引所述受测者的注意力。所述虚拟行人165出现在所述待测区110内,并逐渐向所述试验区120移动,进入所述受测者的视野范围166,吸引所述受测者的注意力。所述干扰车辆164和所述虚拟行人165可以同时出现,也可以只出现干扰车辆164,或只出现虚拟行人165。
本实施例中,通过控制干扰车辆164和/或虚拟行人165出现在所述受测者的视野范围166内,使得在所述试验车辆163行驶时,干扰车辆164和/或虚拟行人165能够有效地吸引所述受测者的注意力。进一步地,配合辅助车辆162的遮挡,干扰车辆164和/或虚拟行人165能够使得所述受测者察觉不到所述试验车辆163,提高了虚拟应激事件的意外性,使得获得的应激反应数据更具真实性。
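One way to script these distractors is as a small spawn plan executed while the test vehicle approaches; the data-structure and entry-point names below are illustrative assumptions, not the published implementation.

```python
from dataclasses import dataclass

@dataclass
class DistractorPlan:
    spawn_delay_s: float   # seconds after the test vehicle 163 starts moving
    kind: str              # "vehicle" (164) or "pedestrian" (165)
    entry: str             # where the distractor enters the subject's field of view

# Example plan: a right-turning vehicle and a pedestrian, both inside the
# subject's field of view 166 while the test vehicle approaches unseen.
plan = [
    DistractorPlan(0.5, "vehicle", "right_turn_from_roadway_140"),
    DistractorPlan(1.0, "pedestrian", "to_be_tested_zone_110"),
]
```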
In an embodiment of the present application, the stress reaction data includes one or more of motion feature data, physiological electrical signals, and plantar pressure data.
Specifically, the motion feature data includes one or more of the subject's velocity, the subject's acceleration, and the subject's displacement relative to the ground. The physiological electrical signals include one or more of electroencephalogram (EEG) signals and surface electromyography (sEMG) signals. The plantar pressure data includes plantar pressure distribution data.
In this embodiment, by collecting multiple types of stress reaction data, the human stress reaction exhibited in the face of a danger signal can be analyzed fully and comprehensively.
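For illustration only, the three data streams could be bundled per sampling instant in a simple record such as the one below; the field names are assumptions rather than the published data format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StressReactionSample:
    """One sampling instant of the collected stress reaction data."""
    timestamp_s: float
    motion: List[float] = field(default_factory=list)            # velocity, acceleration, displacement
    physio_signals: List[float] = field(default_factory=list)    # EEG / sEMG channel values
    plantar_pressure: List[float] = field(default_factory=list)  # per-sensor pressure values
```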
In an embodiment of the present application, the step S500 further includes the following steps S520 to S540:
S520: if the view direction of the subject faces the test zone 120 in the virtual road traffic scene 100, simultaneously send a motion feature data acquisition start instruction to the motion capture module 40, a physiological electrical signal acquisition start instruction to the physiological electrical signal acquisition module 50, and a plantar pressure data acquisition start instruction to the plantar pressure test module 60.
Specifically, the moment at which the virtual scene control module 30 sends the data acquisition start instructions to the individual modules is not limited, as long as it is before the stimulus is applied to the subject.
S540: acquire, in real time, the motion feature data sent by the motion capture module 40, the physiological electrical signals sent by the physiological electrical signal acquisition module 50, and the plantar pressure data sent by the plantar pressure test module 60.
Specifically, the virtual scene control module 30 may instead acquire the motion feature data, the physiological electrical signals, and the plantar pressure data periodically, after each preset acquisition period elapses.
In this embodiment, by sending the motion feature data acquisition start instruction to the motion capture module 40, the physiological electrical signal acquisition start instruction to the physiological electrical signal acquisition module 50, and the plantar pressure data acquisition start instruction to the plantar pressure test module 60 before the stimulus is applied to the subject, the stress reaction data contains both the subject's reaction data under stimulation and the subject's reaction data in the normal state. This contrast facilitates the analysis of the human stress reaction.
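A minimal sketch of this start trigger is shown below; the start_acquisition() method is an assumed interface of the three acquisition modules, used here only to illustrate that all streams begin before the stimulus.

```python
def maybe_start_acquisition(view_faces_test_zone: bool, modules) -> bool:
    """Send start commands to every acquisition module (motion capture,
    physiological signals, plantar pressure) once the subject's view
    direction faces the test zone; return True if acquisition was started."""
    if not view_faces_test_zone:
        return False
    for module in modules:
        module.start_acquisition()
    return True
```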
In an embodiment of the present application, the step S700 includes the following steps S710 to S720:
S710: start timing when the test vehicle 163 touches the subject. After a preset time period has elapsed, send a motion feature data acquisition stop instruction to the motion capture module 40, a physiological electrical signal acquisition stop instruction to the physiological electrical signal acquisition module 50, and a plantar pressure data acquisition stop instruction to the plantar pressure test module 60.
Specifically, the preset time period is set by the tester.
S720: stop receiving the motion feature data sent by the motion capture module 40, the physiological electrical signals sent by the physiological electrical signal acquisition module 50, and the plantar pressure data sent by the plantar pressure test module 60.
Specifically, after the reception of the motion feature data, the physiological electrical signals, and the plantar pressure data has stopped, the test steps end.
In this embodiment, by sending the motion feature data acquisition stop instruction to the motion capture module 40, the physiological electrical signal acquisition stop instruction to the physiological electrical signal acquisition module 50, and the plantar pressure data acquisition stop instruction to the plantar pressure test module 60 after the subject has been stimulated for the preset time period, the stress reaction data contains both the subject's reaction data under stimulation and the subject's reaction data in the normal state. This contrast facilitates the analysis of the human stress reaction.
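The timed stop can be sketched as below; time stamping with time.monotonic() and the stop_acquisition() method name are assumptions used only to illustrate the flow of steps S710 to S720.

```python
import time

def stop_after_contact(contact_monotonic_s: float, preset_period_s: float, modules) -> None:
    """Stop every acquisition module once `preset_period_s` seconds have passed
    since the virtual contact, whose instant was captured with time.monotonic()."""
    remaining = contact_monotonic_s + preset_period_s - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)
    for module in modules:
        module.stop_acquisition()
```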
The present application further provides a human stress reaction test system.
As shown in FIG. 5 and FIG. 6, in an embodiment of the present application, the human stress reaction test system includes a virtual reality environment module 10, a virtual scene display device 20, a virtual scene control module 30, a motion capture module 40, a physiological electrical signal acquisition module 50, a plantar pressure test module 60, and a voice instruction module 80.
The virtual scene control module 30 is connected to the virtual reality environment module 10 and to the virtual scene display device 20, respectively. The motion capture module 40, the physiological electrical signal acquisition module 50, the plantar pressure test module 60, and the voice instruction module 80 are each connected to the virtual scene control module 30.
The virtual reality environment module 10 is configured to build the virtual road traffic scene 100. The virtual scene display device 20 is worn on the subject's head and is configured to present the virtual road traffic scene 100 to the subject. The virtual scene control module 30 is configured to execute the human stress reaction test method described above, controlling the virtual reality environment module 10 to create a virtual stress event and apply a stimulus to the subject, so that the subject produces a stress reaction.
The motion capture module 40 is arranged on the subject's limbs and/or torso and is configured to collect the subject's motion feature data while the subject produces a stress reaction and send the data to the virtual scene control module 30. The physiological electrical signal acquisition module 50 is arranged on the surface of the subject's skin and is configured to collect the subject's physiological electrical signals while the subject produces a stress reaction and send them to the virtual scene control module 30. The plantar pressure test module 60 is attached to the soles of the subject's feet and is configured to collect the subject's plantar pressure data while the subject produces a stress reaction and send the data to the virtual scene control module 30. The voice instruction module 80 is configured to issue prompt voices under the control of the virtual scene control module 30, so as to guide the subject to perform actions.
Specifically, the virtual scene control module 30 may include a human-machine interaction interface. The tester may operate the virtual scene control module 30 through the human-machine interaction interface, thereby sending control instructions to the other modules of the human stress reaction test system.
The motion capture module 40 includes a plurality of markers whose surfaces carry a special reflective material. The markers are attached to the surfaces of the subject's limbs and/or torso and can collect the subject's motion feature data at a sampling frequency above 100 Hz at the instant the stress reaction occurs.
The physiological electrical signal acquisition module 50 includes an EEG signal acquisition unit and a surface EMG signal acquisition unit. The EEG signal acquisition unit includes electrode pads and electrode paste. The electrode pads are fixed to the subject's head, and the electrode paste connects the electrode pads to the scalp, so as to collect the EEG signals generated by the subject during stress perception and stress decision-making. The surface EMG signal acquisition unit includes electrode pads fixed to the surfaces of the subject's major muscle groups. When the subject is stimulated and the muscles exert force, the surface EMG signal acquisition unit can collect the electrical signals of the muscle surface at a frequency above 2000 Hz, and is used to determine the activation levels of the subject's major muscle groups under the stress reaction.
The plantar pressure test module 60 may include a plurality of thin-film pressure sensors, which may be mounted on the subject's insoles. When the subject produces a stress reaction, the thin-film pressure sensors can collect, in real time, the pressure distribution of the feet against the ground while the subject moves.
A position feature acquisition module 70 is arranged on the subject's body and is used to collect the subject's position information and view information in the virtual road traffic scene 100 when the subject produces a stress reaction. Specifically, the position feature acquisition module 70 may include a locator connected to the virtual scene control module 30. When the subject produces a stress reaction, the locator can acquire, in real time, the subject's position information in the virtual road traffic scene 100 and send it to the virtual scene control module 30.
The position feature acquisition module 70 may include a view detection device, which may be mounted on the subject's head and is used to acquire, in real time, the subject's view information in the virtual road traffic scene 100. The view information may include the subject's view direction and/or field of view. The view detection device may acquire the view information in various ways; optionally, it may track the subject's pupil movement to determine the view information.
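One plausible way to decide whether the tracked view direction "faces the test zone 120" is an angle test between the gaze vector and the direction toward the zone centre; the sketch below is an assumption for illustration, including the 30-degree threshold.

```python
import math

def faces_test_zone(subject_pos, view_dir, zone_center, half_angle_deg: float = 30.0) -> bool:
    """Rough check: the angle between the gaze vector and the vector from the
    subject to the test-zone centre must stay below a threshold. Positions and
    directions are (x, y) tuples in scene coordinates."""
    to_zone = (zone_center[0] - subject_pos[0], zone_center[1] - subject_pos[1])
    dot = view_dir[0] * to_zone[0] + view_dir[1] * to_zone[1]
    norm = math.hypot(*view_dir) * math.hypot(*to_zone)
    if norm == 0.0:
        return False
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= half_angle_deg

print(faces_test_zone((0, 0), (1, 0), (5, 1)))  # True: the zone lies nearly straight ahead
```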
In addition, the human stress reaction test system may further include a physical test site. The physical test site may include a physical activity area, which may include a to-be-tested activity area and a physical test area. The to-be-tested activity area corresponds to the to-be-tested zone 110, and the physical test area corresponds to the test zone 120. The activity area may be a 15 m × 5 m test field. A plurality of optical motion-capture cameras may be arranged along the edge of the activity area and used together with the motion capture module 40 to collect the subject's motion feature data.
In this embodiment, the virtual reality environment module 10 builds the virtual road traffic scene 100, so that the subject can instinctively produce a genuine stress reaction in an unexpected situation. The virtual scene display device 20 presents the virtual road traffic scene 100 to the subject. The virtual scene control module 30 changes the virtual road traffic scene 100 and creates the virtual stress event to stimulate the subject, so that the subject produces a stress reaction. The motion capture module 40 collects the subject's motion feature data, the physiological electrical signal acquisition module 50 collects the subject's physiological electrical signals, and the plantar pressure test module 60 collects the subject's plantar pressure data while the subject produces the stress reaction. The human stress reaction test system based on virtual reality technology provided by the present application can apply a realistic stimulus to the subject, so that the subject makes a genuine stress reaction.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; however, as long as such combinations contain no contradiction, they should be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the present application and are described in relative detail, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (21)

  1. A human stress reaction test method, comprising:
    sending, to a virtual reality environment module (10), a request to build a virtual road traffic scene (100);
    after the virtual reality environment module (10) builds the virtual road traffic scene (100), acquiring position information and view information of a subject in the virtual road traffic scene (100);
    when the subject is located in a to-be-tested zone (110) of the virtual road traffic scene (100) and a view direction of the subject faces a test zone (120) of the virtual road traffic scene (100), guiding the subject into the test zone (120) and simultaneously starting to acquire stress reaction data of the subject;
    after determining that the subject is located within the test zone (120), controlling the virtual reality environment module (10) to create a virtual stress event in the test zone (120) to apply a stimulus to the subject, so that the subject produces a stress reaction.
  2. The human stress reaction test method according to claim 1, further comprising:
    determining, based on the position information, whether the subject is located within the to-be-tested zone (110) of the virtual road traffic scene (100).
  3. The human stress reaction test method according to claim 2, further comprising:
    when the subject is located in the to-be-tested zone (110) of the virtual road traffic scene (100), determining, based on the view information, whether the view direction of the subject faces the test zone (120) of the virtual road traffic scene (100).
  4. The human stress reaction test method according to claim 3, further comprising:
    after the subject has been subjected to the stimulus for a preset time period, stopping the acquisition of the stress reaction data.
  5. The human stress reaction test method according to claim 4, wherein the virtual road traffic scene (100) comprises:
    a first roadway (130) extending along a first direction, the first roadway (130) comprising a plurality of mutually parallel sub-lanes (131), each sub-lane (131) extending along the first direction.
  6. The human stress reaction test method according to claim 5, wherein the virtual road traffic scene (100) further comprises:
    a second roadway (140) extending along a second direction, the first direction being perpendicular to the second direction, the second roadway (140) intersecting the first roadway (130) to form a crossroad intersection (150).
  7. The human stress reaction test method according to claim 6, wherein the virtual road traffic scene (100) further comprises:
    a test zone (120) arranged on the first roadway (130), extending along the second direction and running across the first roadway (130), for guiding the subject to pass safely across the first roadway (130), the test zone (120) being arranged adjacent to the crossroad intersection (150).
  8. The human stress reaction test method according to claim 7, wherein the virtual road traffic scene (100) further comprises:
    a traffic light (121) arranged in the test zone (120), for indicating to the subject the passability state of the test zone (120).
  9. The human stress reaction test method according to claim 8, wherein the step of, when the view direction of the subject faces the test zone (120) of the virtual road traffic scene (100), guiding the subject into the test zone (120) and simultaneously starting to acquire the stress reaction data of the subject comprises:
    when the view direction of the subject faces the test zone (120) of the virtual road traffic scene (100), sending an instruction to the virtual reality environment module (10), controlling the traffic light (121) to display an impassable state, and controlling an auxiliary vehicle (162) to appear and travel at a first preset driving speed on a first sub-lane (132) of the first roadway (130);
    acquiring the position of the auxiliary vehicle (162), and based on the position of the auxiliary vehicle (162), calculating a straight-line distance from the position of the auxiliary vehicle (162) to the test zone (120) along the first direction;
    determining whether the straight-line distance from the position of the auxiliary vehicle (162) to the test zone (120) is greater than a preset distance;
    when the straight-line distance from the position of the auxiliary vehicle (162) to the test zone (120) is less than or equal to the preset distance, controlling the auxiliary vehicle (162) to stop and controlling the traffic light (121) to display a passable state;
    sending an instruction to a voice instruction module (80), controlling the voice instruction module (80) to issue an instruction voice, so as to guide the subject through the test zone (120).
  10. The human stress reaction test method according to claim 9, wherein the step of, when the view direction of the subject faces the test zone (120) of the virtual road traffic scene (100), guiding the subject into the test zone (120) and simultaneously starting to acquire the stress reaction data of the subject further comprises:
    when the straight-line distance from the position of the auxiliary vehicle (162) to the test zone (120) is greater than the preset distance, controlling the auxiliary vehicle (162) to continue traveling until the straight-line distance from the position of the auxiliary vehicle (162) to the test zone (120) is less than or equal to the preset distance.
  11. The human stress reaction test method according to claim 10, wherein the step of, after determining that the subject is located within the test zone (120), controlling the virtual reality environment module (10) to create a virtual stress event in the test zone (120) to apply a stimulus to the subject, so that the subject produces a stress reaction, comprises:
    setting a target position point (122) in the test zone (120), the target position point (122) being one of a plurality of intersection points formed where the sub-lanes (131) intersect the test zone (120);
    acquiring the position information of the subject and a walking speed of the subject, and calculating a time for the subject to reach the target position point (122);
    based on the time for the subject to reach the target position point (122), controlling a test vehicle (163) to appear at a test vehicle initial position and travel at a uniform second preset speed on a second sub-lane (133), so that the test vehicle (163) and the subject come into contact at the target position point (122).
  12. The human stress reaction test method according to claim 11, wherein the test vehicle initial position is set on the second sub-lane (133), and the second sub-lane (133) intersects the test zone (120) to form the target position point (122).
  13. The human stress reaction test method according to claim 12, wherein the straight-line distance from the test vehicle initial position to the target position point (122) satisfies the following formulas, so that the test vehicle (163) and the subject come into contact at the target position point (122):
    [Equation 1: rendered in the original publication as image PCTCN2019101880-appb-100001]
    wherein X is the straight-line distance from the test vehicle initial position to the target position point (122), X1 is the distance traveled by the test vehicle (163) before braking, X2 is the distance traveled by the test vehicle (163) after braking, V0 is the second preset speed, V1 is the speed of the test vehicle (163) when it reaches the target position point (122), t1 is the travel time of the test vehicle (163) before braking, t2 is the braking time of the test vehicle (163), a is the braking deceleration of the test vehicle (163), and t is the time for the subject to reach the target position point (122).
  14. The human stress reaction test method according to claim 13, wherein the first sub-lane (132) is the sub-lane close to the subject and the second sub-lane (133) is the sub-lane far from the subject, so that the auxiliary vehicle (162) can block the test vehicle (163) from view while the test vehicle (163) is traveling.
  15. The human stress reaction test method according to claim 14, wherein the step of, after determining that the subject is located within the test zone (120), controlling the virtual reality environment module (10) to create a virtual stress event in the test zone (120) to apply a stimulus to the subject, so that the subject produces a stress reaction, further comprises:
    within a field of view of the subject, controlling a distractor vehicle (164) and/or a virtual pedestrian (165) to appear, so as to attract the subject's attention while the test vehicle (163) is traveling.
  16. The human stress reaction test method according to claim 15, wherein the stress reaction data comprises one or more of position feature data, motion feature data, physiological electrical signals, and plantar pressure data.
  17. The human stress reaction test method according to claim 16, wherein the step of, when the view direction of the subject faces the test zone (120) of the virtual road traffic scene (100), guiding the subject into the test zone (120) and simultaneously starting to acquire the stress reaction data of the subject further comprises:
    when the view direction of the subject faces the test zone (120) of the virtual road traffic scene (100), simultaneously sending a motion feature data acquisition start instruction to a motion capture module (40), a physiological electrical signal acquisition start instruction to a physiological electrical signal acquisition module (50), and a plantar pressure data acquisition start instruction to a plantar pressure test module (60);
    acquiring, in real time, the motion feature data sent by the motion capture module (40), the physiological electrical signals sent by the physiological electrical signal acquisition module (50), and the plantar pressure data sent by the plantar pressure test module (60).
  18. The human stress reaction test method according to claim 17, wherein the step of stopping the acquisition of the stress reaction data after the subject has been subjected to the stimulus for the preset time period comprises:
    starting timing when the test vehicle (163) touches the subject, and after the preset time period has elapsed, sending a motion feature data acquisition stop instruction to the motion capture module (40), a physiological electrical signal acquisition stop instruction to the physiological electrical signal acquisition module (50), and a plantar pressure data acquisition stop instruction to the plantar pressure test module (60);
    stopping receiving the motion feature data sent by the motion capture module (40), the physiological electrical signals sent by the physiological electrical signal acquisition module (50), and the plantar pressure data sent by the plantar pressure test module (60).
  19. A human stress reaction test system, comprising:
    a virtual reality environment module (10), configured to build a virtual road traffic scene (100);
    a virtual scene display device (20), worn on a subject's head and configured to present the virtual road traffic scene (100) to the subject; and
    a virtual scene control module (30), connected to the virtual reality environment module (10) and the virtual scene display device (20) respectively, and comprising a memory and one or more processors, wherein the memory stores computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
    sending, to the virtual reality environment module (10), a request to build the virtual road traffic scene (100);
    after the virtual reality environment module (10) builds the virtual road traffic scene (100), acquiring position information and view information of the subject in the virtual road traffic scene (100);
    when the subject is located in a to-be-tested zone (110) of the virtual road traffic scene (100) and a view direction of the subject faces a test zone (120) of the virtual road traffic scene (100), guiding the subject into the test zone (120) and simultaneously starting to acquire stress reaction data of the subject;
    after determining that the subject is located within the test zone (120), controlling the virtual reality environment module (10) to create a virtual stress event in the test zone (120) to apply a stimulus to the subject, so that the subject produces a stress reaction.
  20. The human stress reaction test system according to claim 19, further comprising:
    a motion capture module (40), connected to the virtual scene control module (30), arranged on the subject's limbs and/or torso, and configured to collect the subject's motion feature data while the subject produces a stress reaction and send the data to the virtual scene control module (30);
    a physiological electrical signal acquisition module (50), connected to the virtual scene control module (30), arranged on the surface of the subject's skin, and configured to collect the subject's physiological electrical signals while the subject produces a stress reaction and send them to the virtual scene control module (30);
    a plantar pressure test module (60), connected to the virtual scene control module (30), attached to the soles of the subject's feet, and configured to collect the subject's plantar pressure data while the subject produces a stress reaction and send the data to the virtual scene control module (30);
    a position feature acquisition module (70), connected to the virtual scene control module (30) and configured to collect the subject's position information and view information in the virtual road traffic scene (100) while the subject produces a stress reaction and send them to the virtual scene control module (30); and
    a voice instruction module (80), connected to the virtual scene control module (30) and configured to issue prompt voices under the control of the virtual scene control module (30), so as to guide the subject to perform actions.
  21. A computer-readable storage medium having computer-readable instructions stored thereon, wherein, when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps:
    sending, to a virtual reality environment module (10), a request to build a virtual road traffic scene (100);
    after the virtual reality environment module (10) builds the virtual road traffic scene (100), acquiring position information and view information of a subject in the virtual road traffic scene (100);
    when the subject is located in a to-be-tested zone (110) of the virtual road traffic scene (100) and a view direction of the subject faces a test zone (120) of the virtual road traffic scene (100), guiding the subject into the test zone (120) and simultaneously starting to acquire stress reaction data of the subject;
    after determining that the subject is located within the test zone (120), controlling the virtual reality environment module (10) to create a virtual stress event in the test zone (120) to apply a stimulus to the subject, so that the subject produces a stress reaction.
PCT/CN2019/101880 2019-06-05 2019-08-22 Testing method and testing system for human stress reaction, and computer-readable storage medium WO2020244060A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/006,419 US11751785B2 (en) 2019-06-05 2020-08-28 Testing method and testing system for human stress reaction, and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910486529.1 2019-06-05
CN201910486529.1A CN110222639B (zh) 2019-06-05 2019-06-05 Human stress reaction testing method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/006,419 Continuation US11751785B2 (en) 2019-06-05 2020-08-28 Testing method and testing system for human stress reaction, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020244060A1 true WO2020244060A1 (zh) 2020-12-10

Family

ID=67819374

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101880 WO2020244060A1 (zh) Testing method and testing system for human stress reaction, and computer-readable storage medium

Country Status (3)

Country Link
US (1) US11751785B2 (zh)
CN (1) CN110222639B (zh)
WO (1) WO2020244060A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114264849A (zh) * 2021-12-22 2022-04-01 上海临港电力电子研究有限公司 Adjustable testing device for a power module and application method thereof

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111818322A (zh) * 2020-03-31 2020-10-23 Tongji University Immersive pedestrian traffic behavior experiment platform and method based on virtual reality technology
CN113080900B (zh) * 2020-04-26 2024-02-06 Hefei University of Technology Method and system for testing human stress reaction in a sudden-noise environment
CN111685736A (zh) * 2020-06-19 2020-09-22 Central South University Stress reaction test system for train drivers and passengers and testing method thereof
RU2766391C1 (ru) * 2021-04-28 2022-03-15 Елена Леонидовна Малиновская Method for analysing the behaviour of a test subject in order to identify his psychological characteristics by means of virtual reality technologies


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2621488C2 (ru) * 2013-02-14 2017-06-06 Seiko Epson Corporation Head-mounted display and control method for a head-mounted display
DE102015200157A1 (de) * 2015-01-08 2016-07-28 Avl List Gmbh Method for operating a driving simulator
IL310060A (en) * 2016-05-09 2024-03-01 Magic Leap Inc Augmented reality systems and methods for user health analysis
CN205983964U (zh) * 2016-06-02 2017-02-22 杭州路宏电力科技有限公司 Tactile training system
CN108968989A (zh) * 2018-08-04 2018-12-11 淄博职业学院 Psychology-based stress training system and method for using same
US10885710B2 (en) * 2019-03-14 2021-01-05 Microsoft Technology Licensing, Llc Reality-guided roaming in virtual reality

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150328458A1 (en) * 2008-07-02 2015-11-19 Microtransponder, Inc. Methods for enhancing exposure therapy using pairing with vagus nerve stimulation
CN107957772A (zh) * 2016-10-17 2018-04-24 Alibaba Group Holding Limited Processing method for capturing VR images in a real scene and method for realizing a VR experience
US20180190376A1 (en) * 2017-01-04 2018-07-05 StoryUp, Inc. System and method for modifying biometric activity using virtual reality therapy
CN108091203A (zh) * 2017-12-07 2018-05-29 Xi'an Aeronautics Computing Technique Research Institute of AVIC Driving training system for stress traffic scenarios based on virtual reality technology
CN107928685A (zh) * 2017-12-13 2018-04-20 Jilin University Stress response analysis device and method based on driver eye-movement characteristics
CN108113686A (zh) * 2017-12-21 2018-06-05 Beihang University Driver cognitive load testing device based on a tactile detection response task
CN108109673A (zh) * 2018-01-22 2018-06-01 阿呆科技(北京)有限公司 Human sensory data measurement system and method
CN108542404A (zh) * 2018-03-16 2018-09-18 成都虚实梦境科技有限责任公司 Attention assessment method and apparatus, VR device, and readable storage medium
CN109044374A (zh) * 2018-07-19 2018-12-21 杭州心景科技有限公司 Integrated audiovisual continuous performance test method, apparatus and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114264849A (zh) * 2021-12-22 2022-04-01 上海临港电力电子研究有限公司 Adjustable testing device for a power module and application method thereof
CN114264849B (zh) * 2021-12-22 2023-06-16 上海临港电力电子研究有限公司 Adjustable testing device for a power module and application method thereof

Also Published As

Publication number Publication date
CN110222639B (zh) 2020-03-31
US11751785B2 (en) 2023-09-12
US20200390380A1 (en) 2020-12-17
CN110222639A (zh) 2019-09-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19932118

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19932118

Country of ref document: EP

Kind code of ref document: A1