CN109044374B - Method, device and system for integrated audio-visual continuous execution test - Google Patents

Method, device and system for integrated audio-visual continuous execution test

Info

Publication number
CN109044374B
Authority
CN
China
Prior art keywords
test
result
visual
attention
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810799866.1A
Other languages
Chinese (zh)
Other versions
CN109044374A (en)
Inventor
曹群
章显钻
王柳苏
骆宏
黄晶
宋海东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Med Vision Technology Co ltd
Original Assignee
Hangzhou Med Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Med Vision Technology Co ltd filed Critical Hangzhou Med Vision Technology Co ltd
Priority to CN201810799866.1A priority Critical patent/CN109044374B/en
Publication of CN109044374A publication Critical patent/CN109044374A/en
Application granted granted Critical
Publication of CN109044374B publication Critical patent/CN109044374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 - Evaluating the state of mind, e.g. depression, anxiety
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/168 - Evaluating attention deficit, hyperactivity
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/22 - Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/26 - Functional testing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/20 - Scenes; Scene-specific elements in augmented reality scenes

Abstract

The invention provides a method, a device and a system for an integrated audio-visual continuous execution test, relating to the technical field of virtual reality. A virtual classroom scene is displayed to a target object to be tested through VR glasses, and operations performed by the target object through a handle according to the virtual classroom scene and preset audio-visual test content are received. The preset audio-visual test content comprises: first test content, second test content and third test content. Test data corresponding to the preset test content are acquired based on the operations and analyzed to obtain the test results of the target object. The first test result is an attention-stability result; the second test result comprises an attention-stability result and an attention-allocation result under far interference; the third test result comprises an attention-stability result, an attention-allocation result and an attention-diversion result. Through the three test contents displayed in the VR scene, the invention can accurately evaluate the attention stability, attention allocation and attention diversion of the target object.

Description

Method, device and system for integrated audio-visual continuous execution test
Technical Field
The invention relates to the technical field of virtual reality, in particular to a method, a device and a system for integrating audio-visual continuous execution testing.
Background
The Integrated Visual and Auditory Continuous Performance Test (IVA-CPT) has important applications in assessing autism, anxiety, attention deficit hyperactivity disorder in children, and related conditions.
In the existing IVA-CPT method based on VR (virtual reality) technology, the evaluation process is relatively simple: it comprises only an exercise stage and a stage that evaluates attention stability and attention allocation, and no data are recorded during the exercise stage; external interference factors are therefore too large, and the evaluation result is very inaccurate.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus and a system for an integrated audio-visual continuous execution test, which can accurately evaluate the attention stability, attention allocation and attention diversion of a target object through three test contents displayed in a VR scene.
In a first aspect, an embodiment of the present invention provides an integrated audiovisual continuous execution testing method, applied to a system composed of VR glasses, a handle, and a display terminal, including:
displaying a virtual classroom scene to a target object to be tested through VR glasses;
receiving operations performed by the target object to be tested through the handle according to the virtual classroom scene and preset audio-visual test content; the preset audio-visual test content comprises: first test content, second test content and third test content; the operations comprise: a first operation, a second operation and a third operation corresponding respectively to the first test content, the second test content and the third test content;
acquiring, based on the operations, test data corresponding to the preset test content; the test data comprise: first test data, second test data and third test data;
analyzing the test data to obtain test results of the target object to be tested; the test results comprise: a first test result, a second test result and a third test result; the first test result is an attention-stability result; the second test result comprises an attention-stability result and an attention-allocation result under far interference; the third test result comprises an attention-stability result, an attention-allocation result and an attention-diversion result.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, wherein the first test content comprises: sequentially displaying, in the virtual classroom scene, a 2-min visual channel, a 2-min auditory channel and a 4-min audio-visual dual channel; the visual channel comprises: 40 signals and 40 noises randomly presented as pictures; the auditory channel comprises: 40 signals and 40 noises randomly presented as audio; the audio-visual dual channel comprises: 40 signals and 40 noises randomly presented as audio and as pictures, respectively;
the second test content comprises: displaying a 4-min audio-visual dual channel in the virtual classroom scene, wherein a far-interference stimulus is presented every 30 s; the audio-visual dual channel comprises: 40 signals and 40 noises randomly presented as audio and as pictures, respectively;
the third test content comprises: displaying, in the virtual classroom scene, a 2-min audio-visual dual channel, a 6-s visual channel and a further 2-min audio-visual dual channel, wherein a near-interference stimulus is presented in the 6-s visual channel.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, wherein the first test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding respectively to the auditory channel, the visual channel and the audio-visual dual channel;
the second test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding to the audio-visual dual channel, and the number of hit signals when the far-interference stimuli occur;
the third test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding to the audio-visual dual channels before and after the near-interference stimulus.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, wherein analyzing the test data to obtain the test result of the target object to be tested specifically comprises:
calculating the alertness d′ value, the signal hit rate, the mean reaction time and the standard deviation of the reaction time corresponding respectively to the auditory channel, the visual channel and the audio-visual dual channel, from the reaction times and the numbers of hit signals and false-alarm signals of each channel;
generating a trend graph of the changes in the signal hit rate, the mean reaction time and the standard deviation of the reaction time;
and taking the alertness d′ value, the signal hit rate, the mean reaction time, the standard deviation of the reaction time and the trend graph as the attention-stability result among the test results of the target object to be tested.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, wherein calculating the alertness d′ value, the signal hit rate, the mean reaction time and the standard deviation of the reaction time corresponding respectively to the auditory channel, the visual channel and the audio-visual dual channel specifically comprises:
calculating the alertness d' value by the following formula:
d′ = Z_hit - Z_false alarm
where Z_hit is the value in the POZ conversion table corresponding to the probability of hitting the signal, and Z_false alarm is the value in the POZ conversion table corresponding to the probability of a false alarm;
the signal hit rate is the number of hit signals/the number of total signals;
where the total number of signals is the sum of the number of hit signals and the number of missed signals;
the mean reaction time is:
RT_mean = (RT_1 + RT_2 + ... + RT_n) / n
and the standard deviation of the reaction time is:
S_RT = sqrt( ((RT_1 - RT_mean)^2 + ... + (RT_n - RT_mean)^2) / (n - 1) )
where RT_i is the i-th recorded reaction time and n is the number of recorded reaction times.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, wherein, when the second test data are obtained, analyzing the test data to obtain the test result of the target object to be tested further comprises:
calculating the attention-allocation result Q among the test results of the target object to be tested by the following formula:
Q = (S2/S1 + F2/F1) / 2
where S1 is the number of correct responses to sound stimuli presented alone; F1 is the number of correct responses to light stimuli presented alone; S2 is the number of correct responses to sound under simultaneous sound-and-light stimulation; and F2 is the number of correct responses to light under simultaneous sound-and-light stimulation;
when Q < 0.5, it is judged that there is no attention-allocation value;
when 0.5 < Q < 1.0, it is judged that an attention-allocation value exists;
when Q = 1.0, the attention-allocation value is judged to be maximal;
when Q > 1.0, the attention-allocation value is judged to be invalid.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, wherein, when the third test data are obtained, analyzing the test data to obtain the test result of the target object to be tested further comprises:
determining the attention-diversion result among the test results of the target object to be tested according to a paired-sample t test,
wherein the t value of the paired-sample t test is calculated by the following formula:
t = d_mean / (S / sqrt(n))
where
d_mean = (d_1 + d_2 + ... + d_n) / n
is the mean of the differences between the reaction times of the paired samples before and after the near-interference stimulus; S is the standard deviation of those differences; and n is the number of paired samples before and after the near-interference stimulus.
In a second aspect, an embodiment of the present invention further provides an integrated audio-visual continuous execution testing apparatus, applied to a system composed of VR glasses, a handle and a display terminal, comprising:
the display module is used for displaying a virtual classroom scene to a target object to be tested through VR glasses;
the operation receiving module is used for receiving the operations performed by the target object to be tested through the handle according to the virtual classroom scene and the preset audio-visual test content; the preset audio-visual test content comprises: first test content, second test content and third test content; the operations comprise: a first operation, a second operation and a third operation corresponding respectively to the first test content, the second test content and the third test content;
the data acquisition module is used for acquiring, based on the operations, test data corresponding to the preset test content; the test data comprise: first test data, second test data and third test data;
the data analysis module is used for analyzing the test data to obtain the test results of the target object to be tested; the test results comprise: a first test result, a second test result and a third test result; the first test result is an attention-stability result; the second test result comprises an attention-stability result and an attention-allocation result under far interference; and the third test result comprises an attention-stability result, an attention-allocation result and an attention-diversion result.
In a third aspect, an embodiment of the present invention further provides an integrated audiovisual continuous execution testing system, including VR glasses, a handle, and a display terminal;
the VR glasses and the handle are respectively in communication connection with the display terminal;
the display terminal is provided with the integrated audio-visual continuous execution testing device according to the second aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to execute the method according to the first aspect.
The embodiment of the invention has the following beneficial effects:
the integrated audio-visual continuous execution testing method provided by the embodiment of the invention is applied to a system consisting of VR glasses, a handle and a display terminal, and firstly, a virtual classroom scene is displayed to a target object to be tested through the VR glasses; receiving the operation executed by a target object to be tested through a handle according to a virtual classroom scene and preset audio-visual test content; the presetting of the audiovisual test content comprises: first test content, second test content and third test content; the operation comprises the following steps: a first operation, a second operation and a third operation corresponding to the first test content, the second test content and the third test content, respectively; acquiring test data corresponding to preset test content based on operation; the test data includes: first test data, second test data, and third test data; analyzing according to the test data to obtain a test result of the target object to be tested; the test results include: a first test result, a second test result and a third test result; the first test result is an attention stabilization result; the second test result is an attention stable result and an attention distribution result under far interference; the third test result is an attention stabilization result, an attention allocation result, and an attention diversion result. The invention can accurately evaluate the attention stability, the attention distribution and the attention transfer condition of the target object through the test content of the non-interference stimulation displayed in the VR scene and the test content of the far interference stimulation and the near interference stimulation respectively.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of an integrated audio-visual continuous execution testing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another integrated audio-visual continuous execution testing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an integrated audio-visual continuous execution testing apparatus according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of an integrated audio-visual continuous execution testing system according to a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the existing IVA-CPT method based on VR (virtual reality) technology, the evaluation process is simple and the evaluation result is very inaccurate when external interference factors are large. On this basis, embodiments of the present invention provide an integrated audio-visual continuous execution testing method, apparatus and system, which can accurately evaluate the attention stability, attention allocation and attention diversion of a target object through interference-free test content and test contents with far-interference and near-interference stimuli displayed in a VR scene.
To facilitate understanding of the present embodiment, the integrated audio-visual continuous execution testing method disclosed by the embodiment of the present invention is first described in detail.
Embodiment one:
An embodiment of the present invention provides an integrated audio-visual continuous execution testing method, applied to a system composed of VR glasses, a handle and a display terminal; as shown in FIG. 1, the method comprises the following steps:
S11: displaying a virtual classroom scene to the target object to be tested through the VR glasses.
In this embodiment, the target object to be tested is a child aged 6 to 15, and the whole testing process takes about 16 min. First, the display terminal presents a virtual classroom scene to the child to be tested through the VR glasses. The virtual classroom scene includes, for example, a number of desks and chairs, a female teacher, a blackboard, and, visible in the peripheral field of view, large windows, the classroom doorway and children sitting at the desks.
Before the test starts, instructions lasting about 30 s appear in the virtual scene and are spoken by the female-teacher character, so that the child to be tested can gradually adapt to the scene and the test contents. The child is then given a period of practice for which no data are recorded: signal stimuli (cat) and noise stimuli (dog, bird, fish) are presented on the virtual blackboard, and the child is required to press the handle button on hearing or seeing the signal stimulus (cat).
S12: receiving the operations performed by the target object to be tested through the handle according to the virtual classroom scene and the preset audio-visual test content.
After the child to be tested finishes the practice, the formal testing stage begins. The preset audio-visual test content comprises: first test content, second test content and third test content; the operations performed by the child through the handle comprise: a first operation, a second operation and a third operation corresponding respectively to the first, second and third test contents.
The first test content comprises: sequentially displaying, in the virtual classroom scene, a 2-min visual channel, a 2-min auditory channel and a 4-min audio-visual dual channel. The visual channel comprises 40 signals and 40 noises randomly presented as pictures; the auditory channel comprises 40 signals and 40 noises randomly presented as audio; the audio-visual dual channel comprises 40 signals and 40 noises randomly presented as audio and as pictures, respectively.
The second test content comprises: displaying a 4-min audio-visual dual channel in the virtual classroom scene, with a far-interference stimulus presented every 30 s. The audio-visual dual channel comprises 40 signals and 40 noises randomly presented as audio and as pictures, respectively.
This stage incorporates far-interference stimuli of varying degrees, presented at fixed times and fixed positions, and constitutes a moderately difficult test of attention stability and attention allocation. Children with poor attention stability may be distracted (i.e., miss a signal when the interference stimulus appears). One interference stimulus (e.g., a vehicle roaring outside the window, a vehicle passing outside the window, footsteps outside the classroom door, or a person walking at the classroom doorway) is presented in turn every 30 s, and each interference stimulus occurs together with a signal. The other signals and noises are presented in random order. The second test content includes the following modes:
(1) visual-channel task + visual interference stimuli, e.g., a vehicle passing outside the window, a person walking at the classroom doorway;
(2) visual-channel task + auditory interference stimuli, e.g., a vehicle roaring outside the window, footsteps outside the classroom door;
(3) auditory-channel task + visual interference stimuli, e.g., a vehicle passing outside the window, a person walking at the classroom doorway;
(4) auditory-channel task + auditory interference stimuli, e.g., a vehicle roaring outside the window, footsteps outside the classroom door.
The third test content comprises: displaying, in the virtual classroom scene, a 2-min audio-visual dual channel, a 6-s visual channel and a further 2-min audio-visual dual channel. A near-interference stimulus is presented during the 6-s visual channel.
The child's attention is attracted and diverted by a near-interference stimulus (such as a hovering paper plane) appearing at the center of the child's visual field, and the child's attention-diversion ability is assessed by comparing the stability of attention before and after the interference stimulus appears. While the near-interference stimulus is present, the child's attention is attracted, and whether the child responds correctly during this period is not included in the final data record, which ensures the purity of the final data.
In the above test contents, the signals and noises may be presented in various forms, which are not specifically limited; preferably the stimuli do not depend on color and may be fruits, patterns and the like whose names have the same number of syllables (e.g., apple/banana, flower/leaf). Each stimulus (sound/picture) is presented for 150 ms with an interval of 1350 ms, and the child presses the handle button to complete the operation required by the test contents. An illustrative schedule with these parameters is sketched below.
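The following Python fragment is an illustration only, not part of the patent; the helper name `make_schedule` and the concrete stimulus words are assumptions borrowed from the practice-stage example above. It also shows that 80 trials at 150 ms + 1350 ms each fill exactly the 2 min allotted to one channel.

```python
import random

SIGNAL = "cat"
NOISES = ["dog", "bird", "fish"]
PRESENT_MS, INTERVAL_MS = 150, 1350  # presentation time and inter-stimulus interval

def make_schedule(n_signals=40, n_noises=40, seed=None):
    # Random order of 40 signals and 40 noises, with onset times in milliseconds.
    rng = random.Random(seed)
    stimuli = [SIGNAL] * n_signals + [rng.choice(NOISES) for _ in range(n_noises)]
    rng.shuffle(stimuli)
    period_ms = PRESENT_MS + INTERVAL_MS  # 1500 ms per trial; 80 trials = 2 min
    return [(i * period_ms, stim) for i, stim in enumerate(stimuli)]

for onset, stim in make_schedule(seed=1)[:5]:
    print(onset, stim)
```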
S13: acquiring test data corresponding to the preset test content based on the operations.
After the test is finished, the test data corresponding to the first test content, the second test content and the third test content, namely the first test data, the second test data and the third test data, are extracted.
The first test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding respectively to the auditory channel, the visual channel and the audio-visual dual channel. The second test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding to the audio-visual dual channel, and the number of hit signals when the far-interference stimuli occur. The third test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding to the audio-visual dual channels before and after the near-interference stimulus.
S14: analyzing the test data to obtain the test results of the target object to be tested.
After the test data are extracted, they are analyzed and calculated to obtain the final test results of the target object to be tested. The test results comprise: a first test result, a second test result and a third test result; the first test result is an attention-stability result; the second test result comprises an attention-stability result and an attention-allocation result under far interference; the third test result comprises an attention-stability result, an attention-allocation result and an attention-diversion result.
Analyzing according to the test data to obtain a test result of the target object to be tested, specifically including the following steps, as shown in fig. 2:
S141: calculating the alertness d′ value, the signal hit rate, the mean reaction time and the standard deviation of the reaction time corresponding respectively to the auditory channel, the visual channel and the audio-visual dual channel, from the reaction times and the numbers of hit signals and false-alarm signals of each channel.
Calculating the alertness d' value by the following formula:
d′ = Z_hit - Z_false alarm
where Z_hit is the value in the POZ conversion table corresponding to the probability of hitting the signal, and Z_false alarm is the value in the POZ conversion table corresponding to the probability of a false alarm;
the signal hit rate is the number of hit signals/the number of total signals;
where the total number of signals is the sum of the number of hit signals and the number of missed signals;
the mean reaction time is:
RT_mean = (RT_1 + RT_2 + ... + RT_n) / n
and the standard deviation of the reaction time is:
S_RT = sqrt( ((RT_1 - RT_mean)^2 + ... + (RT_n - RT_mean)^2) / (n - 1) )
where RT_i is the i-th recorded reaction time and n is the number of recorded reaction times.
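As an illustration of these calculations, the sketch below (not part of the patent) replaces the POZ conversion-table lookup with the standard-normal inverse CDF `scipy.stats.norm.ppf`, which performs the same probability-to-Z mapping; the function names and example numbers are invented.

```python
from statistics import mean, stdev

from scipy.stats import norm  # norm.ppf plays the role of the POZ table

def alertness_d_prime(p_hit: float, p_false_alarm: float) -> float:
    """d' = Z(hit probability) - Z(false-alarm probability)."""
    return norm.ppf(p_hit) - norm.ppf(p_false_alarm)

def channel_indices(reaction_times_ms, n_hits, n_misses):
    # Hit rate: hit signals / total signals (hits + misses).
    hit_rate = n_hits / (n_hits + n_misses)
    return {
        "hit_rate": hit_rate,
        "rt_mean": mean(reaction_times_ms),
        "rt_sd": stdev(reaction_times_ms),  # sample SD, n - 1 denominator
    }

# Example with made-up numbers (reaction-time list abbreviated):
print(alertness_d_prime(p_hit=35 / 40, p_false_alarm=4 / 40))
print(channel_indices([420, 455, 390, 468, 430], n_hits=35, n_misses=5))
```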
S142: generating a trend graph of the changes in the signal hit rate, the mean reaction time and the standard deviation of the reaction time.
The trend graph is a line graph of the hit rate, the mean reaction time and the standard deviation of the reaction time extracted at fixed time intervals.
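One possible way to extract such a trend series, shown purely as an assumed sketch, is to bin time-stamped signal trials into fixed windows and compute the three indices per window:

```python
from collections import defaultdict
from statistics import mean, stdev

def trend_series(signal_trials, window_s=60.0):
    """signal_trials: iterable of (t_seconds, hit: bool, rt_ms or None)."""
    bins = defaultdict(list)
    for t, hit, rt in signal_trials:
        bins[int(t // window_s)].append((hit, rt))
    series = []
    for window in sorted(bins):
        rows = bins[window]
        rts = [rt for _, rt in rows if rt is not None]
        series.append({
            "window": window,
            "hit_rate": sum(1 for hit, _ in rows if hit) / len(rows),
            "rt_mean": mean(rts) if rts else None,
            "rt_sd": stdev(rts) if len(rts) > 1 else None,
        })
    return series  # one point per fixed interval, ready for a line graph
```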
S143: and taking the alertness d' value, the signal hit rate, the reaction time mean value, the standard deviation of the reaction time and the trend chart as attention stable results in the test results of the target object to be tested.
When the second test data are acquired, step S14 (analyzing the test data to obtain the test result of the target object to be tested) further comprises the following steps:
calculating the attention-allocation result Q among the test results of the target object to be tested by the following formula:
Q = (S2/S1 + F2/F1) / 2
where S1 is the number of correct responses to sound stimuli presented alone; F1 is the number of correct responses to light stimuli presented alone; S2 is the number of correct responses to sound under simultaneous sound-and-light stimulation; and F2 is the number of correct responses to light under simultaneous sound-and-light stimulation;
when Q < 0.5, it is judged that there is no attention-allocation value;
when 0.5 < Q < 1.0, it is judged that an attention-allocation value exists;
when Q = 1.0, the attention-allocation value is judged to be maximal;
when Q > 1.0, the attention-allocation value is judged to be invalid.
By calculating the Q value, the attention-allocation result of the target object to be tested can be obtained.
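A compact sketch of this decision rule follows; note that the exact form of the Q formula is reconstructed here from the S1/F1/S2/F2 definitions and the 0.5/1.0 thresholds above, so it should be read as an assumption rather than as the authoritative formula.

```python
def attention_allocation(s1: int, f1: int, s2: int, f2: int):
    # Assumed form: Q = (S2/S1 + F2/F1) / 2, so that performing only one
    # of the two tasks in the dual condition yields Q = 0.5 exactly.
    q = 0.5 * (s2 / s1 + f2 / f1)
    if q < 0.5:
        verdict = "no attention-allocation value"
    elif q < 1.0:
        verdict = "attention-allocation value present"
    elif q == 1.0:
        verdict = "maximum attention-allocation value"
    else:
        verdict = "invalid (dual-task scores exceed single-task scores)"
    return q, verdict

print(attention_allocation(s1=40, f1=38, s2=30, f2=28))
```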
When the third test data are acquired, step S14 (analyzing the test data to obtain the test result of the target object to be tested) further comprises the following steps:
determining the attention-diversion result among the test results of the target object to be tested according to the paired-sample t test:
the calculation formula of the t value of the paired sample t test is as follows:
t = (d_mean - μ0) / (S / sqrt(n))
where
d_mean = (d_1 + d_2 + ... + d_n) / n
is the mean of the differences between the reaction times of the paired samples before and after the near-interference stimulus; S is the standard deviation of those differences; and n is the number of paired samples before and after the near-interference stimulus. The hypothesized population mean difference μ0 is preset to 0 here, so the above equation reduces to t = d_mean / (S / sqrt(n)).
The attention-diversion condition of the target object to be tested is evaluated by comparing the two groups of reaction times before and after the interference stimulus occurs; the statistical method used is the paired-sample t test.
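For illustration, the same statistic can be obtained from a library routine: `scipy.stats.ttest_rel` implements exactly this paired-sample t test with a hypothesized mean difference of 0. The reaction-time values below are invented.

```python
from scipy.stats import ttest_rel

# Reaction times (ms) on matched trials before and after the
# near-interference stimulus; the values are made up for the example.
rt_before = [420, 455, 390, 468, 430, 445]
rt_after = [510, 470, 480, 495, 460, 505]

t_stat, p_value = ttest_rel(rt_after, rt_before)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A significant t value indicates that attention stability changed after
# the near-interference stimulus appeared.
```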
Furthermore, the formula β = O_hit / O_false alarm can be used to judge the performance of the target object to be tested during the test, where O_hit is the vertical-axis (ordinate) value in the POZ conversion table corresponding to the hit rate P_hit, and O_false alarm is the vertical-axis value corresponding to the false-alarm rate P_false alarm.
Generally, β > 1 indicates that the subject adopts a strict criterion; a β value close to or equal to 1 indicates a criterion that is neither strict nor loose; and a β value less than 1 indicates that the subject responds indiscriminately.
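In signal-detection terms, the O values are the ordinates (heights) of the standard normal density at the corresponding Z scores, so β can also be computed without the conversion table; the sketch below rests on that assumption.

```python
from scipy.stats import norm

def response_criterion_beta(p_hit: float, p_false_alarm: float) -> float:
    # beta = ordinate at Z(hit rate) / ordinate at Z(false-alarm rate)
    return norm.pdf(norm.ppf(p_hit)) / norm.pdf(norm.ppf(p_false_alarm))

print(response_criterion_beta(35 / 40, 4 / 40))  # > 1: strict criterion
```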
The integrated audio-visual continuous execution testing method provided by the embodiment of the invention can accurately evaluate the attention stability, attention allocation and attention diversion of the target object through the interference-free test content and the test contents with far-interference and near-interference stimuli displayed in the VR scene.
Embodiment two:
An embodiment of the present invention further provides an integrated audio-visual continuous execution testing apparatus, applied to a system composed of VR glasses, a handle and a display terminal. As shown in FIG. 3, the apparatus comprises: a display module 21, an operation receiving module 22, a data acquisition module 23 and a data analysis module 24.
The display module 21 is configured to display a virtual classroom scene to the target object to be tested through the VR glasses. The operation receiving module 22 is configured to receive the operations performed by the target object through the handle according to the virtual classroom scene and the preset audio-visual test content; the preset audio-visual test content comprises: first test content, second test content and third test content; the operations comprise: a first operation, a second operation and a third operation corresponding respectively to the first, second and third test contents. The data acquisition module 23 is configured to acquire, based on the operations, test data corresponding to the preset test content; the test data comprise: first test data, second test data and third test data. The data analysis module 24 is configured to analyze the test data to obtain the test results of the target object; the test results comprise: a first test result, a second test result and a third test result; the first test result is an attention-stability result; the second test result comprises an attention-stability result and an attention-allocation result under far interference; and the third test result comprises an attention-stability result, an attention-allocation result and an attention-diversion result.
Each module of the integrated audio-visual continuous execution testing apparatus provided by the embodiment of the invention has the same technical features as the integrated audio-visual continuous execution testing method, and can therefore realize the same functions. For the specific working process of each module, reference is made to the method embodiment above, which is not repeated here.
Embodiment three:
the embodiment of the present invention further provides an integrated audio-visual continuous execution testing system, which is shown in fig. 4 and includes VR glasses 31, a handle 33 and a display terminal 32. Wherein, the VR glasses 31 and the handle 33 are respectively connected with the display terminal 32 in a communication manner; the display terminal 32 is provided with the integrated audio-visual continuous execution testing device 321 as described in the second embodiment.
Each module of the integrated audio-visual continuous execution testing system provided by the embodiment of the invention has the same technical features as the integrated audio-visual continuous execution testing apparatus, and can therefore realize the same functions. For the specific working process of each module, reference is made to the method embodiment above, which is not repeated here.
The computer program product of the integrated audio-visual continuous execution testing method provided in the embodiments of the present invention comprises a computer-readable storage medium storing non-volatile program code executable by a processor; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments. For the specific implementation, reference is made to the method embodiments, which are not repeated here.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the electronic device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. An integrated audio-visual continuous execution testing method, applied to a system composed of VR glasses, a handle and a display terminal, characterized by comprising:
displaying a virtual classroom scene to a target object to be tested through the VR glasses;
receiving operations performed by the target object to be tested through the handle according to the virtual classroom scene and preset audio-visual test content; the preset audio-visual test content comprises: first test content, second test content and third test content; the operations comprise: a first operation, a second operation and a third operation corresponding respectively to the first test content, the second test content and the third test content;
the first test content comprises: sequentially displaying, in the virtual classroom scene, a 2-min visual channel, a 2-min auditory channel and a 4-min audio-visual dual channel; the visual channel comprises: 40 signals and 40 noises randomly presented as pictures; the auditory channel comprises: 40 signals and 40 noises randomly presented as audio; the audio-visual dual channel comprises: 40 signals and 40 noises randomly presented as audio and as pictures, respectively;
the second test content comprises: displaying a 4-min audio-visual dual channel in the virtual classroom scene, wherein a far-interference stimulus is presented every 30 s; the audio-visual dual channel comprises: 40 signals and 40 noises randomly presented as audio and as pictures, respectively;
the third test content comprises: displaying, in the virtual classroom scene, a 2-min audio-visual dual channel, a 6-s visual channel and a further 2-min audio-visual dual channel, wherein a near-interference stimulus is presented in the 6-s visual channel;
acquiring, based on the operations, test data corresponding to the preset audio-visual test content; the test data comprise: first test data, second test data and third test data;
analyzing the test data to obtain test results of the target object to be tested; the test results comprise: a first test result, a second test result and a third test result; the first test result is an attention-stability result under interference-free stimulation; the second test result comprises an attention-stability result and an attention-allocation result under far interference; the third test result comprises an attention-stability result, an attention-allocation result and an attention-diversion result under the near-interference stimulus.
2. The method of claim 1,
the first test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding respectively to the auditory channel, the visual channel and the audio-visual dual channel;
the second test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding to the audio-visual dual channel, and the number of hit signals when the far-interference stimuli occur;
the third test data comprise: the reaction times and the numbers of hit signals and false-alarm signals corresponding to the audio-visual dual channels before and after the near-interference stimulus.
3. The method according to claim 2, wherein analyzing the test data to obtain the test result of the target object to be tested specifically comprises:
calculating the alertness d′ value, the signal hit rate, the mean reaction time and the standard deviation of the reaction time corresponding respectively to the auditory channel, the visual channel and the audio-visual dual channel, from the reaction times and the numbers of hit signals and false-alarm signals of each channel;
generating a trend graph of the changes in the signal hit rate, the mean reaction time and the standard deviation of the reaction time;
and taking the alertness d′ value, the signal hit rate, the mean reaction time, the standard deviation of the reaction time and the trend graph as the attention-stability result among the test results of the target object to be tested.
4. The method according to claim 3, wherein calculating the alertness d′ value, the signal hit rate, the mean reaction time and the standard deviation of the reaction time corresponding respectively to the auditory channel, the visual channel and the audio-visual dual channel specifically comprises:
calculating the alertness d' value by the following formula:
d′ = Z_hit - Z_false alarm
where Z_hit is the value in the POZ conversion table corresponding to the probability of hitting the signal, and Z_false alarm is the value in the POZ conversion table corresponding to the probability of a false alarm;
the signal hit rate is the number of hit signals/the number of total signals;
where the total number of signals is the sum of the number of hit signals and the number of missed signals;
the mean reaction time is:
RT_mean = (RT_1 + RT_2 + ... + RT_n) / n
and the standard deviation of the reaction time is:
S_RT = sqrt( ((RT_1 - RT_mean)^2 + ... + (RT_n - RT_mean)^2) / (n - 1) )
where RT_i is the i-th recorded reaction time and n is the number of recorded reaction times.
5. The method according to claim 3, wherein, when the second test data are obtained, analyzing the test data to obtain the test result of the target object to be tested further comprises:
calculating the attention-allocation result Q among the test results of the target object to be tested by the following formula:
Q = (S2/S1 + F2/F1) / 2
where S1 is the number of correct responses to sound stimuli presented alone; F1 is the number of correct responses to light stimuli presented alone; S2 is the number of correct responses to sound under simultaneous sound-and-light stimulation; and F2 is the number of correct responses to light under simultaneous sound-and-light stimulation;
when Q < 0.5, it is judged that there is no attention-allocation value;
when 0.5 < Q < 1.0, it is judged that an attention-allocation value exists;
when Q = 1.0, the attention-allocation value is judged to be maximal;
when Q > 1.0, the attention-allocation value is judged to be invalid.
6. The method according to claim 5, wherein, when the third test data are obtained, analyzing the test data to obtain the test result of the target object to be tested further comprises:
determining the attention-diversion result among the test results of the target object to be tested according to the paired-sample t test,
wherein the t value of the paired-sample t test is calculated by the following formula:
t = d_mean / (S / sqrt(n))
where
d_mean = (d_1 + d_2 + ... + d_n) / n
is the mean of the differences between the reaction times of the paired samples before and after the near-interference stimulus; S is the standard deviation of those differences; and n is the number of paired samples before and after the near-interference stimulus.
7. An integrated audio-visual continuous execution testing apparatus, applied to a system composed of VR glasses, a handle and a display terminal, characterized by comprising:
a display module, configured to display a virtual classroom scene to a target object to be tested through the VR glasses;
an operation receiving module, configured to receive operations performed by the target object to be tested through the handle according to the virtual classroom scene and preset audio-visual test content; the preset audio-visual test content comprises: first test content, second test content and third test content; the operations comprise: a first operation, a second operation and a third operation corresponding respectively to the first test content, the second test content and the third test content; the first test content comprises: sequentially displaying, in the virtual classroom scene, a 2-min visual channel, a 2-min auditory channel and a 4-min audio-visual dual channel; the visual channel comprises: 40 signals and 40 noises randomly presented as pictures; the auditory channel comprises: 40 signals and 40 noises randomly presented as audio; the audio-visual dual channel comprises: 40 signals and 40 noises randomly presented as audio and as pictures, respectively; the second test content comprises: displaying a 4-min audio-visual dual channel in the virtual classroom scene, wherein a far-interference stimulus is presented every 30 s; the audio-visual dual channel comprises: 40 signals and 40 noises randomly presented as audio and as pictures, respectively; the third test content comprises: displaying, in the virtual classroom scene, a 2-min audio-visual dual channel, a 6-s visual channel and a further 2-min audio-visual dual channel, wherein a near-interference stimulus is presented in the 6-s visual channel;
a data acquisition module, configured to acquire, based on the operations, test data corresponding to the preset audio-visual test content; the test data comprise: first test data, second test data and third test data;
a data analysis module, configured to analyze the test data to obtain test results of the target object to be tested; the test results comprise: a first test result, a second test result and a third test result; the first test result is an attention-stability result under interference-free stimulation; the second test result comprises an attention-stability result and an attention-allocation result under far interference; and the third test result comprises an attention-stability result, an attention-allocation result and an attention-diversion result under the near-interference stimulus.
8. An integrated audio-visual continuous execution testing system, comprising VR glasses, a handle and a display terminal, wherein:
the VR glasses and the handle are each communicatively connected to the display terminal;
and the display terminal is equipped with the integrated audio-visual continuous execution testing apparatus as claimed in claim 7.
9. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 1 to 6.
CN201810799866.1A 2018-07-19 2018-07-19 Method, device and system for integrated audio-visual continuous execution test Active CN109044374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810799866.1A CN109044374B (en) 2018-07-19 2018-07-19 Method, device and system for integrated audio-visual continuous execution test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810799866.1A CN109044374B (en) 2018-07-19 2018-07-19 Method, device and system for integrated audio-visual continuous execution test

Publications (2)

Publication Number Publication Date
CN109044374A CN109044374A (en) 2018-12-21
CN109044374B true CN109044374B (en) 2021-05-14

Family

ID=64817578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810799866.1A Active CN109044374B (en) 2018-07-19 2018-07-19 Method, device and system for integrated audio-visual continuous execution test

Country Status (1)

Country Link
CN (1) CN109044374B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111345828A (en) * 2018-12-24 2020-06-30 南京新镜医疗科技有限公司 Attention tester and use method thereof
CN109961520B (en) * 2019-01-29 2023-05-09 深圳职业技术学院 VR/MR classroom based on third view angle technology and construction method thereof
CN110222639B (en) 2019-06-05 2020-03-31 清华大学 Human body stress response testing method and system
CN110507979B (en) * 2019-07-30 2024-03-26 浙江工业大学 Virtual-real combined test experiment device during reaction
CN110675923B (en) * 2019-10-08 2022-02-22 中国科学院心理研究所 Audio-visual integration method based on portable eye tracker
CN110720935A (en) * 2019-10-29 2020-01-24 浙江工商大学 Multidimensional attention concentration capability evaluation method

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6356781B1 (en) * 2000-03-31 2002-03-12 Lucent Technologies, Inc. Functional magnetic resonance imaging capable of detecting the occurrence of neuronal events with high temporal accuracy
CN1623502A (en) * 2003-12-03 2005-06-08 上海浩顺科技有限公司 Aiagnosis device for ADHA of children
CN101797150A (en) * 2009-01-12 2010-08-11 神经-技术解决方案公司 Computerized test apparatus and methods for quantifying psychological aspects of human responses to stimuli
WO2012064999A1 (en) * 2010-11-11 2012-05-18 The Regents Of The University Of California Enhancing cognition in the presence of distraction and/or interruption
CN103402440A (en) * 2010-12-06 2013-11-20 国立大学法人冈山大学 Method and device for verifying onset of dementia
CN103561651A (en) * 2010-11-24 2014-02-05 数字制品有限责任公司 Systems and methods to assess cognitive function
CN104000614A (en) * 2014-06-09 2014-08-27 南通大学 System and method for evaluating comprehensive quality of long-distance passenger transportation driver
CN104203100A (en) * 2012-02-09 2014-12-10 人类电工公司 Performance assessment tool
CN105664332A (en) * 2016-04-14 2016-06-15 北京阳光易德科技股份有限公司 Psychological stress training system and method
CN106470608A (en) * 2014-01-13 2017-03-01 人类电工公司 Performance appraisal instrument
WO2017040417A1 (en) * 2015-08-28 2017-03-09 Atentiv Llc System and program for cognitive skill training
WO2018027080A1 (en) * 2016-08-03 2018-02-08 Akili Interactive Labs, Inc. Cognitive platform including computerized evocative elements
WO2018039610A1 (en) * 2016-08-26 2018-03-01 Akili Interactive Labs, Inc. Cognitive platform coupled with a physiological component
WO2018081134A1 (en) * 2016-10-24 2018-05-03 Akili Interactive Labs, Inc. Cognitive platform configured as a biomarker or other type of marker

Also Published As

Publication number Publication date
CN109044374A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN109044374B (en) Method, device and system for integrated audio-visual continuous execution test
Talcott et al. Dynamic sensory sensitivity and children's word decoding skills
Boets et al. Preschool impairments in auditory processing and speech perception uniquely predict future reading problems
Bartelet et al. What basic number processing measures in kindergarten explain unique variability in first-grade arithmetic proficiency?
Ekman A methodological discussion of nonverbal behavior
Wetzel et al. Distraction and facilitation—two faces of the same coin?
Rabbitt et al. Practice and drop-out effects during a 17-year longitudinal study of cognitive aging
Liu Sensory processing and motor skill performance in elementary school children with autism spectrum disorder
Law et al. The relationship of phonological ability, speech perception, and auditory perception in adults with dyslexia
Sevinç Language anxiety in the immigrant context: Sweaty palms?
Tremblay et al. Comparing behavioral discrimination and learning abilities in monolinguals, bilinguals and multilinguals
Gutierrez-Sigut et al. Early use of phonological codes in deaf readers: An ERP study
Starrfelt et al. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.
CN101797150A (en) Computerized test apparatus and methods for quantifying psychological aspects of human responses to stimuli
Eberhard-Moscicka et al. Temporal dynamics of early visual word processing–early versus late N1 sensitivity in children and adults
Cohen et al. Does the relation between rapid automatized naming and reading depend on age or on reading level? A behavioral and ERP study
Law et al. The relationship of phonological development and language dominance in bilingual Cantonese-Putonghua children
van Haaften et al. The psychometric evaluation of a speech production test battery for children: The reliability and validity of the Computer Articulation Instrument
Datta et al. Automaticity of speech processing in early bilingual adults and children
Zhao et al. Robust and efficient online auditory psychophysics
Bower et al. Built environment color modulates autonomic and EEG indices of emotional response
Baum et al. Testing sensory and multisensory function in children with autism spectrum disorder
Roepke et al. Vowel errors produced by preschool-age children on a single-word test of articulation
Schaadt et al. Auditory phoneme discrimination in illiterates: Mismatch negativity—A question of literacy?
Pattamadilok et al. How does reading performance modulate the impact of orthographic knowledge on speech processing? A comparison of normal readers and dyslexic adults

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant