CN110970130A - Data processing method for attention deficit hyperactivity disorder

Data processing method for attention deficit hyperactivity disorder

Info

Publication number
CN110970130A
CN110970130A
Authority
CN
China
Prior art keywords
subject
data
characteristic parameters
task
hyperactivity disorder
Prior art date
Legal status
Granted
Application number
CN201911398269.9A
Other languages
Chinese (zh)
Other versions
CN110970130B (en)
Inventor
段新
段拙然
Current Assignee
Foshan Chuangshijia Technology Co ltd
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201911398269.9A
Publication of CN110970130A
PCT application PCT/CN2020/129452 filed (published as WO2021135692A1)
Application granted
Publication of CN110970130B
Status
Active

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20: ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for electronic clinical trials or questionnaires
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], i.e. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application is applicable to the field of computer technology and provides a data processing method for attention deficit hyperactivity disorder, comprising the following steps: collecting the characteristic parameters output by patients and normal persons while they complete tasks in a virtual reality environment; training a machine learning model with the collected characteristic parameters to obtain a trained machine learning model; and finally using the trained model to predict and classify the characteristic parameters collected from any subject using the virtual reality environment. Because the test data are acquired while the subject completes tasks in a virtual reality environment, the subject can complete the test in a relaxed setting, which avoids inaccurate data caused by nervousness. In addition, since the characteristic parameters are predicted and classified by a machine learning model, judgments based on human subjective factors are avoided, and the objectivity and accuracy of assessing attention deficit hyperactivity disorder are improved.

Description

Data processing method for attention deficit hyperactivity disorder
Technical Field
The application belongs to the field of computer technology, and particularly relates to a data processing method for attention deficit hyperactivity disorder.
Background
Attention Deficit Hyperactivity Disorder (ADHD) is a common mental disorder of childhood and is divided into a predominantly inattentive presentation (ADHD-I), a predominantly hyperactive-impulsive presentation (ADHD-H), and a combined presentation (ADHD-C). It is mainly manifested as inattention, hyperactivity, impulsivity and poor self-control, which affect the learning, communication and performance of affected children.
At present, attention deficit hyperactivity disorder is diagnosed through interviews, observation, and various forms of questionnaires and rating scales. The diagnostic process is subjective, and misdiagnosis and missed diagnosis often result when children are mischievous or nervous.
Disclosure of Invention
The embodiments of the present application provide a data processing method for attention deficit hyperactivity disorder, which can solve the problem of strong subjectivity in assessing attention deficit hyperactivity disorder.
In a first aspect, an embodiment of the present application provides a data processing method for attention deficit hyperactivity disorder, including:
obtaining input data of a subject completing a task in a virtual reality environment;
calculating characteristic parameters of the subject based on the input data;
and outputting a classification result based on the characteristic parameters and the trained machine learning model.
In a second aspect, an embodiment of the present application provides an attention deficit hyperactivity disorder data processing apparatus, including:
the data acquisition module is used for acquiring input data of a subject for completing tasks in the virtual reality environment;
the data calculation module is used for calculating characteristic parameters of the subject based on the input data;
and the result output module is used for outputting a classification result based on the characteristic parameters and the trained machine learning model.
In a third aspect, an embodiment of the present application provides a terminal device, including: memory, processor and computer program stored in the memory and executable on the processor, wherein the processor implements the data processing method for attention deficit hyperactivity disorder according to any one of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to implement the data processing method for attention deficit hyperactivity disorder according to any one of the above first aspects.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the data processing method for attention deficit hyperactivity disorder according to any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiments of the present application have the following advantages. The test input data are obtained while the subject completes tasks in a virtual reality environment, characteristic parameters are calculated from the input data, and the characteristic parameters are finally input into a machine learning model to evaluate the subject and obtain a classification result. First, because the test data are collected while the subject completes tasks in a virtual reality environment, the subject can be tested in a relaxed setting, avoiding the inaccurate data caused by nervousness during an interview or a paper-and-pencil test. Second, the machine learning model classifies the characteristic parameters automatically, avoiding evaluation based on human subjective factors and improving the objectivity and accuracy of assessing attention deficit hyperactivity disorder. Third, virtual reality (VR) scenes are convenient to design; the behavioral, cognitive and physiological data generated in the VR environment can serve as effective input features for a classifier that distinguishes patients from normal subjects, so biomarkers characteristic of ADHD are readily obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario of a data processing method for attention deficit hyperactivity disorder according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating a data processing method for attention deficit hyperactivity disorder according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a specific method for acquiring the input data in FIG. 2 according to an embodiment of the present application;
fig. 4 is a first flowchart illustrating a method for calculating the feature parameter in fig. 2 according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a second method for calculating the feature parameters in fig. 2 according to an embodiment of the present application;
fig. 6 is a third schematic flowchart of a method for calculating feature parameters in fig. 2 according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating a method for training a machine learning model according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a data processing apparatus for attention deficit hyperactivity disorder according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 10 is a block diagram of a partial structure of a computer provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 1 is a schematic view of an application scenario of the data processing method for attention deficit hyperactivity disorder provided in an embodiment of the present application, which can be used to evaluate a subject for ADHD. The terminal device 20 is configured to obtain test data from the subject 10 completing a task in the virtual reality environment, analyze and evaluate the test data, and finally obtain a classification result; a doctor can determine whether the subject 10 suffers from attention deficit hyperactivity disorder, and of which type, according to the classification result of the terminal device 20.
The data processing method for attention deficit hyperactivity disorder according to the embodiment of the present application will be described in detail below with reference to fig. 1.
Fig. 2 shows a schematic flow chart of the data processing method for attention deficit hyperactivity disorder provided by the present application, and with reference to fig. 2, the data processing method for attention deficit hyperactivity disorder is described in detail as follows:
s101, input data of a subject completing a task in a virtual reality environment is acquired.
In this embodiment, the VR environment stores engaging game tasks capable of distinguishing ADHD patients from normal subjects, such as spotting the differences between two environmental scenes, archery, and recognizing the expressions of characters in social situations. While the subject completes a game task in VR, different kinds of input data reflecting, for example, attention and impulsivity can be acquired, from which ADHD can be classified. Several games can be set within one task; for example, differences of varying difficulty can be set in different spot-the-difference game tasks, and expressions of varying complexity can be set in the task of recognizing characters' expressions in social situations.
By way of example:
The VR spot-the-difference scene evaluates spatial working memory, attentional selection and visual search ability. Finding the "difference" in a scene is goal-directed behavior that requires attention and is influenced by the location of eyeball gaze and by gaze latency. ADHD patients are less able to detect changes than normal subjects, and subtle changes in particular are easily overlooked, mainly due to deficits in oculomotor control and attention. When designing the game, static and dynamic scenes of daily life or sports can be set, for example with presented colors appearing or disappearing, or positions changing, and the subject is asked to find the differences. ADHD patients may answer faster than normal subjects, but they recognize the differences less accurately and make more mistakes.
The VR fixed-target archery scene evaluates concentration and endurance. Audio-visual distractors can be set; in the distracting environment, the subject shoots at a fixed target from a stationary position and must keep gazing at the bullseye, and the closer the duration of gazing at the bullseye is to the specified time, the higher the ring score of the shot.
The VR moving-target scene is a Continuous Performance Test (CPT) task that requires the subject to react as quickly as possible to target stimuli while withholding responses to non-target stimuli, engaging auditory and visual selectivity. In the distracting environment, visual, auditory and visuospatial distractors such as birds, rabbits and hamsters can be set, and a target such as a flying saucer is thrown into the air at random for the subject to shoot.
The VR task of recognizing characters' expressions in social situations can set emotion recognition at two levels of difficulty. 1) Expression recognition: static and dynamic expressions, positive and negative expressions (e.g., negative emotions such as anger, sadness and fear), and emotions at different intensities (e.g., four intensities of 30%, 50%, 70% and 100%). 2) Situational tasks in VR contexts: emotion recognition and processing within social situations, for example visual attention and emotion recognition during interpersonal interaction, and complex, subtle emotional changes in a situation that require good visual attention to interpret and discriminate. These tasks support pointing responses and natural-language interaction (answering) by the subject, and objectively investigate expression (emotion) recognition, theory of mind, attentional selection, choice reaction time, visual information search, and the like.
Different VR scene tasks focus on different ADHD-related characteristics such as attention and hyperactivity-impulsivity. At least one VR scene can be selected, and each scene yields at least one group of input data. The input data include: task performance data, motion sensing data, eye tracking data and electroencephalogram (EEG) data.
As shown in fig. 3, in a possible implementation manner, the implementation process of step S101 may include:
and S1011, acquiring the task performance data under the current task through the gesture tracker and/or the language processing device.
In this embodiment, the target stimulus refers to an instruction or the task content of a task in VR, such as: when finding the expression of a character in a social situation, the designated expression to be found is the target stimulus; in the flying-saucer shooting game, the flying saucer is the target stimulus.
The task performance data may be obtained by a gesture tracker, for example when the subject points out a smiling face in the situation with a hand. From the hand movement it can be determined whether the subject performed the task correctly, as well as the onset time of the subject's reaction to a target stimulus; if a target stimulus occurs but the subject's hand does not move, the subject has not reacted to the current target stimulus.
The task performance data may also be obtained by a language processing device. For example, when asked to find the smiling face in the situation, the subject can speak the direction or number of the smiling face in natural language; the computer acquires the subject's speech and, through conversion and recognition, finally determines whether the subject completed the task correctly.
Alternatively, gesture tracking may be optical, such as Leap Motion (a somatosensory controller); or inertial; or use a data glove with sensors on the hand. With hand motion capture technology, hand movement can be tracked without hand-held devices or data gloves, allowing natural interaction with the virtual scene.
And S1012, acquiring the motion sensing data acquired by the motion recorder under the current task.
In this embodiment, body movement behavior characterizes ADHD, and the stability of the subject's body posture is one of the criteria used to judge whether the subject has ADHD. In the prior art, the body movement data of the person being diagnosed is collected over a period of time and compared with the body movement data of normal persons to judge whether the person is affected.
Optionally, the motion recorder may capture motion by optical tracking or inertial tracking, and may be implemented as a wearable device or through a scene depth analysis scheme. In the scene depth analysis scheme, an optical sensor receives optical signals and analyzes the depth information of the scene to determine the subject's body position and posture. In the wearable scheme, sensors are fixed to the subject's joints or key points (which may include the head, wrists, ankles, etc.), and the subject's body movement is obtained by measuring changes in the position or bending of those points. The motion recorder can record the subject's body movement data with two kinds of devices: an accelerometer device, which records triaxial motion with a three-axis accelerometer to obtain movement and inertial measurement data; and an infrared optical position detection and analysis system, a moving-object analysis system developed from optically sensitive devices and stereometry.
And S1013, acquiring the eye movement tracking data acquired by the eye movement tracking equipment under the current task.
In this embodiment, the eye movement may represent the gaze time and gaze direction. The eye tracking equipment can obtain eye tracking data of the gaze direction, time, sequence and the like of the individual eyeballs, and further obtain characteristic parameters of the gaze times, the gaze time, the visual scanning path, the visual scanning strategy and the like. The eye tracking device may objectively record visual attention and visual search patterns and may provide evaluation indicators that distinguish different visual attention and visual search patterns for ADHD patients and normal persons.
In this embodiment, the eye tracking device may be integrated in the VR head display device, and is mainly used for tracking the eyeball and estimating the sight line of the eyeball, and the time when the eyeball gazes at a point, the time when the eyeball displaces, the sequence of the eyeball displacement, and the like may be acquired through tracking the eyeball.
And S1014, acquiring the electroencephalogram data acquired by the electroencephalogram acquisition device under the current task.
In this embodiment, the electroencephalogram acquisition device can acquire the subject's electroencephalographic response to target stimuli, i.e., event-related potentials. External stimuli are usually audio-visual stimuli; internal stimuli are usually tasks related to attention, decision-making and working memory, called psychological tasks.
Specifically, the electroencephalogram data comprise electroencephalograms (EEG) and evoked event-related potentials (ERP), and may include EEG rhythms such as alpha, beta and theta waves, the P200 and P300 potentials, a spectral peak at about 11 Hz, the P2-N2 peak-to-peak value, and the like.
In this example, the electroencephalograms of ADHD patients show many abnormalities: their brain-wave activity exceeds that of normal subjects, with particularly abundant activity in the frontal lobe area; ADHD patients have a larger P2 component and a smaller N2 component; and the P2-N2 peak-to-peak value and the spectral peak at about 11 Hz are expressed specifically in ADHD patients. The acquisition and study of electroencephalogram data can therefore serve as an index for evaluating ADHD patients.
S102, calculating characteristic parameters of the subject based on the input data.
As shown in fig. 4, in a possible implementation manner, in the case that the input data is task performance data, the implementation process of step S102 may include:
specifically, the task performance data may include: the number of correct responses of the subject to a target stimulus, the onset of a response of the subject at the onset of a target stimulus, and the number of unresponsive responses of the subject at the onset of the target stimulus, wherein a task comprises at least one target stimulus.
In this embodiment, the number of correct responses refers to the total number of instructions the subject answers correctly when target stimuli are displayed in VR. For example, when asked to find a smiling face in the social situation, the subject who correctly points out the smiling face has responded correctly to the target stimulus.
And S1021, calculating the ratio of the correct reaction times to the total number of the target stimuli in the current task to obtain the correct rate in the characteristic parameters.
In this example, the accuracy reflects the subject's concentration: the higher the accuracy, the better the concentration. The total number of target stimuli is the number of instructions to be completed. For example, in a spot-the-difference task with three groups of scenes, two scenes per group, the subject must find the differences within each group, so the total number of target stimuli is three; in the flying-saucer shooting task, if 15 flying saucers are launched in total, the total number of target stimuli is 15.
By way of example, the accuracy is calculated as
a = A / Z
wherein a is the accuracy, A is the number of correct responses, and Z is the total number of target stimuli.
And S1022, calculating a difference value between the reaction starting time of the correct reaction and the target stimulation starting time, and calculating a standard deviation of the reaction in the characteristic parameters based on the difference value.
In this embodiment, the standard deviation of reaction time is the standard deviation of the above-mentioned differences. By computing the difference between the response onset time of each correct response and the corresponding target stimulus onset time, the standard deviation of reaction time, a measure of attention, can be calculated.
The response onset time is the time at which the subject begins to respond to the current target stimulus; the subject's response may be, for example, an action or an utterance.
And S1023, recording the times of the reaction starting time before the corresponding target stimulation starting time in the current task as the error number in the characteristic parameters.
In this embodiment, the number of errors includes the number of times the subject responded when no target stimulus was present. A small difference between response onset and target stimulus onset means a short reaction time. A short reaction time combined with many errors indicates a pronounced impulsivity trait, while a long reaction time combined with many errors indicates pronounced inattention.
And S1024, recording the number of times the subject gives no response when a target stimulus occurs as the number of misses in the characteristic parameters.
In this embodiment, when a target stimulus occurs and the subject responds neither before nor after its onset, i.e., gives no response at all, the subject is considered not to have reacted to the current target stimulus.
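To make steps S1021 to S1024 concrete, the following Python sketch computes the four task-performance feature parameters from per-stimulus records. The record layout (the `onset`, `response_onset` and `correct` fields) is an assumption for illustration only, not a structure defined by this application.

```python
import statistics

def task_performance_features(stimuli):
    """Compute accuracy, reaction-time SD, error count and miss count.

    stimuli: list of dicts, one per target stimulus, e.g.
    {"onset": 12.0, "response_onset": 12.8, "correct": True},
    with response_onset = None when the subject never responded.
    (Field names are illustrative, not from the patent.)
    """
    total = len(stimuli)
    correct = [s for s in stimuli if s["correct"]]
    accuracy = len(correct) / total  # a = A / Z

    # Reaction time of each correct response, then its standard deviation.
    rts = [s["response_onset"] - s["onset"] for s in correct]
    rt_sd = statistics.stdev(rts) if len(rts) > 1 else 0.0

    # Errors: responses that started before the stimulus appeared.
    errors = sum(1 for s in stimuli
                 if s["response_onset"] is not None
                 and s["response_onset"] < s["onset"])

    # Misses: stimuli to which the subject never responded.
    misses = sum(1 for s in stimuli if s["response_onset"] is None)

    return {"accuracy": accuracy, "rt_sd": rt_sd,
            "errors": errors, "misses": misses}
```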
As shown in fig. 5, in a possible implementation manner, in the case that the input data is motion sensing data, the implementation process of step S102 may include:
specifically, the motion sensing data may include: a resting period of the subject, an action time of the subject when transforming an action, position coordinates of a motion path recorded by the action recorder at each movement of the subject, a motion path of the subject during completion of the task and a number of the motion paths.
In this embodiment, the motion sensing data may be the motion data of the subject recorded at the current task, or may be the motion data of the subject recorded at multiple tasks during the entire test.
And S1025, summing all the static time periods, and determining the static time length in the characteristic parameters according to the quotient of the summation result and the number of the static time periods.
In this embodiment, a resting period is a continuous interval during which the subject remains still; the resting duration is the average length of these intervals.
And S1026, searching the number of the action time in a preset time period and recording the number of the action time as the movement times in the characteristic parameters.
S1027, calculating the area of each movement region of the subject based on the position coordinates, and recording the sum of all these areas as the motion region of the subject in the characteristic parameters.
In this embodiment, the motion region of the subject may be the total area covered by the motion-sensing device as the subject moves.
S1028, summing the intersection points of all the motion paths, and determining the motion complexity in the characteristic parameters according to the quotient of the sum result and the number of the motion paths.
In the present embodiment, a lower motion complexity indicates that the motion path of the subject tends to be simple linear during the test, and a higher motion complexity indicates that the motion path of the subject is complex and entangled during the test.
S1029, calculating the displacement of the subject in the characteristic parameters to complete the task based on the position coordinates.
In this embodiment, since a task may include several target stimuli and each target stimulus has an associated displacement, the displacement to complete the task may be the subject's total displacement over the current task, or the total displacement over the whole test.
Optionally, the characteristic parameters may further include a time ratio reflecting the subject's degree of activity: record 1 when the subject is moving and 0 when the subject is still, and compute the time ratio as the ratio of the count of 1s to the count of 0s.
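As a minimal sketch of steps S1025, S1026 and S1029 (the area and path-complexity terms of S1027 and S1028 need segment-intersection geometry and are omitted), the following Python fragment assumes an illustrative input layout that is not prescribed by this application:

```python
import math

def motion_features(rest_periods, action_times, positions, window):
    """rest_periods : resting durations of the subject, in seconds
    action_times : timestamps at which the subject changed actions
    positions    : ordered (x, y) coordinates from the motion recorder
    window       : (start, end) of the preset period used in S1026
    """
    # S1025: resting duration = sum of rest periods / their number.
    rest_duration = sum(rest_periods) / len(rest_periods)

    # S1026: movement count = action changes inside the preset window.
    start, end = window
    move_count = sum(1 for t in action_times if start <= t <= end)

    # S1029: displacement accumulated along the recorded coordinates
    # (one plausible reading of "displacement to complete the task").
    displacement = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    )

    return {"rest_duration": rest_duration,
            "move_count": move_count,
            "displacement": displacement}
```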
As shown in fig. 6, in a possible implementation manner, in the case that the input data is eye tracking data, the implementation process of step S102 may include:
in particular, the eye tracking data includes eye coordinates of the subject, a time of eye gaze target stimulation, and a sequence of eye gaze target stimulation.
S10210, determining the number of times the subject gazes at each of the target stimuli in the characteristic parameter based on the eye coordinates.
In this embodiment, the dwell time and dwell count of the eyeball at a given position can be determined from the eyeball coordinates, so the number of fixations on a target stimulus can be counted from the occurrences of the corresponding eyeball coordinates.
And S10211, recording the time of the eyeball watching target stimulation as the watching time in the characteristic parameters.
S10212, determining a visual scan path of the subject in the characteristic parameters based on the order of the eyeball fixation target stimuli.
In this embodiment, the subject's visual scanning path can be obtained from the order in which the subject views the target stimuli, i.e., what the subject looks at first and what next. Analyzing the visual scanning path reveals the subject's viewing tendencies. For example, in a task of distinguishing a smiling-face picture from a crying-face picture, a scanning path from the smiling face to the crying face shows a tendency to observe the smiling face, whereas looking at the crying face first and the smiling face afterwards shows an attentional bias toward the crying face.
S10213, deriving a visual scanning strategy of the subject in the characteristic parameters based on the visual scanning path and the fixation time.
In this embodiment, the visual scanning strategy reflects the subject's dwell time on, and sensitivity to, the target stimuli. For example, in the VR spot-the-difference task, normal subjects fixate the changed area more often and for longer, and their first fixation on it lasts longer than that of ADHD subjects, whereas ADHD subjects spend long periods searching the whole scene.
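The sketch below illustrates steps S10210 and S10212 under an assumed data layout: gaze samples as (timestamp, x, y) tuples and each target stimulus given a rectangular screen region. Neither representation is specified by this application.

```python
from collections import Counter

def eye_tracking_features(samples, stimuli_regions):
    """samples         : list of (timestamp, x, y) gaze samples
    stimuli_regions : {stimulus_id: (x_min, y_min, x_max, y_max)}
    Returns per-stimulus fixation counts and the visual scan path
    (the order in which stimuli were first fixated).
    """
    def region_of(x, y):
        for sid, (x0, y0, x1, y1) in stimuli_regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return sid
        return None

    gaze_counts = Counter()
    scan_path = []
    last = None
    for _, x, y in samples:
        sid = region_of(x, y)
        if sid is not None and sid != last:
            gaze_counts[sid] += 1      # gaze entered this stimulus region
            if sid not in scan_path:
                scan_path.append(sid)  # order of first fixations
        last = sid
    return gaze_counts, scan_path
```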
In a possible implementation manner, in the case that the input data is electroencephalogram data, the implementation process of step S102 may include:
and obtaining time-frequency domain characteristics, P300 characteristics and the like in the characteristic parameters based on the electroencephalogram data.
In this embodiment, the EEG signal is first preprocessed, mainly by data inspection, band-pass filtering, artifact removal and segmentation; time-frequency-domain features are then extracted as EEG feature parameters, and features such as the peaks, latencies and means of the P200 and P300 potentials are extracted as further EEG feature parameters.
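As one plausible reading of this preprocessing pipeline, the following Python sketch band-pass filters a single EEG channel and extracts band-power features for the theta, alpha and beta rhythms mentioned above. The cutoff frequencies, filter order and sampling rate are assumptions, not values fixed by this application.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def eeg_band_features(eeg, fs=250.0):
    """eeg : 1-D numpy array holding one EEG channel
    fs  : sampling rate in Hz (assumed)
    """
    # Band-pass 0.5-45 Hz to suppress drift and line noise.
    b, a = butter(4, [0.5 / (fs / 2), 45.0 / (fs / 2)], btype="band")
    clean = filtfilt(b, a, eeg)

    # Power spectral density, then mean power in the classical bands.
    freqs, psd = welch(clean, fs=fs, nperseg=int(fs * 2))
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in bands.items()}
```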
And S103, outputting a classification result based on the characteristic parameters and the trained machine learning model.
In this embodiment, the characteristic parameters are input into the trained machine learning model, which automatically outputs the classification result; a doctor can then judge whether the subject is an ADHD patient and which type of ADHD the subject has.
As an example, the output result may be set to four types of attention deficit, hyperactivity and impulsion, combined performance, and normal.
It should be noted that, in addition to the common characteristic parameters above, different VR game task scenes may also yield scene-specific characteristic parameters.
By way of example:
1) The characteristic parameters in the VR task of recognizing characters' expressions in social situations further include:
also available through the task performance data are: selecting reaction time, correct times, error rate and the like.
Also obtained by the eye tracking data are: entry time (ET) into an area of interest, first fixation time (FFT), total fixation time (FT) in the area of interest, number of fixation points, and the like; the ratio of fixation times between areas of interest can be analyzed to reflect an attention bias value. Studies have found that ADHD patients attend to happy faces for a shorter time than normal subjects and to neutral faces for longer; when a set of happy and unhappy faces is presented, ADHD patients attend to the unhappy faces while normal subjects prefer the happy ones. Compared with normal subjects, ADHD patients attend more to the mouth of an emotional face than to the eyes, perhaps because an opening or closing mouth can signal a definite positive or negative emotion while judging emotion from the eyes is much harder; and when someone is angry at them in a social situation, ADHD patients do not attend to the other person's facial and body-language information.
The electroencephalogram data also show that the facial-expression processing components of ADHD patients differ from those of normal subjects. Facial processing is weakened in ADHD patients: under stimuli of various expressions such as happy, neutral, angry and fearful, both the P100 amplitude and the N170 amplitude over bilateral occipital areas are lower than in normal subjects, and the P100 and N170 amplitudes of ADHD patients show no obvious difference across expression stimuli. In normal subjects, by contrast, happy, angry and fearful faces elicit significantly higher P100 amplitudes over the left occipital area than neutral faces, and fearful faces elicit significantly higher N170 amplitudes over the left temporal area than neutral faces.
2) The characteristic parameters in the VR archery game scene may further include:
also available through the task performance data are: hit score, correct response rate, false response rate, and error rate, etc.
The motion sensing data also yield speeded reaction time and its rate of change: ADHD patients have cognitive deficits, and when targets are thrown at shorter, denser intervals, their ability to speed up responses decreases and their misses increase. Indicators of hyperactivity can be evaluated: total still time and total active time, average number of action changes, distance traveled, and activity area within a given time. Indicators of impulsivity can be evaluated: number of errors, error rate, and the like.
Also obtained by the eye tracking data are: total fixation time (FT) in the area of interest, the number of off-target gazes, off-target gaze time, and the like.
3) The characteristic parameters in the VR spot-the-difference scene may further include:
also obtained by the eye tracking data are: the number of times the change area is watched, the time the change area is watched for the first time, the area of interest, etc.
As shown in fig. 2, in a possible implementation manner, the method may further include:
s201, training a machine learning model based on an input sample to obtain the trained machine learning model.
In this embodiment, the machine learning model may include algorithms based on classical machine learning (e.g., the Support Vector Machine (SVM) and the Artificial Neural Network (ANN)) and on deep learning, for example SVM models, Convolutional Neural Network (CNN) models, and models trained and optimized on the Caffe framework. The development environment may use TensorFlow, Python, MXNet, Torch, Theano, CNTK, CUDA, cuDNN, the Caffe open-source library, the LibSVM toolkit, or the like.
As shown in fig. 7, in a possible implementation manner, the implementation process of step S201 may include:
s2011, taking at least one index in characteristic parameters obtained when a testee completes a task in the virtual reality environment as an input parameter, wherein the testee comprises a normal subject and a diseased subject;
s2012, taking the input parameters as input samples, and training the machine learning model based on the input samples to obtain the trained machine learning model.
In this embodiment, an input parameter may be at least one index from the characteristic parameters obtained when a test subject completes at least one task in VR.
In this embodiment, the machine learning model may include a support vector machine, a convolutional neural network, or the like. The input samples include data collected, through tasks completed in VR, from a number of ADHD patients and a number of normal subjects, where ADHD includes the three types ADHD-I, ADHD-H and ADHD-C.
Specifically, in the case where the machine learning model is a support vector machine, the training of the machine learning model includes:
for example, a characteristic parameter is selected as input: an SVM-RFE (support vector machine recursive feature elimination) classification method based on the characteristic input of the electroencephalogram evoked potential time-frequency domain. The time-frequency domain feature extraction is carried out on the electroencephalogram evoked potential of the tested person during the relevant game (task), and the feature classification is carried out by utilizing a support vector machine, so that the individual prediction is realized.
For example, an ensemble learning method takes several kinds of characteristic parameters as input: ensemble learning combines single classification methods, and multiple kernel learning (MKL) combines several kernel functions, which can markedly improve classification ability. From the multi-modal data (task performance data, motion sensing data, eye tracking data and electroencephalogram data), a multi-kernel classifier is trained by fusing the four kinds of features based on MKL. Using the SVM, a nested cross-validation procedure performs preprocessing of the various data, feature extraction, feature selection, and final classification.
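scikit-learn has no MKL implementation, so as a rough stand-in under that caveat, the sketch below combines one RBF kernel per modality with fixed weights and feeds the fused kernel to an SVC with a precomputed kernel. The equal weights and the four-way modality split are assumptions; a true MKL method would learn the kernel weights.

```python
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def fused_kernel(modalities, weights):
    """modalities: list of (n_subjects, n_features_i) arrays, one per
    data type (task performance, motion, eye tracking, EEG).
    weights: one non-negative weight per modality."""
    return sum(w * rbf_kernel(X) for w, X in zip(weights, modalities))

def train_fused_svm(modalities, y, weights=(0.25, 0.25, 0.25, 0.25)):
    K = fused_kernel(modalities, weights)  # (n_subjects, n_subjects)
    clf = SVC(kernel="precomputed")
    clf.fit(K, y)
    return clf
```

At prediction time the kernel between test and training samples is needed instead, i.e. rbf_kernel(X_test_i, X_train_i) per modality, fused with the same weights.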
Specifically, deep learning can learn features through a deep nonlinear network structure and can approximate complex functions by combining low-level features into more abstract deep representations (attribute categories or features), thereby learning the essential features of a data set. In the case where the machine learning model is a convolutional neural network, the training of the machine learning model includes:
a plurality of nodes are used in an input layer, task performance data, motion sensing data, eye movement tracking data, electroencephalogram data and the like are used as input vectors X1 … … … Xn (characteristics), and an output layer is composed of 4 neurons such as normal neurons, ADHD-I, ADHD-H, ADHD-C and the like. Each node realizes nonlinear transformation through an activation function and then is input, and can output through a plurality of hidden layers; comparing the output of the convolutional neural network with the actual output or the expected output, measuring the error between the predicted value and the actual value by using the mean square error, changing the connection weight value w and the partial derivative value b by adopting a back-propagation algorithm (BP algorithm) according to the error, and enabling each predicted result value and each actual result value to be more and more close to each other by continuously iterating and minimizing the loss function, wherein the loss function value is not changed any more until the error reaches to be close to 0.001. For the multi-classification problem, a Softmax loss function can be adopted, and finally, after the training phase is finished, the weight value is fixed to the final value to obtain the trained convolutional neural network.
Different VR scene tasks focus on different ADHD characteristics such as attention and hyperactivity-impulsivity; the machine learning algorithm may therefore take as input parameters various indexes from the characteristic parameters obtained when a test subject completes multiple tasks in VR.
For example, the classification of the various modal parameters is unified into one complete CNN (convolutional neural network) structure that fuses the individual VR scene task parameters fed into the CNN ADHD classification network, itself a CNN composed of a series of convolutional, pooling and ReLU activation layers. A network fusion method based on the Point-wise Gated Boltzmann Machine (PGBM) may be employed: the last-layer feature vectors of two or more scene CNNs are concatenated and used as the visible-layer input for training the PGBM part, which is trained by contrastive divergence. Through the trained connection weights, the task-relevant part of the concatenated feature vector is obtained and used as the input of a newly added fully connected layer, which is then trained; the back-propagation depth of the network is limited to this newly added fully connected layer. The Softmax loss function again guides the network training process.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 8 shows a block diagram of a data processing apparatus for attention deficit hyperactivity disorder according to an embodiment of the present application, and only the relevant portions of the data processing apparatus for attention deficit hyperactivity disorder are shown for convenience of illustration.
Referring to fig. 8, the apparatus 100 may include: a data acquisition module 110 and a data calculation module 120, and a result output module 130.
The data acquisition module 110 is configured to acquire input data of a subject completing a task in a virtual reality environment;
a data calculation module 120 for calculating characteristic parameters of the subject based on the input data;
and a result output module 130, configured to output a classification result based on the feature parameters and the trained machine learning model.
In one possible implementation, at least one game scenario is stored in the virtual reality environment, and the subject completes a task in the game scenario.
In one possible implementation, the input data includes at least one of task performance data, motion sensing data, eye tracking data, and electroencephalogram data; the data acquisition module 110 may be specifically configured to:
acquiring the task performance data under the current task through a gesture tracker and/or a language processing device;
acquiring the motion sensing data acquired by the action recorder under the current task;
acquiring the eye movement tracking data acquired by eye movement tracking equipment under the current task;
and acquiring the electroencephalogram data acquired by the electroencephalogram acquisition device under the current task.
In one possible implementation, the task performance data include: the number of the subject's correct responses to target stimuli, the onset time of the subject's response when a target stimulus starts, and the number of times the subject fails to respond when a target stimulus occurs, wherein one task comprises at least one target stimulus; in the case that the input data are task performance data, the data calculation module 120 may specifically be configured to:
calculating the ratio of the number of correct reactions to the total number of target stimuli in the current task to obtain the correct rate in the characteristic parameters;
calculating a difference between a reaction start time of a correct reaction and a target stimulation start time, and calculating a standard deviation of the reaction in the characteristic parameters based on the difference;
recording the number of times that the reaction starting time is before the corresponding target stimulation starting time under the current task as the error number in the characteristic parameters;
and recording the number of times the subject gives no response when a target stimulus occurs as the number of misses in the characteristic parameters.
In one possible implementation, the motion sensing data include: the subject's resting periods, the action times at which the subject changes actions, the position coordinates of the motion path recorded by the motion recorder at each movement, and the subject's motion paths during task completion together with their number; in the case that the input data are motion sensing data, the data calculating module 120 may specifically be configured to:
summing all the static time periods, and determining the static time length in the characteristic parameters according to the quotient of the summation result and the number of the static time periods;
searching the number of the action time within a preset time period and recording the number of the action time as the movement times in the characteristic parameters;
calculating the area of each movement region of the subject based on the position coordinates, and recording the sum of all these areas as the motion region of the subject in the characteristic parameters;
summing the intersection points of all the motion paths, and determining the motion complexity in the characteristic parameters according to the quotient of the summation result and the number of the motion paths;
based on the position coordinates, calculating a displacement of the subject in the feature parameters to complete the task.
In one possible implementation, the eye tracking data includes eye coordinates of the subject, a time of eye gaze target stimulation, and a sequence of eye gaze target stimulation; in the case that the input data is eye tracking data, the data calculation module 120 may be specifically configured to:
determining, based on the eye coordinates, a number of times the subject gazed at each of the target stimuli in the characteristic parameter;
recording the time when the eyeball watches the target stimulation as the watching time in the characteristic parameters;
determining a visual scan path of the subject in the characteristic parameters based on the order of the eye gaze target stimuli;
deriving a visual scanning strategy of the subject in the feature parameters based on the visual scanning path and the gaze time.
In one possible implementation, the apparatus 100 further includes:
and the training module is used for training the machine learning model based on the input sample to obtain the trained machine learning model.
In a possible implementation, the training module may be specifically configured to:
taking as input parameters at least one index from the characteristic parameters obtained when test subjects complete at least one task in the virtual reality environment, wherein the test subjects include normal subjects and diseased subjects;
and taking all the input parameters as input samples, and training the machine learning model based on the input samples to obtain the trained machine learning model.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device, and referring to fig. 9, the terminal device 400 may include: at least one processor 410, a memory 420, and a computer program stored in the memory 420 and executable on the at least one processor 410, wherein the processor 410 when executing the computer program implements the steps of any of the method embodiments described above, such as the steps S101 to S103 in the embodiment shown in fig. 2. Alternatively, the processor 410, when executing the computer program, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 110 to 130 shown in fig. 8.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in the memory 420 and executed by the processor 410 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal device 400.
Those skilled in the art will appreciate that fig. 9 is merely an example of a terminal device and is not limiting and may include more or fewer components than shown, or some components may be combined, or different components such as input output devices, network access devices, buses, etc.
The Processor 410 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 420 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 420 is used for storing the computer programs and other programs and data required by the terminal device. The memory 420 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The data processing method for attention deficit hyperactivity disorder provided by the embodiments of the present application can be applied to terminal devices such as computers, tablet computers, notebook computers, netbooks and Personal Digital Assistants (PDAs); the embodiments of the present application impose no limitation on the specific type of terminal device.
Take the terminal device as a computer as an example. Fig. 10 is a block diagram showing a partial structure of a computer provided in an embodiment of the present application. Referring to fig. 10, the computer includes: a communication circuit 510, a memory 520, an input unit 530, a display unit 540, an audio circuit 550, a wireless fidelity (WiFi) module 560, a processor 570, and a power supply 580.
The following describes each component of the computer in detail with reference to fig. 10:
the communication circuit 510 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives an image sample transmitted by the image capturing device and then processes the image sample to the processor 570; in addition, the image acquisition instruction is sent to the image acquisition device. Typically, the communication circuit includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the communication circuit 510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), etc.
The memory 520 may be used to store software programs and modules, and the processor 570 executes the various functional applications and data processing of the computer by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created through use of the computer (such as audio data or a phonebook). Further, the memory 520 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 530 may be used to receive input numeric or character information and to generate key signal inputs related to subject settings and function control of the computer. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also called a touch screen, can collect touch operations by the subject (such as operations on or near the touch panel 531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program. Alternatively, the touch panel 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the subject, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 570, and can also receive and execute commands sent by the processor 570. In addition, the touch panel 531 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 531, the input unit 530 may include other input devices 532. In particular, the other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 540 may be used to display information input by the subject or provided to the subject, as well as the various menus of the computer. The display unit 540 may include a display panel 541, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 531 may cover the display panel 541; when the touch panel 531 detects a touch operation on or near it, the touch operation is transmitted to the processor 570 to determine the type of the touch event, and the processor 570 then provides a corresponding visual output on the display panel 541 according to that type. Although the touch panel 531 and the display panel 541 are shown in fig. 10 as two separate components implementing the input and output functions of the computer, in some embodiments the touch panel 531 and the display panel 541 may be integrated to implement both functions.
The audio circuit 550 may provide an audio interface between the subject and the computer. On one hand, the audio circuit 550 may transmit the electrical signal converted from received audio data to a speaker, which converts it into a sound signal for output; on the other hand, a microphone converts a collected sound signal into an electrical signal, which the audio circuit 550 receives and converts into audio data. The audio data is then processed by the processor 570 and either transmitted via the communication circuit 510 to, for example, another computer, or output to the memory 520 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 560, the computer can help the subject send and receive e-mails, browse web pages, and access streaming media, providing the subject with wireless broadband Internet access. Although fig. 10 shows the WiFi module 560, it is understood that the module is not an essential part of the computer and may be omitted as needed without changing the essence of the invention.
The processor 570 is the control center of the computer: it connects the various parts of the entire computer using various interfaces and lines, and it performs the various functions of the computer and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the computer as a whole. Optionally, the processor 570 may include one or more processing units; preferably, the processor 570 may integrate an application processor, which mainly handles the operating system, subject interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 570.
The computer also includes a power supply 580 (e.g., a battery) for powering the various components, and preferably, the power supply 580 is logically coupled to the processor 570 via a power management system that provides management of charging, discharging, and power consumption.
Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the embodiments of the data processing method for attention deficit hyperactivity disorder described above.
The embodiment of the present application provides a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the embodiments of the data processing method for attention deficit hyperactivity disorder described above.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative; the division into modules or units is only one kind of logical division, and other divisions are possible in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A data processing method for attention deficit hyperactivity disorder, comprising:
obtaining input data of a subject completing a task in a virtual reality environment;
calculating characteristic parameters of the subject based on the input data;
and outputting a classification result based on the characteristic parameters and the trained machine learning model.
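For illustration, the three claimed steps can be read as a minimal acquire-compute-classify pipeline. The sketch below is an assumption-laden outline in Python, not the patented implementation: the dict-based input layout, the function names, and the use of a scikit-learn random forest are all choices made for this sketch.

    # Minimal sketch of the claimed pipeline (assumptions: dict input,
    # scikit-learn-style trained model, three example indices).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def compute_features(input_data: dict) -> np.ndarray:
        # Flatten the characteristic parameters into one feature vector.
        return np.array([input_data["accuracy"],
                         input_data["reaction_time_sd"],
                         input_data["movement_area"]])

    def classify(input_data: dict, model: RandomForestClassifier) -> int:
        features = compute_features(input_data).reshape(1, -1)
        # Label semantics are assumed: 0 = typical profile, 1 = ADHD-like profile.
        return int(model.predict(features)[0])

A real feature vector would carry every index of claims 5 to 7; three indices are shown only to keep the sketch short.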
2. The data processing method of attention deficit hyperactivity disorder according to claim 1, wherein at least one game scenario is stored in the virtual reality environment, and the subject completes a task in the game scenario.
3. The data processing method of attention deficit hyperactivity disorder according to claim 1, further comprising:
training a machine learning model based on an input sample to obtain the trained machine learning model;
the training of the machine learning model based on the input samples to obtain the trained machine learning model comprises the following steps:
taking at least one index among the characteristic parameters obtained by test subjects completing a task in the virtual reality environment as input parameters, wherein the test subjects comprise normal subjects and diseased subjects;
and taking the input parameters as input samples, and training the machine learning model based on the input samples to obtain the trained machine learning model.
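A minimal training sketch for this claim follows, assuming one feature vector and one binary label per test subject (0 = normal subject, 1 = diseased subject); the 80/20 split and the random-forest model are choices of the sketch, since the claim does not fix a model family.

    # Sketch of the training step (assumed data shapes: X is
    # [n_subjects, n_indices], y holds 0/1 diagnosis labels).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def train_model(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)
        # Held-out accuracy gives a rough sanity check of the trained model.
        print("held-out accuracy:", model.score(X_test, y_test))
        return model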
4. The data processing method of attention deficit hyperactivity disorder according to claim 1, wherein said input data includes at least one of task performance data, motion sensing data, eye tracking data, and brain electrical data;
the obtaining input data of a subject completing a task in a virtual reality environment comprises:
acquiring the task performance data under the current task through a gesture tracker and/or a language processing device;
acquiring the motion sensing data acquired by the action recorder under the current task;
acquiring the eye movement tracking data acquired by eye movement tracking equipment under the current task;
and acquiring the electroencephalogram data acquired by the electroencephalogram acquisition device under the current task.
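One way to hold the four modalities of this claim together is a single record per subject; the field names and element types below are assumptions of the sketch, since the claim does not prescribe a data layout.

    # Assumed container for the four input modalities of claim 4.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class SubjectInputData:
        task_performance: dict = field(default_factory=dict)  # from gesture tracker / language device
        motion_sensing: List[Tuple[float, float, float]] = field(default_factory=list)  # (t, x, y) from action recorder
        eye_tracking: List[Tuple[float, float, float]] = field(default_factory=list)    # (t, x, y) gaze samples
        eeg: List[float] = field(default_factory=list)        # raw samples from the EEG acquisition device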
5. The data processing method of attention deficit hyperactivity disorder according to claim 4, wherein the task performance data includes: the number of correct responses of the subject to a target stimulus, the response start time of the subject when a target stimulus appears, and the number of times the subject fails to respond when the target stimulus appears, wherein a task comprises at least one target stimulus;
in the case where the input data is task performance data, the calculating characteristic parameters of the subject based on the input data includes:
calculating the ratio of the number of correct reactions to the total number of target stimuli in the current task to obtain the correct rate in the characteristic parameters;
calculating a difference between the response start time of a correct response and the target stimulus start time, and calculating the standard deviation of the reaction time in the characteristic parameters based on the difference;
recording the number of times, under the current task, that the response start time falls before the corresponding target stimulus start time as the error number in the characteristic parameters;
and recording the number of times the subject fails to respond when the target stimulus appears as the number of missed reports in the characteristic parameters.
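The four task-performance indices of this claim reduce to simple per-trial arithmetic. In the sketch below each trial is assumed to be a dict with the stimulus onset, the response onset (None when the subject did not respond), and a correctness flag; that trial format is an assumption, not part of the claim.

    # Sketch of the claim-5 indices (assumed trial format described above).
    import statistics

    def task_performance_features(trials: list) -> dict:
        correct = [t for t in trials if t["correct"]]  # correct trials are assumed to have a response
        # Reaction time = response onset minus stimulus onset, correct trials only.
        rts = [t["response_onset"] - t["stimulus_onset"] for t in correct]
        return {
            # Correct rate: correct responses over the total number of target stimuli.
            "accuracy": len(correct) / len(trials),
            "reaction_time_sd": statistics.stdev(rts) if len(rts) > 1 else 0.0,
            # Responses initiated before the stimulus appeared count as errors.
            "error_count": sum(1 for t in trials
                               if t["response_onset"] is not None
                               and t["response_onset"] < t["stimulus_onset"]),
            # Stimuli that drew no response at all count as missed reports.
            "missed_count": sum(1 for t in trials if t["response_onset"] is None),
        }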
6. The data processing method of attention deficit hyperactivity disorder according to claim 4, wherein the motion sensing data includes: the resting time periods of the subject, the action times of the subject when changing actions, the position coordinates of the motion path recorded by the action recorder at each movement of the subject, the motion paths of the subject during completion of the task, and the number of the motion paths;
in the case that the input data is motion sensing data, the calculating characteristic parameters of the subject based on the input data comprises:
summing all the resting time periods, and determining the resting time length in the characteristic parameters according to the quotient of the summation result and the number of the resting time periods;
counting the action times that fall within a preset time period, and recording the count as the number of movements in the characteristic parameters;
calculating the area covered by each movement of the subject based on the position coordinates, and recording the sum of all the areas as the movement area of the subject in the characteristic parameters;
summing the intersection points of all the motion paths, and determining the motion complexity in the characteristic parameters according to the quotient of the summation result and the number of the motion paths;
and calculating, based on the position coordinates, the displacement of the subject in completing the task as the displacement in the characteristic parameters.
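The motion indices of this claim can be sketched as below. The claim does not say how the "area" of a movement or the path intersections are computed, so the bounding-box area and the externally supplied intersection count are assumptions of this sketch; at least one rest period and one recorded path are also assumed.

    # Sketch of the claim-6 indices (area and intersection handling assumed).
    import math

    def motion_features(rest_periods, action_times, window_end, paths, n_intersections):
        # Mean resting duration: total resting time over the number of periods.
        rest_mean = sum(rest_periods) / len(rest_periods)
        # Number of movements: action times falling inside the preset period.
        move_count = sum(1 for t in action_times if t <= window_end)
        area = 0.0
        for path in paths:  # each path is a list of (x, y) coordinates
            xs, ys = zip(*path)
            area += (max(xs) - min(xs)) * (max(ys) - min(ys))  # bounding box (assumed)
        # Motion complexity: intersection points per motion path.
        complexity = n_intersections / len(paths)
        # Net displacement from the first to the last recorded coordinate.
        displacement = math.dist(paths[0][0], paths[-1][-1])
        return {"rest_mean": rest_mean, "move_count": move_count,
                "movement_area": area, "complexity": complexity,
                "displacement": displacement}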
7. The data processing method of attention deficit hyperactivity disorder according to claim 4, wherein the eye tracking data includes the eyeball coordinates of the subject, the times at which the eyeballs fixate on the target stimuli, and the order in which the eyeballs fixate on the target stimuli;
in the case where the input data is eye tracking data, the calculating characteristic parameters of the subject based on the input data comprises:
determining, based on the eyeball coordinates, the number of times the subject gazed at each of the target stimuli in the characteristic parameters;
recording the time during which the eyeballs fixate on the target stimuli as the gaze time in the characteristic parameters;
determining a visual scanning path of the subject in the characteristic parameters based on the order in which the eyeballs fixate on the target stimuli;
and deriving a visual scanning strategy of the subject in the characteristic parameters based on the visual scanning path and the gaze time.
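The eye-tracking indices of this claim can be sketched from a time-ordered fixation list. Each fixation below is assumed to carry the id of the target region it falls in (None when off target) and its duration; the "strategy" summary (on-target ratio and revisit count) is one possible reading, since the claim leaves the strategy derivation open.

    # Sketch of the claim-7 indices (fixation format and strategy assumed).
    from collections import Counter

    def eye_tracking_features(fixations: list) -> dict:
        # fixations: time-ordered list of (region_id_or_None, duration_seconds)
        on_target = [(r, d) for r, d in fixations if r is not None]
        counts = Counter(r for r, _ in on_target)   # gaze count per target stimulus
        gaze_time = sum(d for _, d in on_target)    # total fixation time on targets
        scan_path = [r for r, _ in on_target]       # order in which targets were viewed
        total_time = sum(d for _, d in fixations)
        strategy = {
            "on_target_ratio": gaze_time / total_time if total_time else 0.0,
            "revisits": len(scan_path) - len(set(scan_path)),
        }
        return {"counts": counts, "gaze_time": gaze_time,
                "scan_path": scan_path, "strategy": strategy}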
8. A data processing apparatus for attention deficit hyperactivity disorder, comprising:
the data acquisition module is used for acquiring input data of a subject for completing tasks in the virtual reality environment;
the data calculation module is used for calculating characteristic parameters of the subject based on the input data;
and the result output module is used for outputting a classification result based on the characteristic parameters and the trained machine learning model.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the data processing method of attention deficit hyperactivity disorder according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, implements a method of data processing of attention deficit hyperactivity disorder according to any one of claims 1-7.
CN201911398269.9A 2019-12-30 2019-12-30 Data processing device for attention deficit hyperactivity disorder Active CN110970130B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911398269.9A CN110970130B (en) 2019-12-30 2019-12-30 Data processing device for attention deficit hyperactivity disorder
PCT/CN2020/129452 WO2021135692A1 (en) 2019-12-30 2020-11-17 Data processing method and device for attention deficit hyperactivity disorder and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911398269.9A CN110970130B (en) 2019-12-30 2019-12-30 Data processing device for attention deficit hyperactivity disorder

Publications (2)

Publication Number Publication Date
CN110970130A true CN110970130A (en) 2020-04-07
CN110970130B CN110970130B (en) 2023-06-27

Family

ID=70037418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911398269.9A Active CN110970130B (en) 2019-12-30 2019-12-30 Data processing device for attention deficit hyperactivity disorder

Country Status (2)

Country Link
CN (1) CN110970130B (en)
WO (1) WO2021135692A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111528859A (en) * 2020-05-13 2020-08-14 浙江大学人工智能研究所德清研究院 Child ADHD screening and evaluating system based on multi-modal deep learning technology
CN111528867A (en) * 2020-05-13 2020-08-14 湖州维智信息技术有限公司 Expression feature vector determination method for child ADHD screening and evaluating system
CN111563633A (en) * 2020-05-15 2020-08-21 上海乂学教育科技有限公司 Reading training system and method based on eye tracker
WO2021135692A1 (en) * 2019-12-30 2021-07-08 佛山创视嘉科技有限公司 Data processing method and device for attention deficit hyperactivity disorder and terminal device
CN113435335A (en) * 2021-06-28 2021-09-24 平安科技(深圳)有限公司 Microscopic expression recognition method and device, electronic equipment and storage medium
CN113425293A (en) * 2021-06-29 2021-09-24 上海交通大学医学院附属新华医院 Auditory dyscognition assessment system and method
CN113456075A (en) * 2021-07-02 2021-10-01 西安中盛凯新技术发展有限责任公司 Concentration assessment training method based on eye movement tracking and brain wave monitoring technology
CN113576482A (en) * 2021-09-28 2021-11-02 之江实验室 Attention deviation training evaluation system and method based on composite expression processing
CN117198537A (en) * 2023-11-07 2023-12-08 北京无疆脑智科技有限公司 Task completion data analysis method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003075762A1 (en) * 2002-03-11 2003-09-18 Adhd Solutions Ltd. A method and system for diagnosis and treatment of adhd and add.
US20040220493A1 (en) * 2000-05-17 2004-11-04 Teicher Martin H. Method for determining fluctuation in attentional state and overall attentional state
US20080280276A1 (en) * 2007-05-09 2008-11-13 Oregon Health & Science University And Oregon Research Institute Virtual reality tools and techniques for measuring cognitive ability and cognitive impairment
CN107519622A (en) * 2017-08-21 2017-12-29 南通大学 Spatial cognition rehabilitation training system and method based on virtual reality and the dynamic tracking of eye
WO2019035910A1 (en) * 2017-08-15 2019-02-21 Akili Interactive Labs, Inc. Cognitive platform including computerized elements
CN109712710A (en) * 2018-04-26 2019-05-03 南京大学 A kind of infant development obstacle intelligent evaluation method based on three-dimensional eye movement characteristics
CN110024014A (en) * 2016-08-03 2019-07-16 阿克利互动实验室公司 Arouse the cognition platform of element including computerization
CN110070944A (en) * 2019-05-17 2019-07-30 段新 Training system is assessed based on virtual environment and the social function of virtual role
US20190261908A1 (en) * 2016-07-19 2019-08-29 Akili Interactive Labs, Inc. Platforms to implement signal detection metrics in adaptive response-deadline procedures

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050216243A1 (en) * 2004-03-02 2005-09-29 Simon Graham Computer-simulated virtual reality environments for evaluation of neurobehavioral performance
WO2012161657A1 (en) * 2011-05-20 2012-11-29 Nanyang Technological University Systems, apparatuses, devices, and processes for synergistic neuro-physiological rehabilitation and/or functional development
CN110970130B (en) * 2019-12-30 2023-06-27 佛山创视嘉科技有限公司 Data processing device for attention deficit hyperactivity disorder

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040220493A1 (en) * 2000-05-17 2004-11-04 Teicher Martin H. Method for determining fluctuation in attentional state and overall attentional state
WO2003075762A1 (en) * 2002-03-11 2003-09-18 Adhd Solutions Ltd. A method and system for diagnosis and treatment of adhd and add.
US20080280276A1 (en) * 2007-05-09 2008-11-13 Oregon Health & Science University And Oregon Research Institute Virtual reality tools and techniques for measuring cognitive ability and cognitive impairment
US20190261908A1 (en) * 2016-07-19 2019-08-29 Akili Interactive Labs, Inc. Platforms to implement signal detection metrics in adaptive response-deadline procedures
CN110024014A (en) * 2016-08-03 2019-07-16 阿克利互动实验室公司 Arouse the cognition platform of element including computerization
WO2019035910A1 (en) * 2017-08-15 2019-02-21 Akili Interactive Labs, Inc. Cognitive platform including computerized elements
CN107519622A (en) * 2017-08-21 2017-12-29 南通大学 Spatial cognition rehabilitation training system and method based on virtual reality and the dynamic tracking of eye
CN109712710A (en) * 2018-04-26 2019-05-03 南京大学 A kind of infant development obstacle intelligent evaluation method based on three-dimensional eye movement characteristics
CN110070944A (en) * 2019-05-17 2019-07-30 段新 Training system is assessed based on virtual environment and the social function of virtual role

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021135692A1 (en) * 2019-12-30 2021-07-08 佛山创视嘉科技有限公司 Data processing method and device for attention deficit hyperactivity disorder and terminal device
CN111528859A (en) * 2020-05-13 2020-08-14 浙江大学人工智能研究所德清研究院 Child ADHD screening and evaluating system based on multi-modal deep learning technology
CN111528867A (en) * 2020-05-13 2020-08-14 湖州维智信息技术有限公司 Expression feature vector determination method for child ADHD screening and evaluating system
CN111528859B (en) * 2020-05-13 2023-04-18 浙江大学人工智能研究所德清研究院 Child ADHD screening and evaluating system based on multi-modal deep learning technology
CN111563633A (en) * 2020-05-15 2020-08-21 上海乂学教育科技有限公司 Reading training system and method based on eye tracker
CN113435335A (en) * 2021-06-28 2021-09-24 平安科技(深圳)有限公司 Microscopic expression recognition method and device, electronic equipment and storage medium
CN113425293A (en) * 2021-06-29 2021-09-24 上海交通大学医学院附属新华医院 Auditory dyscognition assessment system and method
CN113456075A (en) * 2021-07-02 2021-10-01 西安中盛凯新技术发展有限责任公司 Concentration assessment training method based on eye movement tracking and brain wave monitoring technology
CN113576482A (en) * 2021-09-28 2021-11-02 之江实验室 Attention deviation training evaluation system and method based on composite expression processing
CN117198537A (en) * 2023-11-07 2023-12-08 北京无疆脑智科技有限公司 Task completion data analysis method and device, electronic equipment and storage medium
CN117198537B (en) * 2023-11-07 2024-03-26 北京无疆脑智科技有限公司 Task completion data analysis method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110970130B (en) 2023-06-27
WO2021135692A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
CN110970130B (en) Data processing device for attention deficit hyperactivity disorder
Huynh et al. Engagemon: Multi-modal engagement sensing for mobile games
JP6125670B2 (en) Brain-computer interface (BCI) system based on temporal and spatial patterns of collected biophysical signals
US20120203725A1 (en) Aggregation of bio-signals from multiple individuals to achieve a collective outcome
RU2708807C2 (en) Algorithm of integrated remote contactless multichannel analysis of psychoemotional and physiological state of object based on audio and video content
Elzeiny et al. Machine learning approaches to automatic stress detection: A review
Conati et al. Modeling user affect from causes and effects
CN103154953A (en) Measuring affective data for web-enabled applications
Al-Ghannam et al. Prayer activity monitoring and recognition using acceleration features with mobile phone
CN114648354A (en) Advertisement evaluation method and system based on eye movement tracking and emotional state
Chen et al. FaceEngage: Robust estimation of gameplay engagement from user-contributed (YouTube) videos
Putze et al. Understanding hci practices and challenges of experiment reporting with brain signals: Towards reproducibility and reuse
CN115713246A (en) Multi-modal man-machine interaction performance evaluation method for virtual scene
Ferrari et al. Using voice and biofeedback to predict user engagement during requirements interviews
Baskaran et al. Multi-dimensional task recognition for human-robot teaming: literature review
Jianwattanapaisarn et al. Emotional characteristic analysis of human gait while real-time movie viewing
Hou Deep Learning-Based Human Emotion Detection Framework Using Facial Expressions
CN114052736B (en) System and method for evaluating cognitive function
CN113974589A (en) Multi-modal behavior paradigm evaluation optimization system and cognitive ability evaluation method
Ekiz et al. Long short-term memory network based unobtrusive workload monitoring with consumer grade smartwatches
KR20220158957A (en) System for prediction personal propensity using eye tracking and real-time facial expression analysis and method thereof
Destyanto Emotion Detection Research: A Systematic Review Focuses on Data Type, Classifier Algorithm, and Experimental Methods
Salman et al. Improvement of Eye Tracking Based on Deep Learning Model for General Purpose Applications
Sharifara et al. A robot-based cognitive assessment model based on visual working memory and attention level
Nam et al. FacialCueNet: unmasking deception-an interpretable model for criminal interrogation using facial expressions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201012

Address after: Room 601, 6th floor, No.17, Qinghui Road, Central District neighborhood committee, Daliang sub district office, Shunde District, Foshan City, Guangdong Province

Applicant after: Foshan chuangshijia Technology Co.,Ltd.

Address before: 528300 Guangdong Foshan Shunde District Daliang Street New Gui Nan Road Yi Ju Garden two phase 19 19 401

Applicant before: Duan Xin

GR01 Patent grant