WO2021135692A1 - A data processing method, device, and terminal device for attention deficit hyperactivity disorder - Google Patents

A data processing method, device, and terminal device for attention deficit hyperactivity disorder

Info

Publication number
WO2021135692A1
WO2021135692A1 · PCT/CN2020/129452
Authority
WO
WIPO (PCT)
Prior art keywords
subject
data
task
time
hyperactivity disorder
Prior art date
Application number
PCT/CN2020/129452
Other languages
English (en)
French (fr)
Inventor
段新
段拙然
Original Assignee
佛山创视嘉科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 佛山创视嘉科技有限公司
Publication of WO2021135692A1


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], i.e. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of computer technology, and in particular to a data processing method, device and terminal equipment for attention deficit hyperactivity disorder.
  • Attention deficit hyperactivity disorder (ADHD) is a common mental disorder of childhood. It is divided into three presentations: predominantly inattentive (ADHD-I), predominantly hyperactive-impulsive (ADHD-H), and combined (ADHD-C). ADHD mainly manifests as inattention, hyperactivity, impulsivity, and poor self-control, and it affects children's learning, communication with others, and conduct.
  • At present, attention deficit hyperactivity disorder is mostly diagnosed through interviews, observations, and questionnaires, with the diagnosis reached through evaluation.
  • This diagnostic process is largely subjective, and misdiagnosis or missed diagnosis is often caused by children being naughty or nervous.
  • One of the objectives of the embodiments of the present application is to provide a data processing method, device, and terminal device for attention deficit hyperactivity disorder, aiming to solve the problem of strong subjectivity in the judgment of attention deficit hyperactivity disorder.
  • In a first aspect, a data processing method for attention deficit hyperactivity disorder is provided: obtain input data generated as a subject completes tasks in a virtual reality environment; calculate characteristic parameters of the subject based on the input data; and output a classification result based on the characteristic parameters and a trained machine learning model.
  • a data processing device for attention deficit hyperactivity disorder including:
  • the data acquisition module is used to acquire input data for subjects to complete tasks in the virtual reality environment
  • a data calculation module configured to calculate characteristic parameters of the subject based on the input data
  • the result output module is used to output classification results based on the feature parameters and the trained machine learning model.
  • a data processing terminal device for attention deficit hyperactivity disorder, including: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein, when the processor executes the computer program, the data processing method for attention deficit hyperactivity disorder according to any one of the first aspects is realized.
  • The embodiments of the application have the beneficial effect that test input data are obtained as the subject completes tasks in the virtual reality environment, characteristic parameters are calculated from the input data, and finally the characteristic parameters are input into the machine learning model, which evaluates the subject and outputs a classification result.
  • This application is based on test data collected by subjects completing tasks in a virtual reality environment. Subjects can complete the test in a relaxed environment, avoiding inaccurate data collected due to tension during interviews or paper-written tests.
  • Second, this application uses a machine learning model to classify automatically according to the characteristic parameters, avoiding judgments driven by human subjective factors and improving the objectivity and accuracy of ADHD assessment; moreover, virtual reality (VR) scenes are convenient to design.
  • The behavioral, cognitive, and physiological data generated in the VR environment can serve as effective input features for a classifier distinguishing patients from normal subjects, and it is easy to obtain biomarkers with ADHD characteristics.
  • FIG. 1 is a schematic diagram of an application scenario of a data processing method for attention deficit hyperactivity disorder provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a data processing method for attention deficit hyperactivity disorder according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart of a specific method after data input in FIG. 2 according to an embodiment of the present application;
  • FIG. 4 is a schematic flowchart 1 of the method for calculating the characteristic parameters in FIG. 2 provided by an embodiment of the present application;
  • FIG. 5 is a second schematic flowchart of the method for calculating the characteristic parameters in FIG. 2 provided by an embodiment of the present application;
  • FIG. 6 is a third flowchart of the method for calculating the characteristic parameters in FIG. 2 provided by an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of a machine learning model training method provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a data processing device for attention deficit hyperactivity disorder according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 10 is a block diagram of a part of the structure of a computer provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of an application scenario of a data processing method for attention deficit hyperactivity disorder provided by an embodiment of the application.
  • The above data processing method for attention deficit hyperactivity disorder can be used to evaluate subjects for ADHD.
  • the terminal device 20 is used to obtain test data of the testee 10 completing tasks in the virtual reality environment, analyze and evaluate the test data, and finally obtain a classification result.
  • The doctor can judge, based on the classification result of the terminal device 20, whether the testee 10 has attention deficit hyperactivity disorder and, if so, its type.
  • FIG. 2 shows a schematic flowchart of a data processing method for attention deficit hyperactivity disorder provided by the present application.
  • the data processing method for attention deficit hyperactivity disorder is described in detail as follows:
  • S101 Obtain input data for a subject to complete a task in a virtual reality environment.
  • The virtual reality environment stores games that can distinguish ADHD patients from normal subjects, such as finding the differences between two environmental scenes, archery, and recognizing characters' expressions in social situations.
  • As the subjects complete the VR tasks, input data that can be used to classify ADHD characteristics such as inattention and hyperactivity-impulsivity can be collected.
  • Multiple games can be set in one task. For example, the spot-the-difference games between two environment scenes can be set with different difficulty levels, and in the task of identifying characters' expressions in social situations, expressions of different complexity can also be set for testing.
  • The VR spot-the-difference scene evaluates spatial working memory, selective attention, and visual information search. Finding the "difference" between scenes is a goal-oriented behavior that requires attention and is affected by eye position and gaze latency.
  • The ability of ADHD patients to detect changes is weaker than that of normal subjects, especially for subtle changes, mainly because of deficits in the control of free eye movements and attention in ADHD patients.
  • the VR fixed target archery game scene evaluates the concentration and endurance of attention. Audio-visual interference factors can be set.
  • Fixed-target static shooting requires staring at the bullseye; within the stipulated time, the closer the duration of staring at the bullseye is to the stipulated time, the higher the target ring score.
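As a sketch of that scoring rule, the mapping below converts the fraction of the stipulated time spent staring at the bullseye into a ring score. The linear mapping and the 0–10 ring range are illustrative assumptions, not values given in the application.

```python
def ring_score(gaze_seconds: float, stipulated_seconds: float, max_rings: int = 10) -> int:
    """Map the fraction of the stipulated time spent staring at the
    bullseye onto a 0..max_rings target-ring score (hypothetical rule)."""
    if stipulated_seconds <= 0:
        raise ValueError("stipulated time must be positive")
    fraction = min(gaze_seconds / stipulated_seconds, 1.0)
    return round(fraction * max_rings)
```

A subject who stares for the full stipulated time would score the maximum ring value under this assumed mapping.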
  • The VR moving-target game scene is a continuous performance test (CPT) task: the subject is required to respond to target stimuli as quickly as possible and not respond to non-target stimuli, which requires auditory and visual selectivity.
  • VR recognition of characters' expressions in social situations can set emotion recognition at different difficulty levels: recognition of static and dynamic expressions; recognition of positive and negative expressions (negative emotions such as anger, sadness, and fear); and recognition of various emotions at different intensities (such as 30%, 50%, 70%, and 100%).
  • The situational VR task covers emotion recognition and processing in social situations, such as visual attention and emotion recognition in interpersonal communication; complex and subtle emotional changes in the situation require good visual attention to interpret and identify. Participants can interact (answer) by pointing with a finger or using natural language, objectively assessing facial expression (emotion) recognition, theory of mind, selective attention, selective response time, visual information search, and so on.
  • VR scene tasks focus on evaluating different characteristics of ADHD, such as attention and hyperactivity.
  • At least one VR scene can be selected, and at least one set of input data can be obtained for each scene.
  • Input data includes: task performance data, motion sensing data, eye tracking data, and EEG data.
  • step S101 may include:
  • S1011 Obtain the task performance data under the current task through a gesture tracker and/or a language processing device.
  • the target stimulus refers to the instruction or task content of the task in VR.
  • For example, in the expression-recognition task the sought expression is the target stimulus; in the flying saucer game the flying saucer is the target stimulus.
  • Task performance data can be obtained through a gesture tracker. For example, when the subject points at a smiling face in the scene with a hand, the gesture tracker can determine whether the subject completed the task correctly, and hand movements can also indicate whether the subject responded when a target stimulus occurred: if a target stimulus occurs but the subject's hand does not move, the subject did not respond to that target stimulus.
  • Task performance data can also be obtained through a language processing device. For example, when asked to find the smiling face in the scene, the subject can speak its position or serial number in natural language; the computer converts and recognizes the subject's speech and finally determines whether the subject completed the task correctly.
  • The gesture tracking can be optical tracking, such as a Leap Motion (somatosensory controller); it can also be inertial tracking, such as a data glove with sensors on the hand.
  • S1012 Acquire the motion sensing data collected by the motion recorder in the current task.
  • The definition of ADHD describes the physical movement behavior of the diagnosed person, and the stability of the subject's body posture is one of the criteria used to judge whether the subject has ADHD.
  • Existing methods are often to collect the body movement data of the diagnosed person over a period of time, and compare the difference in the body movement data of the diagnosed person and the normal person to determine whether the diagnosed person is ill.
  • the action recorder can use optical tracking or inertial tracking to capture the action
  • the implementation of the action recorder can be a wearable device or a scene depth analysis solution
  • The scene depth analysis solution uses an optical sensor to receive optical signals and analyze the depth information of the subject, thereby determining the subject's body position and posture. The wearable device fixes sensors on the subject's joints or key points and obtains the subject's physical movement from the position changes or bending degree measured at those joints or key points, where the key points can include the head, wrists, and ankles.
  • The motion recorder can record the subject's body motion data through two kinds of devices: one is an accelerometer device, which uses a three-axis accelerometer to record activity along three axes and obtain motion and inertial measurement data; the other is an infrared optical position detection and analysis system.
  • the infrared optical position detection and analysis system is a moving object analysis system developed based on optical sensitive devices and stereo measurement.
  • S1013 Acquire the eye tracking data collected by the eye tracking device under the current task.
  • eye movement can represent the gaze time and gaze direction.
  • the eye tracking device can obtain eye tracking data such as the location, time, and sequence of individual eye gaze, and then obtain the number of gazes, gaze time, visual scanning path, visual scanning strategy and other characteristic parameters.
  • the eye tracking device can objectively record visual attention and visual search patterns, and can provide evaluation indicators that distinguish the different visual attention and visual search patterns between ADHD patients and normal people.
  • The eye tracking device can be integrated in the VR head-mounted display and is mainly used to track the eyeball and estimate its line of sight. Through eyeball tracking, the fixation time, eyeball displacement, displacement time, and sequence of eye movements can be collected.
  • S1014 Acquire the EEG data collected by the EEG acquisition device in the current task.
  • The EEG acquisition device can collect the electrical response of the subject's brain during target stimulation, that is, event-related potentials.
  • Common external stimuli include video and audio stimuli; internal stimuli are commonly associated with attention and decision-making. Tasks related to attention, decision-making, and working memory are called mental tasks.
  • EEG data comprise the electroencephalogram (EEG) and EEG-evoked event-related potentials (ERP), which can include the EEG α, β, and θ waves, the P200 and P300 potentials, the spectral peak around 11 Hz, the P2-N2 peak-to-peak value, and so on.
  • S102 Calculate and obtain characteristic parameters of the subject based on the input data.
  • step S102 when the input data is task performance data, the implementation process of step S102 may include:
  • The task performance data may include: the number of correct responses of the subject when the target stimulus occurs, the start time of the subject's response when the target stimulus occurs, and the number of times the subject does not respond when the target stimulus occurs, where one task includes at least one target stimulus.
  • The number of correct responses refers to the total number of target stimuli that the subject answers correctly when they are displayed in VR. For example, in the task of finding a smiling face in a social situation, if the subject points out the smiling face correctly, the subject has responded correctly to that target stimulus.
  • S1021 Calculate the ratio of the number of correct responses to the total number of target stimuli in the current task to obtain the correct rate in the characteristic parameter.
  • the accuracy rate reflects the concentration of the subject's attention. The higher the accuracy rate, the more concentrated the attention of the subject.
  • The total number of target stimuli refers to the number of instructions that need to be completed. For example, if a spot-the-difference task has three groups of environmental scenes with two scenes per group, and the subject must find the difference between the two scenes in each group, the total number of target stimuli is three; in the flying saucer task, if there are 15 flying saucers in total, the total number of target stimuli is 15.
  • The correct rate is calculated as a = A / Z, where a is the correct rate, A is the number of correct responses, and Z is the total number of target stimuli.
  • S1022 Calculate the difference between the start time of the reaction and the start time of the target stimulus at the time of correct reaction, and calculate the standard deviation of the reaction time in the characteristic parameter based on the difference.
  • the standard deviation of the reaction time is the standard deviation of the above-mentioned difference.
  • the response start time is the time when the subject begins to respond to the current target stimulus.
  • the subject's response may include actions or language.
  • The number of errors (false alarms) includes the number of times the subject responded when no target stimulus appeared. If the difference between the response start time and the target stimulus time is small, the subject's response time is short. A short response time combined with a high number of errors indicates more pronounced impulsivity; a long response time combined with a high number of errors indicates more pronounced inattention.
  • S1024 Record the number of times the subject does not respond when the target stimulus occurs as the number of missed reports in the characteristic parameter.
  • If, when a target stimulus occurs, the subject responds neither before nor after the stimulus starts, it means that the subject did not respond to that target stimulus.
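The task-performance parameters of steps S1021, S1022, and S1024 can be sketched as below. The per-stimulus record format (`stimulus_time`, `response_time`, `correct`) is an assumption for illustration, not a structure defined in the application.

```python
import statistics

def task_performance_features(responses, total_targets):
    """Compute the correct rate (S1021), the standard deviation of the
    correct-response reaction times (S1022), and the number of missed
    reports (S1024). Each record in `responses` is a dict with keys
    'stimulus_time', 'response_time' (None if no response), 'correct'."""
    correct = [r for r in responses if r["correct"]]
    accuracy = len(correct) / total_targets                       # a = A / Z
    # Differences between response start time and stimulus start time
    diffs = [r["response_time"] - r["stimulus_time"] for r in correct]
    rt_std = statistics.stdev(diffs) if len(diffs) > 1 else 0.0
    # Missed reports: target stimuli with no response at all
    misses = sum(1 for r in responses if r["response_time"] is None)
    return {"accuracy": accuracy, "rt_std": rt_std, "misses": misses}
```

Feeding these values, together with the false-alarm count, into the later classification step would follow the method described above.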
  • step S102 when the input data is motion sensing data, the implementation process of step S102 may include:
  • The motion sensing data may include: the still periods of the subject, the times at which the subject changes motion, and the position coordinates recorded by the motion recorder each time the subject moves.
  • the motion sensing data may be the motion data of the subject recorded during the current task, or the motion data of the subject recorded under multiple tasks during the entire test period.
  • S1025 Summing each of the stationary time periods, and determining the stationary time length in the characteristic parameter according to the quotient of the summation result and the number of the stationary time periods.
  • the resting duration refers to the average duration of the subject during the resting period.
  • S1026 Search for the number of the action times within a preset time period and record it as the number of exercises in the characteristic parameter.
  • S1027 Calculate the area of each movement of the subject based on the position coordinates, and record the sum of the areas of all the areas as the movement area of the subject in the characteristic parameter.
  • The motion area of the subject may include the total area covered by the motion sensing device through the subject's movement.
  • S1028 Sum the intersection points of all the motion paths, and determine the motion complexity in the characteristic parameter according to the quotient of the summation result and the number of the motion paths.
  • S1029 Calculate, based on the position coordinates, the displacement of the subject in completing the task among the characteristic parameters.
  • The displacement of the subject in completing the task may include the total displacement when the current task is completed, and may also include the total displacement at the completion of the whole test.
  • the characteristic parameters can also include a time scale.
  • The time scale reflects the subject's degree of activity: a sample is recorded as 1 when the subject is in motion and as 0 when the subject is still, and the time scale is then calculated from the counts of ones and zeros.
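A subset of the motion-sensing parameters above (mean still duration from S1025, total displacement from S1029, and the time scale) can be sketched as follows; the sampling scheme and input layout are assumptions for illustration, and the movement-area and path-complexity parameters are omitted here.

```python
import numpy as np

def motion_features(positions, moving_flags, rest_periods):
    """positions: (N, 3) sampled position coordinates;
    moving_flags: length-M sequence of 1 (moving) / 0 (still) samples;
    rest_periods: durations (seconds) of each still period."""
    positions = np.asarray(positions, dtype=float)
    # S1025: quotient of the summed still periods and their count
    rest_duration = sum(rest_periods) / len(rest_periods) if rest_periods else 0.0
    # S1029: total displacement along the sampled path
    steps = np.diff(positions, axis=0)
    displacement = float(np.linalg.norm(steps, axis=1).sum())
    # Time scale: fraction of samples in which the subject is moving
    flags = np.asarray(moving_flags)
    time_scale = float(flags.sum()) / len(flags)
    return {"rest_duration": rest_duration,
            "displacement": displacement,
            "time_scale": time_scale}
```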
  • step S102 when the input data is eye tracking data, the implementation process of step S102 may include:
  • the eye tracking data includes the eye coordinates of the subject, the time when the eye is gazing at the target stimulus, and the order in which the eye is gazing at the target stimulus.
  • the eyeball coordinates can be used to calibrate the stay time and the number of times the eyeball stays at a certain position. Therefore, by counting the number of times the eyeball coordinates appear, the number of gazes of the subject at the target stimulus can be counted.
  • S10211 Record the time for the eyeball to fixate on the target stimulus as the fixation time in the characteristic parameter.
  • S10212 Determine a visual scan path of the subject in the characteristic parameter based on the sequence of the eyeball gazing at the target stimulus.
  • The subject's visual scan path can be known from the order in which the subject views the target stimuli, that is, which content is seen first and which is seen next. By analyzing the subject's visual scan path, the subject's tendency in viewing target stimuli can be known. For example, in a task distinguishing smiling and crying faces, if the visual scan path goes from the smiling face to the crying face, the subject tends to observe the smiling face; if the subject sees the crying face first and then the smiling face, the subject shows an attention bias toward the crying face.
  • S10213 Obtain a visual scanning strategy of the subject in the characteristic parameter based on the visual scanning path and the gaze time.
  • The visual scanning strategy reflects the subject's dwell time on the target stimulus and sensitivity to it. Normal subjects dwell on the target stimulus longer, and their first fixation is also longer than that of ADHD subjects, while ADHD subjects spend more time scanning the entire scene.
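The gaze counts and scan path of steps S10211–S10213 can be sketched from a time-ordered gaze stream. The sample format (timestamp plus the area of interest currently fixated, or `None`) is an assumption for illustration.

```python
from collections import Counter

def eye_tracking_features(samples):
    """samples: time-ordered (timestamp, area_of_interest) pairs, where
    area_of_interest names the fixated target stimulus or is None."""
    fixations = [(t, aoi) for t, aoi in samples if aoi is not None]
    # Number of gazes per target stimulus, by counting fixation hits
    gaze_counts = Counter(aoi for _, aoi in fixations)
    # Visual scan path: the order in which stimuli are first fixated
    scan_path = []
    for _, aoi in fixations:
        if aoi not in scan_path:
            scan_path.append(aoi)
    return gaze_counts, scan_path
```

Combining the per-stimulus gaze times with this scan path would give the visual scanning strategy described above.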
  • step S102 when the input data is EEG data, the implementation process of step S102 may include:
  • Time-frequency-domain features, P300 features, and other such characteristic parameters are obtained.
  • The EEG signal is preprocessed; the preprocessing mainly includes data inspection, band-pass filtering, artifact removal, and segmentation.
  • Time-frequency-domain feature extraction is performed to obtain EEG feature parameters; features such as the P200 and P300 peaks, latencies, and averages are also extracted as EEG feature parameters, and so on.
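A minimal sketch of the band-pass filtering and time-frequency feature step: filter the signal, then integrate the Welch power spectrum over the θ/α/β bands. The band edges and filter order are common conventions, not values from the application.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Band-pass filter an EEG trace and return power per band."""
    b, a = butter(4, [1, 40], btype="bandpass", fs=fs)  # rough pre-filter
    clean = filtfilt(b, a, eeg)
    f, pxx = welch(clean, fs=fs, nperseg=min(len(clean), 512))
    df = f[1] - f[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (f >= lo) & (f < hi)
        powers[name] = float(pxx[mask].sum() * df)  # rectangle integration
    return powers
```

For the around-11 Hz spectral peak mentioned above, the α-band power would dominate for a 10 Hz oscillation.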
  • the characteristic parameters are input to the trained machine learning model, and the classification result is automatically output.
  • the doctor can judge whether the subject is an ADHD patient and what type of ADHD according to the classification result.
  • the output results can be set to four types: attention disorder, hyperactivity and impulsivity, combined performance, and normal.
  • the characteristic parameters can also include:
  • the bias towards the area of interest reflects the value of attention bias.
  • Studies have found that ADHD patients attend to happy faces for less time than normal people and to neutral faces for longer. When a happy-unhappy face pair is presented, ADHD patients attend to the unhappy face while normal people tend toward the happy face. Compared with normal people, ADHD patients attend more to the mouth of an emotional face rather than the eyes, possibly because the opening and closing of the mouth conveys clear positive and negative emotion while the eyes are harder to read. In social situations, ADHD patients do not attend to the facial and body-language information of others who are angry.
  • The characteristic parameters in the above VR archery game scene can also include:
  • the accelerated reaction time and the variation rate of the accelerated reaction time;
  • ADHD patients have cognitive deficits: when the target-throw interval becomes shorter and denser, their ability to accelerate responses decreases and the number of misses increases;
  • hyperactivity indicators: the total time of rest and activity, the average number of changes in a given period, the distance of the activity path, and the activity area;
  • the total fixation time (FT) on the area of interest, the number of eye misses, and the duration of eye misses.
  • the characteristic parameters in the above-mentioned VR search for different scenes can also include:
  • From the eye tracking data, one can also obtain: the number of gazes at the changing area, the gaze time on the changing area, the time of first gazing at the changing area, the area of interest, and so on.
  • the foregoing method may further include:
  • the machine learning model may include algorithms based on classic machine learning (for example, support vector machines (SVM), artificial neural networks (Artificial Neural Network, ANN)) and deep learning.
  • The development environment can use TensorFlow, Python, MXNet, Torch, Theano, CNTK, CUDA, cuDNN, the Caffe open-source library, the LibSVM toolbox, etc.
  • step S201 may include:
  • The input parameters may be at least one index among the characteristic parameters obtained when the testee completes at least one task in VR.
  • the machine learning model may include a support vector machine or a convolutional neural network.
  • the input samples include data collected by a certain number of ADHD patients and a certain number of normal persons by completing tasks in VR.
  • ADHD includes three types: ADHD-I, ADHD-H and ADHD-C.
  • the training of the machine learning model includes:
  • An SVM recursive feature elimination (SVM-RFE) classification method that takes the EEG time-frequency-domain features as input.
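SVM-RFE, as named above, can be sketched with scikit-learn's `RFE` wrapper around a linear SVM; the synthetic feature matrix below stands in for the EEG time-frequency features, which the application does not specify numerically.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Stand-in data: rows would be subjects, columns EEG time-frequency features.
X, y = make_classification(n_samples=80, n_features=20, n_informative=5,
                           random_state=0)

# Recursively drop the lowest-weighted feature of a linear SVM until
# only the requested number of features remains.
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)
print(selector.support_.sum())  # number of retained features
```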
  • An ensemble learning method that selects multiple characteristic parameters as input: ensemble learning is used to combine single classifiers, and the multiple kernel learning (MKL) method is used to combine multiple kernel functions, which can significantly improve classification ability.
  • For multi-modal data such as task performance data, motion sensing data, eye tracking data, and EEG data, the four kinds of features are fused based on multiple kernel learning (MKL) to train a multi-kernel classifier.
  • The SVM adopts a nested cross-validation method: all kinds of data are preprocessed, features are extracted and selected, and classification is finally performed.
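The nested cross-validation mentioned above can be sketched as an inner hyperparameter search wrapped in an outer evaluation loop; the data and parameter grid are illustrative placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=10, random_state=0)

# Inner loop tunes C on each training split; the outer loop then gives
# an unbiased estimate of the tuned classifier's performance.
inner = GridSearchCV(SVC(kernel="rbf"), {"C": [0.1, 1, 10]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```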
  • Deep learning can learn features through a deep nonlinear network structure, forming more abstract deep representations (attribute categories or features) by combining low-level features to achieve complex function approximation, so that the essential features of the data set can be learned.
  • the machine learning model is a convolutional neural network
  • the training of the machine learning model includes:
  • Nodes in the input layer take the task performance data, motion sensing data, eye tracking data, EEG data, etc., as the input vector X1...Xn (features), and the output layer has four neurons: normal, ADHD-I, ADHD-H, and ADHD-C.
  • Each node applies a nonlinear transformation to its input through an activation function, and the output passes through multiple hidden layers; the network's output is compared with the actual or expected output, and the mean squared error is used to measure the difference between the predicted and true values.
  • The back-propagation (BP) algorithm is used to update the connection weights w and biases b, and the loss function is minimized through continuous iteration so that each predicted value gets closer and closer to the true value, until the loss no longer changes, for example until the error approaches 0.001.
  • the Softmax loss function can be used.
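A minimal stand-in for the four-class network above, using scikit-learn's `MLPClassifier`. Note one deviation: `MLPClassifier` trains with cross-entropy over a softmax output (matching the Softmax loss just mentioned) rather than the mean squared error; the random feature matrix and labels are placeholders for the fused VR features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-in input vectors X1..Xn and labels: 0=normal, 1=ADHD-I,
# 2=ADHD-H, 3=ADHD-C. Real inputs would be the VR characteristic parameters.
X = rng.normal(size=(200, 12))
y = rng.integers(0, 4, size=200)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(X, y)
probs = clf.predict_proba(X[:1])
print(probs.shape)  # one probability per output class
```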
  • A machine learning algorithm that takes as input parameters multiple indices of the characteristic parameters obtained when the subject completes multiple tasks in VR.
  • A convolutional neural network (CNN) ADHD classification network, which is a CNN model composed of a series of convolution, pooling, and ReLU activation layers.
  • A point-wise gated Boltzmann machine (PGBM) can be used.
  • The final-layer feature vectors of two or more CNNs for the VR scenes are spliced, and the spliced feature vector is used as the input of the visible layer of the PGBM part, which is trained with the contrastive divergence method.
  • the feature representation of the task-related part of the spliced feature vector can be obtained.
  • This part of the feature representation is used as the input of the newly added fully connected layer to train the newly added fully connected layer.
  • the back propagation depth of the network is limited to the newly added fully connected layer.
  • the Softmax loss function is also used as a guide for the network training process.
  • FIG. 8 shows a structural block diagram of the data processing device for attention deficit hyperactivity disorder provided by an embodiment of the present application. For ease of description, only the parts related to the embodiments of this application are described.
  • the device 100 may include: a data acquisition module 110, a data calculation module 120, and a result output module 130.
  • the data acquisition module 110 is used to acquire input data for subjects to complete tasks in the virtual reality environment
  • the data calculation module 120 is configured to calculate the characteristic parameters of the subject based on the input data
  • the result output module 130 is configured to output classification results based on the feature parameters and the trained machine learning model.
  • At least one game scene is stored in the virtual reality environment, and the subject completes a task in the game scene.
  • the input data includes at least one of task performance data, motion sensing data, eye tracking data, and EEG data; the data acquisition module 110 may be specifically used for:
  • the task performance data includes: the number of correct responses of the subject to target stimuli, the subject's response start time when a target stimulus occurs, and the number of times the subject fails to respond when a target stimulus occurs, where one task includes at least one target stimulus; in the case that the input data is task performance data, the data calculation module 120 may be specifically used to:
  • the number of times the subject does not respond when the target stimulus occurs is recorded as the number of missed reports in the characteristic parameters.
  • the motion sensing data includes: the subject's static time periods, the subject's action times when changing actions, the position coordinates of the motion path recorded by the motion recorder each time the subject moves, the subject's motion paths while completing the task, and the number of motion paths.
  • the calculation module 120 may be specifically used for:
  • The number of action times found within a preset time period is recorded as the number of movements in the characteristic parameters.
  • the eye tracking data includes the eye coordinates of the subject, the time of the eye gazing at the target stimulus, and the order of the eye gazing at the target stimulus;
  • the input data is the eye tracking data.
  • the data calculation module 120 may be specifically used for:
  • a visual scanning strategy of the subject in the characteristic parameter is obtained.
  • the device 100 further includes:
  • the training module is used to train the machine learning model based on the input samples to obtain the trained machine learning model.
  • the training module can be specifically used to:
  • the tested subjects include normal subjects and diseased subjects
  • All the input parameters are used as input samples, and the machine learning model is trained based on the input samples to obtain the trained machine learning model.
  • the embodiment of the present application also provides a terminal device.
  • the terminal device 400 may include: at least one processor 410, a memory 420, and a computer program stored in the memory 420 and runnable on the at least one processor 410; when the processor 410 executes the computer program, the steps in any of the foregoing method embodiments, such as steps S101 to S103 in the embodiment shown in FIG. 2, are implemented.
  • Alternatively, when the processor 410 executes the computer program, the functions of the modules/units in the foregoing device embodiments, for example the functions of modules 110 to 130 shown in FIG. 8, are realized.
  • the computer program may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 420 and executed by the processor 410 to complete the application.
  • the one or more modules/units may be a series of computer program segments capable of completing specific functions, and the program segments are used to describe the execution process of the computer program in the terminal device 400.
  • FIG. 9 is only an example of a terminal device and does not constitute a limitation on the terminal device; it may include more or fewer components than shown, or combine certain components, or have different components, such as input and output devices, network access devices, a bus, etc.
  • the processor 410 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 420 may be an internal storage unit of the terminal device, or an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, and so on.
  • the memory 420 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 420 can also be used to temporarily store data that has been output or will be output.
  • the bus can be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the buses in the drawings of this application are not limited to only one bus or one type of bus.
  • the data processing method for attention deficit hyperactivity disorder provided in the embodiments of this application can be applied to terminal devices such as computers, tablets, notebooks, netbooks, and personal digital assistants (PDAs).
  • the embodiments of this application do not impose any restrictions on the specific types of terminal devices.
  • FIG. 10 shows a block diagram of a part of the structure of a computer provided in an embodiment of the present application.
  • the computer includes: a communication circuit 510, a memory 520, an input unit 530, a display unit 540, an audio circuit 550, a wireless fidelity (WiFi) module 560, a processor 570, a power supply 580 and other components.
  • the communication circuit 510 can be used for receiving and sending signals during information transmission or communication.
  • the communication circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the communication circuit 510 may also communicate with the network and other devices through wireless communication.
  • the above-mentioned wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
  • the memory 520 may be used to store software programs and modules.
  • the processor 570 executes various functional applications and data processing of the computer by running the software programs and modules stored in the memory 520.
  • the memory 520 may mainly include a storage program area and a storage data area.
  • the storage program area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.); the storage data area may store data created through the use of the computer (such as audio data, a phone book, etc.).
  • the memory 520 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the input unit 530 may be used to receive inputted numeric or character information, and generate key signal inputs related to the subject setting and function control of the computer.
  • the input unit 530 may include a touch panel 531 and other input devices 532.
  • the touch panel 531, also called a touch screen, can collect the subject's touch operations on or near it (for example, operations performed by the subject on or near the touch panel 531 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 531 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the subject's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 570, and can receive and execute commands sent by the processor 570.
  • the touch panel 531 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 530 may also include other input devices 532.
  • the other input device 532 may include, but is not limited to, one or more of a physical keyboard, function keys (such as a volume control button, a switch button, etc.), a trackball, a mouse, and a joystick.
  • the display unit 540 may be used to display information input by the subject or information provided to the subject and various menus of the computer.
  • the display unit 540 may include a display panel 541.
  • optionally, the display panel 541 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch panel 531 can cover the display panel 541; when the touch panel 531 detects a touch operation on or near it, it transmits it to the processor 570 to determine the type of the touch event, and the processor 570 then provides a corresponding visual output on the display panel 541 according to the type of the touch event.
  • although the touch panel 531 and the display panel 541 are used as two independent components to realize the input and output functions of the computer, in some embodiments the touch panel 531 and the display panel 541 can be integrated to realize the computer's input and output functions.
  • the audio circuit 550 can provide an audio interface between the subject and the computer.
  • on one hand, the audio circuit 550 can convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 550 and converted into audio data; the audio data is processed by the processor 570 and then sent, for example, to another computer through the communication circuit 510, or output to the memory 520 for further processing.
  • WiFi is a short-distance wireless transmission technology.
  • the computer can help subjects send and receive emails, browse web pages, and access streaming media through the WiFi module 560. It provides the subjects with wireless broadband Internet access.
  • although FIG. 10 shows the WiFi module 560, it is understandable that it is not an essential component of the computer and can be omitted as needed without changing the essence of the invention.
  • the processor 570 is the control center of the computer; it connects the various parts of the entire computer through various interfaces and lines, and performs the computer's various functions and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the computer as a whole.
  • the processor 570 may include one or more processing units; optionally, the processor 570 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the subject interface, and application programs, while the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 570.
  • the computer also includes a power supply 580 (such as a battery) for supplying power to the various components.
  • optionally, the power supply 580 can be logically connected to the processor 570 through a power management system, so that functions such as charging, discharging, and power management are implemented through the power management system.
  • the embodiment of the present application also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in the foregoing embodiments of the data processing method for attention deficit hyperactivity disorder are implemented.
  • the embodiments of the present application also provide a computer program product; when the computer program product runs on a mobile terminal, the steps in the foregoing embodiments of the data processing method for attention deficit hyperactivity disorder are implemented when the mobile terminal executes it.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer program can be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form.
  • the computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a floppy disk, or a CD-ROM.
  • in some jurisdictions, according to legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunications signals.
  • the disclosed apparatus/network equipment and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.


Abstract

A data processing method for attention deficit hyperactivity disorder, comprising: collecting the characteristic parameters output when diseased and normal subjects complete tasks in a virtual reality environment, training a machine learning model with the collected characteristic parameters to obtain a trained machine learning model, and finally using the trained machine learning model to predict and classify the characteristic parameters collected from any subject using the virtual reality environment. Because the test data are collected while the subject completes tasks in a virtual reality environment, the subject can complete the test in a relaxed setting, avoiding inaccurate data caused by nervousness; in addition, because the characteristic parameters are classified by a machine learning model, subjective human judgment is avoided, improving the objectivity and accuracy of the assessment of attention deficit hyperactivity disorder.

Description

A data processing method, apparatus, and terminal device for attention deficit hyperactivity disorder
This application claims priority to Chinese patent application No. 201911398269.9, filed with the China National Intellectual Property Administration on December 30, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a data processing method, apparatus, and terminal device for attention deficit hyperactivity disorder.
Background
Attention deficit hyperactivity disorder (ADHD) is a mental disorder common in childhood, divided into three types: predominantly inattentive (ADHD-I), predominantly hyperactive-impulsive (ADHD-H), and combined (ADHD-C). ADHD is mainly manifested as inattention, hyperactivity, impulsivity, and poor self-control, and affects children's learning, communication with others, and conduct.
At present, ADHD is mostly diagnosed through interviews, observation, and questionnaires. Such assessment-based diagnosis is largely subjective, and misdiagnosis and missed diagnosis often occur because the child is playful or nervous.
Technical Problem
One purpose of the embodiments of this application is to provide a data processing method, apparatus, and terminal device for attention deficit hyperactivity disorder, aiming to solve the problem that the assessment of ADHD is highly subjective.
Technical Solution
To solve the above technical problem, the embodiments of this application adopt the following technical solutions:
In a first aspect, a data processing method for attention deficit hyperactivity disorder is provided, including: acquiring input data of a subject completing a task in a virtual reality environment;
calculating characteristic parameters of the subject based on the input data; and
outputting a classification result based on the characteristic parameters and a trained machine learning model.
In a second aspect, a data processing apparatus for attention deficit hyperactivity disorder is provided, including:
a data acquisition module configured to acquire input data of a subject completing a task in a virtual reality environment;
a data calculation module configured to calculate characteristic parameters of the subject based on the input data; and
a result output module configured to output a classification result based on the characteristic parameters and a trained machine learning model.
In a third aspect, a data processing terminal device for attention deficit hyperactivity disorder is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the data processing method for attention deficit hyperactivity disorder according to any one of the first aspect.
Beneficial Effects
Compared with the prior art, the embodiments of this application have the following beneficial effects: this application obtains test input data by having the subject complete tasks in a virtual reality environment, calculates characteristic parameters from the input data, and finally inputs the characteristic parameters into a machine learning model to evaluate the subject and obtain a classification result. First, because the test data are collected while the subject completes tasks in a virtual reality environment, the subject can complete the test in a relaxed setting, avoiding inaccurate data caused by nervousness during interviews or paper-and-pencil tests. Second, this application classifies automatically via a machine learning model based on the characteristic parameters, avoiding subjective human judgment and improving the objectivity and accuracy of the ADHD assessment. Third, virtual reality (VR) scenes are easy to design, and the behavioral, cognitive, and physiological data produced in the VR environment can serve as effective input features for a classifier distinguishing patients from normal subjects, making it easy to obtain characteristic biomarkers of ADHD.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments or exemplary techniques are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario of the data processing method for attention deficit hyperactivity disorder provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of the data processing method for attention deficit hyperactivity disorder provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of a specific method of acquiring the input data in FIG. 2 provided by an embodiment of this application;
FIG. 4 is a first schematic flowchart of the method for calculating the characteristic parameters in FIG. 2 provided by an embodiment of this application;
FIG. 5 is a second schematic flowchart of the method for calculating the characteristic parameters in FIG. 2 provided by an embodiment of this application;
FIG. 6 is a third schematic flowchart of the method for calculating the characteristic parameters in FIG. 2 provided by an embodiment of this application;
FIG. 7 is a schematic flowchart of the machine learning model training method provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of the data processing apparatus for attention deficit hyperactivity disorder provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of the terminal device provided by an embodiment of this application;
FIG. 10 is a block diagram of part of the structure of the computer provided by an embodiment of this application.
Embodiments of the Invention
In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of this application. However, it should be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary details do not obscure the description of this application.
It should be understood that, when used in the specification and appended claims of this application, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.
It should also be understood that the term "and/or" used in the specification and appended claims of this application refers to and includes any and all possible combinations of one or more of the associated listed items.
References to "one embodiment" or "some embodiments" and the like in the specification of this application mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in yet other embodiments", etc., appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless otherwise specifically emphasized. The terms "comprising", "including", "having" and their variations all mean "including but not limited to" unless otherwise specifically emphasized.
To illustrate the technical solutions described in this application, a detailed description is given below with reference to the specific drawings and embodiments.
FIG. 1 is a schematic diagram of an application scenario of the data processing method for attention deficit hyperactivity disorder provided by an embodiment of this application. The above method can be used to evaluate a subject for ADHD. The terminal device 20 is used to acquire test data of the tested person 10 completing tasks in the virtual reality environment, analyze and evaluate the test data, and finally produce a classification result; a doctor can judge from the classification result of the terminal device 20 whether the tested person 10 has attention deficit hyperactivity disorder and, if so, its subtype.
The data processing method for attention deficit hyperactivity disorder of the embodiments of this application is described in detail below with reference to FIG. 1.
FIG. 2 shows a schematic flowchart of the data processing method for attention deficit hyperactivity disorder provided by this application. Referring to FIG. 2, the method is detailed as follows:
S101: Acquire input data of a subject completing a task in a virtual reality environment.
In this embodiment, the VR system stores entertaining games capable of distinguishing ADHD from normal subjects, such as spotting the differences between two scenes, archery, and recognizing facial expressions in social situations. By completing the game tasks in VR, different input data, reflecting attention, hyperactivity, and impulsivity, that can classify ADHD can be collected while the subject completes the tasks. A task can contain multiple rounds of a game; for example, a spot-the-difference task can include scene differences of different difficulty, and an expression recognition task can test expressions of different complexity.
As examples:
A VR spot-the-difference scene evaluates spatial working memory, selective attention, and visual information search. Finding the "differences" in a scene is goal-directed behavior that requires attention and is influenced by gaze position and fixation latency. ADHD patients are weaker than normal subjects at detecting changes and easily overlook subtle changes, mainly because of deficits in voluntary eye movement control and attention. When designing the game, static and dynamic everyday or sports scenes can be set, for example with colors appearing or missing, or positions changed, and the subject is asked to find the differences. ADHD patients may answer faster than normal subjects, but identify the differences less accurately and make more errors.
A VR fixed-target archery scene measures attention concentration and persistence. Audio-visual distractors can be added; in the distracting environment, static shooting at a fixed target requires gazing at the bullseye, and within the allotted time, the closer the duration of gazing at the bullseye is to the prescribed time, the higher the ring score.
A VR moving-target scene implements a continuous performance test (CPT) task, which requires the subject to respond as quickly as possible to target stimuli and not respond to non-target stimuli, calling on auditory and visual selectivity. In the distracting environment, visual, auditory, and audio-visual spatial distractors can be set, such as birds, rabbits, and gophers; target objects, such as flying discs, are randomly thrown into the air, and the subject is required to shoot them.
VR recognition of facial expressions in social situations can include emotion recognition at two levels of difficulty: 1) recognition of static and dynamic expressions, recognition of positive and negative expressions (negative emotions include anger, sadness, fear, etc.), and recognition of emotions at different intensities (e.g., the four intensities 30%, 50%, 70%, 100%); 2) situational VR tasks: emotion recognition and processing in social situations, for example visual attention and emotion recognition in interpersonal interaction, where complex and subtle emotional changes in the situation require better visual attention to interpret and discriminate. The subject can respond by pointing or in natural language, allowing objective examination of expression (emotion) recognition, theory of mind, selective attention, choice reaction time, visual information search, and so on.
Different VR scene tasks focus on evaluating different ADHD characteristics such as attention and hyperactivity-impulsivity. At least one VR scene can be selected, and each scene can yield at least one set of input data. Input data include: task performance data, motion sensing data, eye tracking data, and EEG data.
As shown in FIG. 3, in one possible implementation, step S101 may include:
S1011: Acquire the task performance data under the current task through a gesture tracker and/or a language processing device.
In this embodiment, a target stimulus refers to an instruction or content of a task in VR; for example, in finding facial expressions in a social situation, the specified expression is the target stimulus; in a disc shooting game, the flying disc is the target stimulus.
Task performance data can be obtained through a gesture tracker. For example, when the subject points at a smiling face in a scene, the gesture tracker can determine whether the subject completed the task correctly, and the hand movement determines the subject's response start time when a target stimulus occurs. If a target stimulus occurs but the subject's hand does not move, the subject did not respond to the current target stimulus.
Task performance data can also be obtained through a language processing device. For example, to find a smiling face in a scene, the subject can say the position or number of the smiling face in natural language; the computer acquires the subject's speech and, through conversion and recognition, finally determines whether the subject completed the task correctly.
Optionally, gesture tracking can be optical, such as Leap Motion (a somatosensory controller), or inertial, such as a data glove with sensors worn on the hand. With hand motion capture technology, hand movements can be tracked without handheld devices or data gloves, enabling natural interaction with the virtual scene.
S1012: Acquire the motion sensing data collected by a motion recorder under the current task.
In this embodiment, ADHD is defined in terms of the diagnosed person's bodily movement behavior, and the stability of the subject's body posture is one of the criteria for judging whether the subject has ADHD. Existing approaches typically collect the diagnosed person's body movement data over a period of time and compare the differences between the diagnosed person's and normal people's movement data to judge whether the person is affected.
Optionally, the motion recorder can capture movement by optical or inertial tracking, and can be implemented as a wearable device or a scene depth analysis solution. The scene depth analysis solution analyzes the depth information of the scene from optical signals received by an optical sensor to determine the subject's body position and posture; the wearable device fixes sensors on the subject's joints or key points, and obtains the subject's body movement by measuring position changes or bending of the joints or key points, where the key points can include the head, wrists, ankles, etc. The motion recorder can record the subject's body movement data with two kinds of devices: an accelerometer device, which records tri-axial activity with a three-axis accelerometer to obtain motion and inertial measurement data; and an infrared optical position detection and analysis system, a moving-object analysis system based on optical sensors and stereometry.
S1013: Acquire the eye tracking data collected by an eye tracking device under the current task.
In this embodiment, eye movement can represent fixation time and fixation direction. An eye tracking device can obtain eye tracking data such as the direction, time, and order of the individual's gaze, and thereby characteristic parameters such as fixation counts, fixation time, visual scan path, and visual scanning strategy. The eye tracking device can objectively record visual attention and visual search patterns, providing evaluation indicators that distinguish the visual attention and visual search patterns of ADHD patients from those of normal people.
In this embodiment, the eye tracking device can be integrated into the VR headset, mainly for tracking the eyeball and estimating gaze, so as to collect the time the eye fixates on a point, the time at which the eye moves, the order of eye movements, and so on.
S1014: Acquire the EEG data collected by an EEG acquisition device under the current task.
In this embodiment, the EEG acquisition device can collect the subject's brain electrical responses to target stimuli, that is, event-related potentials. Common external stimuli are video and audio; common internal stimuli are tasks related to attention, decision-making, and working memory, called mental tasks.
Specifically, the EEG data are the electroencephalogram (EEG) and EEG-evoked event-related potentials (ERP), which can include the EEG α, β, and θ waves, the P200 and P300 potentials, the spectral peak around 11 Hz, the P2-N2 peak-to-peak value, and so on.
In this embodiment, the EEG of ADHD patients shows many abnormalities. Compared with normal subjects, ADHD patients have more brain wave activity, especially in the frontal region; ADHD patients have a larger P2 component and a smaller N2 component. The P2-N2 peak-to-peak value and the spectral peak around 11 Hz of ADHD patients both show specific manifestations. Therefore, the collection and study of EEG data can serve as an indicator for evaluating ADHD patients.
S102: Calculate characteristic parameters of the subject based on the input data.
As shown in FIG. 4, in one possible implementation, when the input data are task performance data, step S102 may include:
Specifically, the task performance data may include: the number of the subject's correct responses to target stimuli, the subject's response start time when a target stimulus occurs, and the number of times the subject fails to respond when a target stimulus occurs, where one task includes at least one target stimulus.
In this embodiment, the number of correct responses refers to the total number of times the subject correctly completes the instruction required by a target stimulus displayed in VR; for example, if the task is to find the smiling face in a social situation and the subject correctly points it out, the subject has responded correctly to the target stimulus.
S1021: Calculate the ratio of the number of correct responses to the total number of target stimuli in the current task to obtain the correct rate among the characteristic parameters.
In this embodiment, the correct rate reflects the subject's concentration: the higher the correct rate, the more focused the subject. The total number of target stimuli refers to the number of instructions to be completed; for example, if a spot-the-difference task has three pairs of scenes and the differences of each pair must be found, the total number of target stimuli is three; in the disc shooting task, if there are 15 discs in total, the total is 15.
As an example, the correct rate is calculated as: a = A/Z;
where a is the correct rate, A is the number of correct responses, and Z is the total number of target stimuli.
S1022: Calculate the difference between the response start time of each correct response and the start time of the target stimulus, and calculate the reaction time standard deviation among the characteristic parameters based on the differences.
In this embodiment, the reaction time standard deviation is the standard deviation of the above differences. By calculating the differences between the response start times of correct responses and the target stimulus start times, the reaction time standard deviation can be computed; the reaction time standard deviation is a measure of attention.
The response start time is when the subject begins to respond to the current target stimulus; for example, the subject's response can include an action or speech.
S1023: Record, as the number of errors among the characteristic parameters, the number of times in the current task that the response start time precedes the corresponding target stimulus start time.
In this embodiment, the number of errors includes the number of times the subject responds before the target stimulus appears. A small difference between response start time and target stimulus time indicates a short reaction time. If the subject's reaction time is short but the number of errors is high, the subject's impulsivity is pronounced; if the reaction time is long and the number of errors is high, inattention is pronounced.
S1024: Record the number of times the subject fails to respond when target stimuli occur as the number of missed reports among the characteristic parameters.
In this embodiment, if, when a target stimulus occurs, the subject makes no response either before or after the stimulus starts, that is, gives no answer at all, the subject has not responded to the current target stimulus.
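As an illustrative, non-limiting sketch, the calculations of steps S1021 to S1024 can be expressed in Python as follows; the event-log layout (stimulus start time, response start time, correctness flag) is an assumption of this example rather than part of the embodiment:

```python
import statistics

def task_performance_features(events, total_stimuli):
    """Compute correct rate, reaction-time SD, error count, and miss count
    from (stimulus_start, response_start, correct) tuples.
    response_start is None when the subject did not respond at all."""
    correct_rts = []
    errors = 0   # responses that began before the stimulus appeared (S1023)
    misses = 0   # stimuli with no response at all (S1024)
    correct = 0
    for stim_start, resp_start, is_correct in events:
        if resp_start is None:
            misses += 1
            continue
        if resp_start < stim_start:
            errors += 1
            continue
        if is_correct:
            correct += 1
            correct_rts.append(resp_start - stim_start)
    correct_rate = correct / total_stimuli  # S1021: a = A / Z
    rt_sd = statistics.stdev(correct_rts) if len(correct_rts) > 1 else 0.0  # S1022
    return correct_rate, rt_sd, errors, misses

events = [(0.0, 0.4, True), (2.0, 2.6, True), (4.0, 3.9, True), (6.0, None, False)]
print(task_performance_features(events, 4))
```

Here a response logged before its stimulus counts toward the number of errors, and an absent response toward the number of missed reports, matching S1023 and S1024.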
As shown in FIG. 5, in one possible implementation, when the input data are motion sensing data, step S102 may include:
Specifically, the motion sensing data may include: the subject's static time periods, the subject's action times when changing actions, the position coordinates of the motion path recorded by the motion recorder each time the subject moves, the subject's motion paths while completing the task, and the number of motion paths.
In this embodiment, the motion sensing data can be the subject's movement data recorded during the current task, or the movement data recorded across multiple tasks during the whole test.
S1025: Sum the static time periods and determine the static duration among the characteristic parameters from the quotient of the sum and the number of static time periods.
In this embodiment, the static duration refers to the subject's average length of the static time periods.
S1026: Record the number of action times found within a preset time period as the number of movements among the characteristic parameters.
S1027: Based on the position coordinates, calculate the area covered by each of the subject's movements, and record the sum of all these areas as the subject's movement area among the characteristic parameters.
In this embodiment, the subject's movement area can include the total movement area covered by the motion sensor device through movement.
S1028: Sum the intersection points of all the motion paths and determine the motion complexity among the characteristic parameters from the quotient of the sum and the number of motion paths.
In this embodiment, lower motion complexity indicates that the subject's motion paths during the test tend to be simple and linear, while higher motion complexity indicates complex, tangled motion paths.
S1029: Based on the position coordinates, calculate the subject's displacement in completing the task among the characteristic parameters.
In this embodiment, since a task can include multiple target stimuli and each target stimulus has a displacement, the subject's displacement in completing the task can include the subject's total displacement in completing the current task, or the total displacement in completing the whole test.
Optionally, the characteristic parameters can also include a time scale, which reflects the subject's degree of activity: the subject's state is recorded as 1 when moving and 0 when still, and the time scale can be computed as the ratio of the count of 1s to the count of 0s.
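The motion-sensing parameters of steps S1025 and S1028 and the optional time scale can likewise be sketched; the data layout (per-interval durations, a total crossing count, per-sample activity flags) is assumed for illustration only:

```python
def motion_features(static_periods, path_crossings, n_paths, state_samples):
    """static_periods: durations (s) of each still interval (S1025);
    path_crossings: total intersection points over all motion paths (S1028);
    state_samples: per-sample activity flags, 1 = moving, 0 = still."""
    static_duration = sum(static_periods) / len(static_periods)  # S1025
    motion_complexity = path_crossings / n_paths                 # S1028
    moving = sum(state_samples)
    still = len(state_samples) - moving
    time_scale = moving / still if still else float("inf")       # ratio of 1s to 0s
    return static_duration, motion_complexity, time_scale

print(motion_features([2.0, 4.0], path_crossings=6, n_paths=3,
                      state_samples=[1, 0, 1, 1, 0, 0]))
```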
As shown in FIG. 6, in one possible implementation, when the input data are eye tracking data, step S102 may include:
Specifically, the eye tracking data include the subject's eyeball coordinates, the time the eyes fixate on target stimuli, and the order in which the eyes fixate on target stimuli.
S10210: Based on the eyeball coordinates, determine the subject's number of fixations on each target stimulus among the characteristic parameters.
In this embodiment, the eyeball coordinates can be used to determine the dwell time and dwell count of the eye at a given position; therefore, by counting the occurrences of eyeball coordinates, the number of the subject's fixations on a target stimulus can be counted.
S10211: Record the time the eyes fixate on the target stimulus as the fixation time among the characteristic parameters.
S10212: Based on the order in which the eyes fixate on target stimuli, determine the subject's visual scan path among the characteristic parameters.
In this embodiment, the order in which the subject views target stimuli reveals the subject's visual scan path, that is, what the subject looks at first and what next. Analyzing the visual scan path reveals the subject's viewing tendency toward target stimuli. For example, in a task of distinguishing pictures of smiling and crying faces, if the subject's scan path goes from the smiling face to the crying face, the subject tends to observe smiling faces; if the subject looks at the crying face first and then the smiling face, the subject has an attentional bias toward crying faces.
S10213: Based on the visual scan path and the fixation time, obtain the subject's visual scanning strategy among the characteristic parameters.
In this embodiment, the visual scanning strategy reflects the subject's dwell time on target stimuli and sensitivity to them. For example, in the VR spot-the-difference task, normal subjects fixate on the changed regions more often and for longer, and their first fixations are longer than those of ADHD subjects, whereas ADHD subjects spend more time gazing at the whole scene.
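A minimal sketch of steps S10210 to S10212 follows; the rectangular areas of interest and the gaze-sample layout are assumptions of this example, not part of the embodiment:

```python
def gaze_features(samples, aois):
    """samples: time-ordered (x, y) gaze coordinates;
    aois: {name: (xmin, ymin, xmax, ymax)} areas of interest (target stimuli).
    Returns fixation counts per AOI (S10210) and the visual scan path (S10212)."""
    counts = {name: 0 for name in aois}
    scan_path = []
    for x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                # record the order in which the gaze moves between stimuli
                if not scan_path or scan_path[-1] != name:
                    scan_path.append(name)
    return counts, scan_path

aois = {"smile": (0, 0, 10, 10), "cry": (20, 0, 30, 10)}
samples = [(5, 5), (6, 4), (25, 5), (5, 6)]
print(gaze_features(samples, aois))
```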
In one possible implementation, when the input data are EEG data, step S102 may include:
Based on the EEG data, obtain time-frequency domain features, P300 features, and the like among the characteristic parameters.
In this embodiment, the EEG signal is preprocessed; preprocessing mainly includes data inspection, band-pass filtering, artifact removal, and segmentation. After preprocessing, time-frequency domain features are extracted as EEG characteristic parameters; features such as the peaks, latencies, and means of the P200 and P300 potentials are extracted as EEG characteristic parameters, and so on.
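The band-power part of this time-frequency feature extraction can be sketched with a discrete Fourier transform; the sampling rate, band edges, and synthetic signal below are illustrative assumptions, and the other preprocessing steps (artifact removal, segmentation) are omitted:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` between f_lo and f_hi Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(spectrum[mask].mean())

fs = 256                     # assumed sampling rate (Hz)
t = np.arange(fs * 2) / fs   # 2 s of synthetic "EEG"
sig = np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)

theta = band_power(sig, fs, 4, 8)    # θ band power
beta = band_power(sig, fs, 13, 30)   # β band power
print(theta > beta)                  # the dominant 6 Hz component lies in θ
```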
S103: Output a classification result based on the characteristic parameters and the trained machine learning model.
In this embodiment, the characteristic parameters are input into the trained machine learning model, which automatically outputs a classification result; a doctor can judge from the classification result whether the subject is an ADHD patient and which ADHD type applies.
As an example, the output can be set to four types: inattentive, hyperactive-impulsive, combined, and normal.
It should be noted that, in addition to the common characteristic parameters above, different specific characteristic parameters can be obtained from different VR game task scenes.
As examples:
1) In the above VR task of recognizing facial expressions in social situations, the characteristic parameters can also include:
From the task performance data: choice reaction time, number of correct responses, number of errors, error rate, etc.
From the eye tracking data: entry time into areas of interest (ET), first fixation time (FFT), total fixation time on areas of interest (FT), number of fixation points, etc. The proportion of fixation time on each area of interest can be analyzed to derive the bias toward that area, reflecting the attentional bias value. Studies have found that ADHD patients attend to happy faces for a shorter time than normal subjects and to neutral faces for longer; when presented with a happy-unhappy face pair, ADHD patients' attention is biased toward the unhappy face while normal subjects' is biased toward the happy face. ADHD patients attend more to the mouth of an emotional face than to the eyes, perhaps because the opening and closing of the mouth conveys clear positive or negative emotion while reading the eyes is much harder; in social situations, ADHD patients do not attend to others' facial and body-language cues when others are angry.
From the EEG data: the face-expression processing components of ADHD patients differ from those of normal subjects. Processing of all faces is weakened in ADHD patients: under happy, neutral, angry, and fearful expressions, the P100 amplitudes over both occipital regions and the N170 amplitudes over both occipital and temporal regions are lower than in normal subjects. In ADHD patients there is no significant difference in P100 or N170 amplitude across expressions, whereas in normal subjects the left-occipital P100 evoked by happy, angry, and fearful faces is significantly higher than by neutral faces, and the left-temporal N170 evoked by fearful faces is significantly higher than by neutral faces.
2) In the above VR archery game scene, the characteristic parameters can also include:
From the task performance data: target score, correct response rate, incorrect response rate, miss rate, etc.
From the motion sensing data: accelerated reaction time and its variability. ADHD patients have cognitive deficits; when targets are thrown at shorter and denser intervals, their ability to respond rapidly declines and omissions increase. Indicators for evaluating hyperactivity include: total still and active time, average number of action changes within a period, movement distance/path, and movement area. Indicators for evaluating impulsivity include: number of errors, error rate, etc.
From the eye tracking data: total fixation time on areas of interest (FT), number of times the eyes go off target, off-target time, etc.
3) In the above VR spot-the-difference scene, the characteristic parameters can also include:
From the eye tracking data: number of fixations on the changed region, fixation time on the changed region, time of first fixation on the changed region, areas of interest, etc.
As shown in FIG. 2, in one possible implementation, the above method can further include:
S201: Train a machine learning model based on input samples to obtain the trained machine learning model.
In this embodiment, the machine learning models can include classical machine learning (e.g., support vector machines (SVM), artificial neural networks (ANN)) and deep learning-based algorithms, for example SVM models, convolutional neural network (CNN) models, and models trained and optimized based on Caffe. The development environment can use TensorFlow, Python, MXNet, Torch, Theano, CNTK, CUDA, cuDNN, the Caffe open-source library, the LibSVM toolbox, and so on.
As shown in FIG. 7, in one possible implementation, step S201 may include:
S2011: Take at least one indicator among the characteristic parameters obtained when tested persons complete tasks in the virtual reality environment as input parameters, where the tested persons include normal subjects and diseased subjects;
S2012: Take the input parameters as input samples, and train the machine learning model based on the input samples to obtain the trained machine learning model.
In this embodiment, an input parameter can be at least one indicator among the characteristic parameters obtained when a tested person completes at least one task in VR.
In this embodiment, the machine learning model can include a support vector machine, a convolutional neural network, or the like. The input samples include data collected from a certain number of ADHD patients and a certain number of normal subjects completing tasks in VR, where ADHD includes the three types ADHD-I, ADHD-H, and ADHD-C.
Specifically, when the machine learning model is a support vector machine, training the machine learning model includes:
For example, selecting one kind of characteristic parameter as input: an SVM recursive feature elimination (SVM-RFE) classification method with the time-frequency domain features of EEG evoked potentials as input. Time-frequency domain features are extracted from the EEG evoked potentials recorded while the tested person plays the relevant game (task), and a support vector machine classifies the features to predict the individual's class.
For example, selecting multiple kinds of characteristic parameters as input with an ensemble learning method: ensemble learning combines single classifiers, and multiple kernel learning (MKL) combines multiple kernel functions, which can significantly improve classification ability. For the multimodal data (task performance data, motion sensing data, eye tracking data, EEG data, etc.), a multi-kernel classifier is trained by fusing the four kinds of features based on MKL. Using an SVM with nested cross validation, the various data are preprocessed, features are extracted and selected, and classification is finally performed.
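The recursive elimination idea behind SVM-RFE can be sketched as follows. Where scikit-learn is available, `sklearn.feature_selection.RFE` with a linear `SVC` implements it directly; this dependency-light sketch therefore substitutes a ridge-regularised least-squares scorer for the SVM and uses synthetic data, so it illustrates only the elimination loop, not the patented method:

```python
import numpy as np

def rfe_rank(X, y, keep=1):
    """Recursive feature elimination (the SVM-RFE idea): repeatedly fit a
    linear scorer and drop the feature with the smallest weight magnitude.
    Ridge-regularised least squares stands in here for the linear SVM."""
    active = list(range(X.shape[1]))
    dropped = []
    while len(active) > keep:
        A = X[:, active]
        w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(len(active)), A.T @ y)
        dropped.append(active.pop(int(np.argmin(np.abs(w)))))
    return active, dropped[::-1]  # survivors, then dropped features, best first

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = np.sign(2.0 * X[:, 2] + 0.1 * rng.normal(size=200))  # only feature 2 is informative
survivors, eliminated = rfe_rank(X, y, keep=1)
print(survivors)
```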
Specifically, deep learning can learn features through a deep non-linear network structure and form more abstract deep representations (attribute categories or features) by combining low-level features, achieving complex function approximation and thereby learning the essential features of the data set. When the machine learning model is a convolutional neural network, training the machine learning model includes:
Multiple nodes in the input layer take task performance data, motion sensing data, eye tracking data, EEG data, etc. as the input vector X1…Xn (features); the output layer consists of four neurons: normal, ADHD-I, ADHD-H, and ADHD-C. Each node applies a nonlinear transformation to its input through an activation function, and the signal can pass through multiple hidden layers to the output. The output of the convolutional neural network is compared with the actual or expected output, and the mean square error measures the gap between the predicted and true values; according to the size of the error, the back-propagation (BP) algorithm updates the connection weights w and biases b, and iterative minimization of the loss function brings each predicted value closer and closer to the true value, until the loss function value no longer changes, for example until the error is close to 0.001. For multi-class problems, the Softmax loss function can be used. Finally, after the training stage, the weights are fixed at their final values to obtain the trained convolutional neural network.
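The training loop described above (forward pass through an activation, loss evaluation, back-propagation of w and b, iteration until the loss stabilises) can be sketched for a single Softmax output layer with the four output neurons named above. The synthetic data, layer sizes, and learning rate are illustrative assumptions, and the convolutional and hidden layers are collapsed into one linear layer for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, n_cls = 8, 4                        # inputs X1..Xn; outputs normal/ADHD-I/H/C
proto = rng.normal(size=(n_cls, n_feat))    # synthetic class prototypes (toy data)
y = rng.integers(0, n_cls, size=120)
X = rng.normal(size=(120, n_feat)) + 3.0 * proto[y]

W = np.zeros((n_feat, n_cls))               # connection weights w
b = np.zeros(n_cls)                         # biases b
onehot = np.eye(n_cls)[y]
for step in range(500):                     # iterate until the loss stabilises
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)       # Softmax over the four output neurons
    grad = (p - onehot) / len(X)            # gradient of the Softmax loss
    W -= 0.5 * (X.T @ grad)                 # back-propagation updates w ...
    b -= 0.5 * grad.sum(axis=0)             # ... and b
accuracy = float((p.argmax(axis=1) == y).mean())
print(accuracy)
```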
Different VR scene tasks focus on evaluating different ADHD characteristics such as attention and hyperactivity-impulsivity; the input parameters of the machine learning algorithm are multiple indicators among the characteristic parameters obtained when the tested person completes multiple tasks in VR.
For example, the classification processes for the various modality parameters can be unified into a single CNN (convolutional neural network) structure. This network fuses the parameters of each individual VR scene task into a CNN ADHD classification network, likewise a CNN model composed of a series of convolution, pooling, and ReLU activation layers. A network fusion method based on a point-wise gated Boltzmann machine (PGBM) can be used: the last-layer feature vectors of two or more CNNs for the VR scenes are spliced together, and the spliced feature vector is used as the input of the visible layer of the PGBM part, which is trained with the contrastive divergence method. Using the network connection weights obtained from training, the feature representation of the task-related part of the spliced feature vector can be obtained; this feature representation serves as the input of a newly added fully connected layer, which is then trained. Likewise, the back-propagation depth of the network is limited to the newly added fully connected layer, and the Softmax loss function is again used to guide the network training process.
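The fusion scheme can be sketched as follows: the last-layer feature vectors of two stand-in backbones are spliced, and only the newly added fully connected layer is trained, so back-propagation stops at that layer and the backbones receive no gradient. The PGBM pre-training with contrastive divergence is omitted here and a synthetic "task-related" signal is injected in its place, so this is a sketch of the splicing and limited back-propagation depth only:

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

# Stand-ins for the frozen last layers of two per-scene CNNs:
W1 = rng.normal(size=(6, 5))   # scene-1 backbone (frozen)
W2 = rng.normal(size=(6, 5))   # scene-2 backbone (frozen)

X = rng.normal(size=(150, 6))
y = rng.integers(0, 4, size=150)
f = np.concatenate([relu(X @ W1), relu(X @ W2)], axis=1)  # spliced feature vector
f += 3.0 * (np.eye(4)[y] @ rng.normal(size=(4, 10)))      # toy task-related signal

# Newly added fully connected layer: the only trainable part, so
# back-propagation stops here and the backbones are never updated.
Wf, bf = np.zeros((10, 4)), np.zeros(4)
onehot = np.eye(4)[y]
for _ in range(400):
    logits = f @ Wf + bf
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)        # Softmax loss guides the training
    g = (p - onehot) / len(f)
    Wf -= 0.5 * (f.T @ g)
    bf -= 0.5 * g.sum(axis=0)
acc = float((p.argmax(axis=1) == y).mean())
print(acc)
```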
It should be understood that the size of the sequence numbers of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
Corresponding to the data processing method for attention deficit hyperactivity disorder described in the above embodiments, FIG. 8 shows a structural block diagram of the data processing apparatus for attention deficit hyperactivity disorder provided by an embodiment of this application. For ease of description, only the parts related to the embodiments of this application are shown.
Referring to FIG. 8, the apparatus 100 may include: a data acquisition module 110, a data calculation module 120, and a result output module 130.
The data acquisition module 110 is configured to acquire input data of a subject completing a task in the virtual reality environment;
the data calculation module 120 is configured to calculate characteristic parameters of the subject based on the input data;
the result output module 130 is configured to output a classification result based on the characteristic parameters and the trained machine learning model.
In one possible implementation, at least one game scene is stored in the virtual reality environment, and the subject completes a task in the game scene.
In one possible implementation, the input data include at least one of task performance data, motion sensing data, eye tracking data, and EEG data; the data acquisition module 110 may be specifically configured to:
acquire the task performance data under the current task through a gesture tracker and/or a language processing device;
acquire the motion sensing data collected by the motion recorder under the current task;
acquire the eye tracking data collected by the eye tracking device under the current task;
acquire the EEG data collected by the EEG acquisition device under the current task.
In one possible implementation, the task performance data include: the number of the subject's correct responses to target stimuli, the subject's response start time when a target stimulus occurs, and the number of times the subject fails to respond when a target stimulus occurs, where one task includes at least one target stimulus; when the input data are task performance data, the data calculation module 120 may be specifically configured to:
calculate the ratio of the number of correct responses to the total number of target stimuli in the current task to obtain the correct rate among the characteristic parameters;
calculate the differences between the response start times of correct responses and the target stimulus start times, and calculate the reaction time standard deviation among the characteristic parameters based on the differences;
record, as the number of errors among the characteristic parameters, the number of times in the current task that the response start time precedes the corresponding target stimulus start time;
record the number of times the subject fails to respond when target stimuli occur as the number of missed reports among the characteristic parameters.
In one possible implementation, the motion sensing data include: the subject's static time periods, the subject's action times when changing actions, the position coordinates of the motion path recorded by the motion recorder each time the subject moves, the subject's motion paths while completing the task, and the number of motion paths; when the input data are motion sensing data, the data calculation module 120 may be specifically configured to:
sum the static time periods and determine the static duration among the characteristic parameters from the quotient of the sum and the number of static time periods;
record the number of action times found within a preset time period as the number of movements among the characteristic parameters;
based on the position coordinates, calculate the area covered by each of the subject's movements and record the sum of all the areas as the subject's movement area among the characteristic parameters;
sum the intersection points of all the motion paths and determine the motion complexity among the characteristic parameters from the quotient of the sum and the number of motion paths;
based on the position coordinates, calculate the subject's displacement in completing the task among the characteristic parameters.
In one possible implementation, the eye tracking data include the subject's eyeball coordinates, the time the eyes fixate on target stimuli, and the order in which the eyes fixate on target stimuli; when the input data are eye tracking data, the data calculation module 120 may be specifically configured to:
based on the eyeball coordinates, determine the subject's number of fixations on each target stimulus among the characteristic parameters;
record the time the eyes fixate on the target stimulus as the fixation time among the characteristic parameters;
based on the order in which the eyes fixate on target stimuli, determine the subject's visual scan path among the characteristic parameters;
based on the visual scan path and the fixation time, obtain the subject's visual scanning strategy among the characteristic parameters.
In one possible implementation, the apparatus 100 further includes:
a training module configured to train a machine learning model based on input samples to obtain the trained machine learning model.
In one possible implementation, the training module may be specifically configured to:
take at least one indicator among the characteristic parameters obtained when tested persons complete at least one task in the virtual reality environment as one input parameter, where the tested persons include normal subjects and diseased subjects; and
take all the input parameters as input samples and train the machine learning model based on the input samples to obtain the trained machine learning model.
It should be noted that, because the information exchange and execution processes between the above apparatus/units are based on the same concept as the method embodiments of this application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is only used as an example. In practical applications, the above functions can be allocated to different functional units and modules as required; that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit; the integrated units can be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference can be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
本申请实施例还提供了一种终端设备,参见图9,该终端设400可以包括:至少一个处理器410、存储器420以及存储在所述存储器420中并可在所述至少一个处理器410上运行的计算机程序,所述处理器410执行所述计算机程序时实现上述任意各个方法实施例中的步骤,例如图2所示实施例中的步骤S101至步骤S103。或者,处理器410执行所述计算机程序时实现上述各装置实施例中各模块/单元的功能,例如图8所示模块110至130的功能。
示例性的,计算机程序可以被分割成一个或多个模块/单元,一个或者多个模块/单元被存储在存储器420中,并由处理器410执行,以完成本申请。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序段,该程序段用于描述计算机程序在终端设备400中的执行过程。
本领域技术人员可以理解,图9仅仅是终端设备的示例,并不构成对终端设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如输入输出设备、网络接入设备、总线等。
The processor 410 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 420 may be an internal storage unit of the terminal device, or an external storage device of the terminal device, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card. The memory 420 is used to store the computer program and other programs and data required by the terminal device. The memory 420 may also be used to temporarily store data that has been output or is about to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, and so on. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, the buses in the drawings of this application are not limited to only one bus or one type of bus.
The data processing method for attention deficit hyperactivity disorder provided in the embodiments of this application can be applied to terminal devices such as computers, tablets, laptops, netbooks, and personal digital assistants (PDAs); the embodiments of this application place no restriction on the specific type of terminal device.
Take the terminal device being a computer as an example. FIG. 10 is a block diagram of part of the structure of the computer provided in an embodiment of this application. Referring to FIG. 10, the computer includes components such as a communication circuit 510, a memory 520, an input unit 530, a display unit 540, an audio circuit 550, a wireless fidelity (WiFi) module 560, a processor 570, and a power supply 580.
Each component of the computer is described in detail below with reference to FIG. 10:
The communication circuit 510 may be used for receiving and sending signals when receiving and sending information or during a call; in particular, after receiving image samples sent by an image acquisition device, it passes them to the processor 570 for processing, and it also sends image acquisition instructions to the image acquisition device. Typically, the communication circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and so on. In addition, the communication circuit 510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
The memory 520 may be used to store software programs and modules; the processor 570 executes the computer's various functional applications and data processing by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function and an image playback function), and the data storage area may store data created through use of the computer (such as audio data and a phone book). In addition, the memory 520 may include high-speed random access memory, and may further include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 530 may be used to receive input digital or character information and to generate key signal inputs related to subject settings and function control of the computer. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also called a touch screen, can collect the subject's touch operations on or near it (for example, operations performed by the subject on or near the touch panel 531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected apparatus according to a preset program. Optionally, the touch panel 531 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the subject's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 570, and can receive and execute commands sent by the processor 570. In addition, the touch panel 531 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 531, the input unit 530 may also include other input devices 532, which may specifically include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and so on.
The display unit 540 may be used to display information input by the subject, information provided to the subject, and the computer's various menus. The display unit 540 may include a display panel 541, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, and so on. Further, the touch panel 531 may cover the display panel 541; when the touch panel 531 detects a touch operation on or near it, it passes the operation to the processor 570 to determine the type of touch event, and the processor 570 then provides the corresponding visual output on the display panel 541 according to the type of touch event. Although in FIG. 10 the touch panel 531 and the display panel 541 are implemented as two separate components for the computer's input and output functions, in some embodiments the touch panel 531 and the display panel 541 may be integrated to implement the computer's input and output functions.
The audio circuit 550 may provide an audio interface between the subject and the computer. The audio circuit 550 may transmit the electrical signal converted from received audio data to a loudspeaker, which converts it into a sound signal for output; on the other hand, a microphone converts collected sound signals into electrical signals, which the audio circuit 550 receives and converts into audio data; after the audio data is output to the processor 570 for processing, it is sent via the communication circuit 510 to, for example, another computer, or the audio data is output to the memory 520 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 560, the computer can help the subject send and receive e-mail, browse web pages, access streaming media, and so on; it provides the subject with wireless broadband Internet access. Although FIG. 10 shows the WiFi module 560, it is understood that it is not an essential part of the computer and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 570 is the control center of the computer; it connects all parts of the entire computer using various interfaces and lines, and performs the computer's various functions and processes data by running or executing the software programs and/or modules stored in the memory 520 and invoking the data stored in the memory 520, thereby monitoring the computer as a whole. Optionally, the processor 570 may include one or more processing units; optionally, the processor 570 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the subject interface, application programs, and so on, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 570.
The computer also includes a power supply 580 (such as a battery) that powers the components. Optionally, the power supply 580 may be logically connected to the processor 570 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
An embodiment of this application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the embodiments of the data processing method for attention deficit hyperactivity disorder above.
An embodiment of this application provides a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the embodiments of the data processing method for attention deficit hyperactivity disorder above.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the method embodiments above may be implemented in this application by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the method embodiments above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, and so on. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in a given embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled practitioners may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative; for example, the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
The embodiments above are intended only to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included within the scope of protection of this application.
The above are only optional embodiments of this application and are not intended to limit it. Those skilled in the art may make various changes and variations to this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within the scope of the claims of this application.

Claims (9)

  1. A data processing method for attention deficit hyperactivity disorder, comprising:
    acquiring input data from a subject completing a task in a virtual reality environment;
    calculating the subject's feature parameters based on the input data; and
    outputting a classification result based on the feature parameters and a trained machine learning model.
  2. The data processing method for attention deficit hyperactivity disorder according to claim 1, wherein at least one game scenario is stored in the virtual reality environment, and the subject completes a task in the game scenario.
  3. The data processing method for attention deficit hyperactivity disorder according to claim 1, wherein the method further comprises:
    training a machine learning model on input samples to obtain the trained machine learning model;
    wherein said training a machine learning model on input samples to obtain the trained machine learning model comprises:
    taking, as an input parameter, at least one indicator among the feature parameters obtained when a test subject completes a task in the virtual reality environment, wherein the test subjects include normal subjects and affected subjects; and
    taking the input parameters as input samples, and training the machine learning model on the input samples to obtain the trained machine learning model.
  4. The data processing method for attention deficit hyperactivity disorder according to claim 1, wherein the input data comprises at least one of task performance data, motion sensing data, eye tracking data, and electroencephalogram (EEG) data;
    said acquiring input data from a subject completing a task in a virtual reality environment comprises:
    acquiring the task performance data for the current task through a gesture tracker and/or a language processing apparatus;
    acquiring the motion sensing data collected by a motion recorder for the current task;
    acquiring the eye tracking data collected by an eye tracking device for the current task; and
    acquiring the EEG data collected by an EEG acquisition apparatus for the current task.
  5. The data processing method for attention deficit hyperactivity disorder according to claim 4, wherein the task performance data comprises: the subject's number of correct responses to target stimuli, the subject's response start times when target stimuli occur, and the number of times the subject fails to respond when a target stimulus occurs, wherein one task comprises at least one target stimulus;
    where the input data is task performance data, said calculating the subject's feature parameters based on the input data comprises:
    calculating the ratio of the number of correct responses to the total number of target stimuli in the current task to obtain the correct rate among the feature parameters;
    calculating the differences between the response start times of correct responses and the target stimulus start times, and calculating the reaction-time standard deviation among the feature parameters based on the differences;
    recording, as the number of errors among the feature parameters, the number of times in the current task that the response start time precedes the corresponding target stimulus start time; and
    recording, as the number of omissions among the feature parameters, the number of times the subject fails to respond when a target stimulus occurs.
  6. The data processing method for attention deficit hyperactivity disorder according to claim 4, wherein the motion sensing data comprises: the subject's stationary time periods, the action times at which the subject changes actions, the position coordinates of the movement path recorded by the motion recorder each time the subject moves, the subject's movement paths during completion of the task, and the number of the movement paths;
    where the input data is motion sensing data, said calculating the subject's feature parameters based on the input data comprises:
    summing the stationary time periods, and determining the stationary duration among the feature parameters from the quotient of the sum and the number of stationary time periods;
    counting the action times that fall within a preset time period, and recording that count as the number of movements among the feature parameters;
    calculating, based on the position coordinates, the area covered by each of the subject's movements, and recording the sum of all such areas as the subject's movement region among the feature parameters;
    summing the intersection points of all the movement paths, and determining the movement complexity among the feature parameters from the quotient of the sum and the number of movement paths; and
    calculating, based on the position coordinates, the subject's displacement in completing the task, as one of the feature parameters.
  7. The data processing method for attention deficit hyperactivity disorder according to claim 4, wherein the eye tracking data comprises the subject's eyeball coordinates, the times at which the eyes fixate on target stimuli, and the order in which the eyes fixate on the target stimuli;
    where the input data is eye tracking data, said calculating the subject's feature parameters based on the input data comprises:
    determining, based on the eyeball coordinates, the number of the subject's fixations on each target stimulus, among the feature parameters;
    recording the times at which the eyes fixate on target stimuli as the fixation times among the feature parameters;
    determining, based on the order in which the eyes fixate on the target stimuli, the subject's visual scan path among the feature parameters; and
    obtaining, based on the visual scan path and the fixation times, the subject's visual scanning strategy among the feature parameters.
  8. A data processing apparatus for attention deficit hyperactivity disorder, comprising:
    a data acquisition module configured to acquire input data from a subject completing a task in a virtual reality environment;
    a data calculation module configured to calculate the subject's feature parameters based on the input data; and
    a result output module configured to output a classification result based on the feature parameters and a trained machine learning model.
  9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the data processing method for attention deficit hyperactivity disorder according to any one of claims 1 to 7.
PCT/CN2020/129452 2019-12-30 2020-11-17 Data processing method, apparatus, and terminal device for attention deficit hyperactivity disorder WO2021135692A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911398269.9 2019-12-30
CN201911398269.9A CN110970130B (zh) 2019-12-30 2019-12-30 Data processing apparatus for attention deficit hyperactivity disorder

Publications (1)

Publication Number Publication Date
WO2021135692A1 true WO2021135692A1 (zh) 2021-07-08

Family

ID=70037418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129452 WO2021135692A1 (zh) 2019-12-30 2020-11-17 Data processing method, apparatus, and terminal device for attention deficit hyperactivity disorder

Country Status (2)

Country Link
CN (1) CN110970130B (zh)
WO (1) WO2021135692A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110970130B (zh) * 2019-12-30 2023-06-27 佛山创视嘉科技有限公司 一种注意缺陷多动障碍的数据处理装置
CN111528867A (zh) * 2020-05-13 2020-08-14 湖州维智信息技术有限公司 用于儿童adhd筛查评估系统的表情特征向量确定方法
CN111528859B (zh) * 2020-05-13 2023-04-18 浙江大学人工智能研究所德清研究院 基于多模态深度学习技术的儿童adhd筛查评估系统
CN111563633A (zh) * 2020-05-15 2020-08-21 上海乂学教育科技有限公司 基于眼动仪的阅读训练系统及方法
CN113435335B (zh) * 2021-06-28 2022-08-12 平安科技(深圳)有限公司 微观表情识别方法、装置、电子设备及存储介质
CN113425293B (zh) * 2021-06-29 2022-10-21 上海交通大学医学院附属新华医院 一种听觉失认障碍评估系统及方法
CN113456075A (zh) * 2021-07-02 2021-10-01 西安中盛凯新技术发展有限责任公司 一种基于眼动追踪与脑波监测技术的专注力评估训练方法
CN113576482B (zh) * 2021-09-28 2022-01-18 之江实验室 一种基于复合表情加工的注意偏向训练评估系统和方法
CN114743618A (zh) * 2022-03-22 2022-07-12 湖南心康医学科技有限公司 一种基于人工智能的认知功能障碍治疗系统及方法
TWI831178B (zh) * 2022-04-13 2024-02-01 國立中央大學 注意力不足過動症的分析裝置、診斷系統及分析方法
CN117198537B (zh) * 2023-11-07 2024-03-26 北京无疆脑智科技有限公司 任务完成数据分析方法、装置、电子设备和存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050216243A1 (en) * 2004-03-02 2005-09-29 Simon Graham Computer-simulated virtual reality environments for evaluation of neurobehavioral performance
US20080280276A1 (en) * 2007-05-09 2008-11-13 Oregon Health & Science University And Oregon Research Institute Virtual reality tools and techniques for measuring cognitive ability and cognitive impairment
CN103764021A (zh) * 2011-05-20 2014-04-30 南洋理工大学 一种用于协同神经-生理学修复和/或功能提升的系统、仪器、装置和方法
WO2019035910A1 (en) * 2017-08-15 2019-02-21 Akili Interactive Labs, Inc. COGNITIVE PLATFORM COMPRISING COMPUTERIZED ELEMENTS
CN110024014A (zh) * 2016-08-03 2019-07-16 阿克利互动实验室公司 包括计算机化唤起元素的认知平台
CN110070944A (zh) * 2019-05-17 2019-07-30 段新 基于虚拟环境和虚拟角色的社会功能评估训练系统
CN110970130A (zh) * 2019-12-30 2020-04-07 段新 一种注意缺陷多动障碍的数据处理方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7942828B2 (en) * 2000-05-17 2011-05-17 The Mclean Hospital Corporation Method for determining fluctuation in attentional state and overall attentional state
IL148618A0 (en) * 2002-03-11 2002-09-12 Adhd Solutions Ltd A method for diagnosis and treatment of adhd and add, and a system for use thereof
US11839472B2 (en) * 2016-07-19 2023-12-12 Akili Interactive Labs, Inc. Platforms to implement signal detection metrics in adaptive response-deadline procedures
CN107519622A (zh) * 2017-08-21 2017-12-29 南通大学 基于虚拟现实与眼动追踪的空间认知康复训练系统和方法
CN109712710B (zh) * 2018-04-26 2023-06-20 南京大学 一种基于三维眼动特征的婴幼儿发育障碍智能评估方法


Also Published As

Publication number Publication date
CN110970130A (zh) 2020-04-07
CN110970130B (zh) 2023-06-27

Similar Documents

Publication Publication Date Title
CN110970130B (zh) Data processing apparatus for attention deficit hyperactivity disorder
Ahmed et al. A systematic survey on multimodal emotion recognition using learning algorithms
Xu et al. Learning emotions EEG-based recognition and brain activity: A survey study on BCI for intelligent tutoring system
Carneiro et al. Multimodal behavioral analysis for non-invasive stress detection
Huynh et al. Engagemon: Multi-modal engagement sensing for mobile games
Elzeiny et al. Machine learning approaches to automatic stress detection: A review
Conati et al. Modeling user affect from causes and effects
Gervasi et al. Applications of affective computing in human-robot interaction: State-of-art and challenges for manufacturing
Bakhtiyari et al. Fuzzy model of dominance emotions in affective computing
Bakhtiyari et al. Hybrid affective computing—keyboard, mouse and touch screen: from review to experiment
Putze et al. Understanding hci practices and challenges of experiment reporting with brain signals: Towards reproducibility and reuse
Gan et al. Iot-based multimodal analysis for smart education: Current status, challenges and opportunities
Ceneda et al. Show me your face: Towards an automated method to provide timely guidance in visual analytics
Acarturk et al. Gaze aversion in conversational settings: An investigation based on mock job interview
Prakash et al. Computer vision-based assessment of autistic children: Analyzing interactions, emotions, human pose, and life skills
Li et al. A framework for using games for behavioral analysis of autistic children
CN113974589A (zh) 多模态行为范式评估优化系统及认知能力评价方法
Marcos et al. Emotional AI in Healthcare: a pilot architecture proposal to merge emotion recognition tools
Baskaran et al. Multi-dimensional task recognition for human-robot teaming: literature review
AU2022361223A1 (en) Mental health intervention using a virtual environment
Marilly et al. Gesture interactions with video: From algorithms to user evaluation
Yadav et al. Speak Up! Studying the interplay of individual and contextual factors to physiological-based models of public speaking anxiety
Ekiz et al. Long short-term memory network based unobtrusive workload monitoring with consumer grade smartwatches
Chepin et al. The improved method for robotic devices control with operator's emotions detection
Zhang et al. Multimodal Fast–Slow Neural Network for learning engagement evaluation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20910808

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/12/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20910808

Country of ref document: EP

Kind code of ref document: A1