WO2020132941A1 - Recognition method and related device - Google Patents

Recognition method and related device

Info

Publication number
WO2020132941A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
brain
data
machine learning
learning model
Prior art date
Application number
PCT/CN2018/123895
Other languages
English (en)
French (fr)
Inventor
李晓涛
李娟
王立平
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Priority to PCT/CN2018/123895 priority Critical patent/WO2020132941A1/zh
Publication of WO2020132941A1 publication Critical patent/WO2020132941A1/zh

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/377Electroencephalography [EEG] using evoked responses

Definitions

  • This application relates to the field of computers, and in particular to recognition methods and related devices.
  • for the immediate interpretation and long-term monitoring of brain cognition or brain disease states, the traditional approach is to use brain wave detection and nuclear magnetic resonance (MRI) scanning.
  • however, brain wave detection mainly captures signals from the cerebral cortex, and many of the detected signals cannot yet be accurately interpreted.
  • as for MRI scanning, it provides little effective information about the emotional cognition of the human brain.
  • the embodiments of the present application provide a recognition method and a related device to intelligently and scientifically recognize human emotions.
  • An identification method applied to the user side including:
  • the brain reaction data includes at least eye reaction data
  • the brain reaction parameter values include at least eye reaction parameter values
  • the eye reaction parameter value includes the parameter value corresponding to each eye reaction parameter
  • the brain reaction parameter value is input into a machine learning model, and the machine learning model outputs a recognition result of emotional cognition.
  • the machine learning model outputting the recognition result of emotional cognition includes: the machine learning model recognizes the emotional cognition type and the corresponding score according to the eye reaction parameter values and the eye reaction parameter thresholds; the machine learning model determines the state information corresponding to the emotional cognition type according to the state threshold corresponding to that type and the score; the machine learning model outputs the recognized emotional cognition type and the corresponding state information; the recognition result includes: the emotional cognition type and the corresponding state information.
  • the method further includes: using the user's eye reaction data to perform personalized correction of the eye reaction parameter thresholds.
  • the method further includes: receiving correction data input by the user; and using the correction data to correct at least one of emotional cognition type, state information, and state threshold.
  • the emotional cognition type includes at least one of an emotion type and a cognition type; wherein: the emotion type includes a mood subtype and a fatigue subtype; the cognition type includes an attention subtype and a stress subtype; the machine learning model is trained using labeled training samples; wherein the training samples include brain reaction parameters from healthy individuals or patient individuals; the labels include mood state information labels, fatigue state information labels, attention state information labels, and stress state information labels.
  • after training, the method further includes: using labeled test samples to test the recognition accuracy and recognition speed of the machine learning model; wherein the test samples include brain reaction parameters from healthy individuals or patient individuals; the labels include: mood state information labels, fatigue state information labels, attention state information labels, and stress state information labels; if the machine learning model does not satisfy the preset conditions, one or more of the following operations are executed and training is performed again: re-selecting the eye reaction parameters; adjusting the weight values of the machine learning model; adjusting the state thresholds; adjusting at least one of the type and content of the labels; wherein the preset conditions include: the recognition accuracy of the machine learning model is not lower than an accuracy threshold and the recognition speed is not lower than a speed threshold.
  • the method further includes: uploading the recognition results of emotional cognition and the corresponding brain reaction parameter values to the cloud or the background, where the uploaded brain reaction parameter values will be used as training samples or test samples during the training process and the recognition results of emotional cognition are used to label the corresponding training samples or test samples; or, uploading the recognition results of emotional cognition and the corresponding brain reaction data to the cloud or the background, where the uploaded brain reaction data are used to generate training samples or test samples during the training process and the recognition results of emotional cognition are used to label the corresponding training samples or test samples; after the cloud or the background optimizes the machine learning model, the optimized machine learning model is synchronized to the user side.
  • the eye reaction parameters include one or more of the following: the contrast and brightness of the eye; the speed, direction, and frequency of eye movement; the magnitude and speed of the pupil reaction; the interpupillary distance; the speed, amplitude, and frequency of blinking; and the muscle contraction of the eye area, including the eyes and eyebrows.
  • the eye reaction data includes an eye video or eye images; the brain reaction data further includes at least one of a forebrain cortex signal and a skin electrical signal; the brain reaction parameter values also include at least one of forebrain cortex parameter values and skin electrical parameter values; wherein the forebrain cortex parameter values include the parameter value corresponding to each forebrain cortex parameter, and the skin electrical parameter values include the parameter value corresponding to each skin electrical parameter.
  • An identification system includes an acquisition device and a central control system; the central control system includes at least an identification device; wherein:
  • the collection device is used for: collecting user's brain reaction data;
  • the identification device is used to: perform data processing on the brain reaction data to obtain brain reaction parameter values; wherein the brain reaction data includes at least eye reaction data; the brain reaction parameter values include at least eye reaction parameter values; and the eye reaction parameter values include the parameter value corresponding to each eye reaction parameter;
  • the brain reaction parameter value is input into a machine learning model, and the machine learning model outputs a recognition result of emotional cognition.
  • the central control system further includes a cloud or a background; the recognition device is also used to: upload the recognition results of emotional cognition and the corresponding brain reaction parameter values to the cloud or the background, where the uploaded brain reaction parameter values will be used as training samples or test samples during the training process and the recognition results of emotional cognition are used to label the corresponding training samples or test samples; or, upload the recognition results of emotional cognition and the corresponding brain reaction data to the cloud or the background, where the uploaded brain reaction data are used to generate training samples or test samples during the training process and the recognition results of emotional cognition are used to label the corresponding training samples or test samples;
  • the cloud or background is used to: train the machine learning model using the labeled training samples and test samples; the optimized machine learning model will be synchronized to the recognition device.
  • the collection device includes a camera device on a smart terminal, and the recognition device is specifically the smart terminal; or, the collection device includes a wearable device with an eye camera function, and the recognition device is the smart terminal.
  • the wearable device includes: a camera device that collects eye reaction data; a forebrain cortex signal sensor that collects forebrain cortex signals, and a skin electrical signal sensor that collects skin electrical signals.
  • the wearable smart device is smart glasses;
  • the camera device is a miniature electronic camera; wherein: the miniature electronic camera is disposed at the junction of the lens and the temple of the smart glasses; the skin electrical signal sensor is arranged where the inside of the temple contacts the ear; and the forebrain cortex signal sensor is arranged at the middle section of the temple.
  • the rear inner side of the temple of the smart glasses is a flexible bioelectrode
  • the two nose pads of the smart glasses are flexible bioelectrodes.
  • An intelligent terminal including:
  • the acquisition unit is used to acquire the brain reaction data of the user
  • the brain reaction data includes at least eye reaction data
  • the brain reaction parameter values include at least eye reaction parameter values
  • the eye reaction parameter value includes the parameter value corresponding to each eye reaction parameter
  • the brain reaction parameter value is input into a machine learning model, and the machine learning model outputs a recognition result of emotional cognition.
  • a wearable smart device including:
  • a forebrain cortex signal sensor that collects forebrain cortex signals
  • a skin electrical signal sensor that collects skin electrical signals
  • it also includes: a data output device.
  • it also includes: a health index monitor.
  • the wearable smart device is smart glasses;
  • the camera device is a miniature electronic camera; wherein: the miniature electronic camera is disposed at the junction of the lens and the temple of the smart glasses; the skin electrical signal sensor is arranged where the inside of the temple contacts the ear; and the forebrain cortex signal sensor is arranged at the middle section of the temple.
  • the rear inner side of the temple of the smart glasses is a flexible bioelectrode
  • the two nose pads of the smart glasses are flexible bioelectrodes.
  • it further includes: a mechanical sleep switch or a time switch; the mechanical sleep switch or the time switch is provided at the connection between the temple and the frame of the smart glasses.
  • it further includes: a touch screen; the touch screen is disposed on the outside of the temple.
  • it also includes: a rechargeable battery.
  • the data output device includes a Bluetooth chip, and the Bluetooth chip is built into either temple.
  • the data output device includes a WiFi chip.
  • a storage medium storing a plurality of instructions.
  • the instructions are suitable for loading by a processor to perform the steps in the above identification method.
  • a chip system includes a processor for supporting the identification device or the smart terminal to perform the above identification method.
  • after the brain reaction data (mainly eye reaction data) is collected, data processing is performed on it to obtain brain reaction parameter values including eye reaction parameter values; the brain reaction parameter values are input into the machine learning model, and the artificial intelligence algorithm analyzes the brain reaction parameter values to obtain the recognition result of emotional cognition, thereby realizing intelligent recognition of human emotional cognition.
  • the emotional and cognitive status of the brain can be judged by the state and level at which the visual system processes visual signal input.
  • therefore, it is scientifically feasible for the artificial intelligence algorithm to recognize the state of emotional cognition based on brain reaction parameter values including eye reaction parameter values.
  • FIG. 1a is a structural example of a recognition system provided by an embodiment of the present application.
  • FIG. 1b is an exemplary structural diagram of an intelligent terminal provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram of data upload provided by an embodiment of the present application.
  • FIG. 8 is an exemplary structural diagram of smart glasses provided by an embodiment of the present application.
  • the present invention provides recognition methods and related devices (such as recognition systems, smart terminals, storage media, wearable smart devices, chip systems, etc.) to intelligently and scientifically recognize human emotional cognition in various scenarios.
  • the above identification system may include: a collection device 101 and a central control system, where the central control system includes at least the identification device 102.
  • the core idea of the recognition method performed by the above recognition system is that after the brain reaction data (mainly eye reaction data) is collected, it is processed to obtain brain reaction parameter values including eye reaction parameter values; the brain reaction parameter values are input into the machine learning model, and the artificial intelligence algorithm analyzes them to obtain the recognition result of emotional cognition.
  • the collection device 101 may include a high-pixel camera of a smart terminal
  • the recognition device 102 may include the smart terminal.
  • FIG. 1b shows an exemplary structure of the foregoing smart terminal, including:
  • Obtaining unit 1021: used to obtain the user's brain reaction data;
  • Recognition unit 1022: used to perform data processing on the above brain reaction data to obtain brain reaction parameter values, input the brain reaction parameter values into a machine learning model, and have the machine learning model output the recognition results of emotional cognition.
  • the eye reaction data may be collected by a high-pixel camera of the smart terminal (that is, the acquiring unit 1021 may specifically include a high-pixel camera).
  • Smart terminals include but are not limited to smart phones, iPads, laptops, etc.
  • the brain reaction data may also include at least one of a forebrain cortical signal and an electrical skin signal.
  • the signal of the forebrain cortex can be collected by the signal sensor of the forebrain cortex
  • the electrical signal of the skin can be collected by the electrical signal sensor of the skin.
  • the forebrain cortex signal sensor and the skin electrical signal sensor can be installed on a wearable device (such as smart glasses), and the wearable device transmits the collected signals wirelessly to the smart terminal.
  • the acquiring unit 1021 may further include a wireless receiving device to acquire data transmitted by the wearable device.
  • the intelligent terminal may include a processor and a storage medium in hardware.
  • a variety of instructions are stored on the storage medium, and the instructions are suitable for loading by the processor.
  • in this way, the function of the recognition unit 1022 can be realized: performing data processing on the above brain reaction data to obtain brain reaction parameter values, inputting the brain reaction parameter values into the machine learning model, and having the machine learning model output the recognition result of emotional cognition.
  • the function of the recognition unit 1022 can be realized by application software (for example, APP) installed in the smart terminal.
  • the collection device 101 may include a micro camera on a wearable device
  • the recognition device 102 may include a smart terminal.
  • at least one of a forebrain cortical signal sensor and an electrical dermal signal sensor may be provided on the wearable device.
  • the above-mentioned eye reaction data can be collected by a miniature camera on the wearable device, and the forebrain cortical signal and the electrical signal of the skin can also be collected on the wearable device.
  • the wearable device transmits the collected data to the smart terminal through Bluetooth, WiFi and other wireless methods, and the smart terminal performs the subsequent steps.
  • the acquiring unit 1021 of the smart terminal includes a wireless receiving device to acquire data transmitted by the wearable device.
  • the function of the aforementioned identification unit 1022 can be implemented by the processor of the smart terminal loading instructions in the storage medium.
  • the function of the recognition unit 1022 can be realized by application software (for example, APP) installed in the smart terminal.
  • the above-mentioned collection device may further include various health index monitors, so that the scope of the brain reaction data can be expanded to cover various health index data, and more comprehensive data can be used to obtain more accurate recognition results.
  • since the collection device and the recognition device included in the recognition system are convenient and quick to operate, and easy to carry or wear, they can be used for long-term monitoring or immediate evaluation of the brain's emotional cognition.
  • the above-mentioned recognition system can be used to evaluate, monitor and even predict some diseases related to brain cognition, and thus can be used for long-term monitoring and care of chronic mental diseases. Of course, it can also be used for immediate monitoring of the deterioration or attack of mental illness.
  • the recognition system can even output intervention adjustment measures and suggestions, such as, but not limited to, output breath adjustment suggestions, music treatment measures/suggestions, light-sensing treatment measures/suggestions, or cognitive behavior treatment measures/suggestions.
  • the above-mentioned recognition system can also immediately assess and monitor the user's attention and fatigue, and output intervention adjustment measures and recommendations to remind the user to use the brain healthily and scientifically, so as to improve the efficiency of the user's work or study.
  • the above-mentioned recognition system can also be used to monitor the psychological reaction of a user watching commercial advertisements (that is, to detect the effect of product advertisements), to monitor whether the user is driving while fatigued, and to conduct psychological lie detection.
  • the user can also input the current occasion (scene) or the occasion to be entered (scene), and the recognition system can give targeted suggestions according to the occasion. For example, if the user enters an interview scenario and it is recognized that the user's current attention is not focused enough, the user may be reminded to concentrate.
  • the identification method and related device provided by the present application have a very broad application prospect.
  • FIG. 2 shows an exemplary flow of the above identification method, including:
  • the collection device collects the user's brain reaction data.
  • the above brain reaction data includes at least eye reaction data
  • the eye reaction data may further include eye video, or picture data extracted from the eye video (which may be referred to as an eye image).
  • the brain reaction data may further include at least one of a forebrain cortical signal and an electrical skin signal.
  • the anterior cerebral cortex signal may specifically be an EEG signal obtained by EEG (electroencephalography, brain wave) technology; and the skin electrical signal may specifically be a PPG signal obtained by PPG (photoplethysmography) technology.
  • the forebrain cortical signal carries the forebrain information
  • the ocular response data carries the midbrain information
  • the skin electrical signal carries the hindbrain information; if comprehensive brain information from the midbrain, forebrain, and hindbrain is used in the subsequent recognition steps, it is helpful for a comprehensive, accurate, and immediate interpretation of the emotional and cognitive state of the human brain.
  • S1 can be executed by the aforementioned acquisition unit 1021 or acquisition device 101.
  • the recognition device performs data processing on the brain reaction data to obtain brain reaction parameter values.
  • the brain response data includes at least eye response data
  • the brain response parameter values include at least eye response parameter values.
  • the ocular reaction parameter value includes the parameter value corresponding to each ocular reaction parameter.
  • the ocular reaction parameter includes, by way of example, but not limited to one or more of the following:
  • the contrast of the eye may specifically refer to the contrast between the white of the eye (the sclera part) and the eyeball (the iris part).
  • the brightness of the eye is affected by the blood state of the capillaries in the eye; for example, if the capillaries are congested, the brightness will be darker than when they are not congested.
  • the eye movement frequency may include the frequency of the eye movement up and down and left and right.
  • the pupil reaction here includes: pupil contraction or enlargement.
  • the eye muscles used when smiling and when frowning are significantly different; therefore, the contraction of the eye muscles can be used to analyze human emotion and cognition. More specifically, the contraction of the muscles around the eye can be expressed using point changes in computer vision.
  • the brain reaction parameter values further include: at least one of the forebrain cortex parameter values and the skin electrical parameter values.
  • the forebrain cortex parameter values include the parameter value corresponding to each forebrain cortex parameter.
  • the skin electrical parameter values include the parameter value corresponding to each skin electrical parameter, and the skin electrical parameters may further include at least one of heart rate, blood pressure, temperature, and respiratory frequency.
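  • As a rough illustration only (the field names below are hypothetical and not taken from this application), the brain reaction parameter values described above might be organized as follows before being fed to the model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EyeParameters:
    # Values derived from the eye video / eye images
    contrast: float                   # contrast between sclera and iris
    brightness: float                 # affected by capillary congestion
    movement_speed: float
    movement_frequency: float         # up-down / left-right movement frequency
    pupil_response_speed: float
    pupil_response_magnitude: float
    interpupillary_distance_mm: float
    blink_rate_per_min: float

@dataclass
class BrainReactionParameters:
    eye: EyeParameters                               # always present
    forebrain_eeg_features: Optional[dict] = None    # e.g. EEG band powers
    skin_electrical_features: Optional[dict] = None  # e.g. heart rate, blood pressure

def to_feature_vector(p: BrainReactionParameters) -> list:
    """Flatten the parameter values into one numeric vector for the model."""
    vec = [p.eye.contrast, p.eye.brightness, p.eye.movement_speed,
           p.eye.movement_frequency, p.eye.pupil_response_speed,
           p.eye.pupil_response_magnitude, p.eye.interpupillary_distance_mm,
           p.eye.blink_rate_per_min]
    for extra in (p.forebrain_eeg_features, p.skin_electrical_features):
        if extra:
            vec.extend(extra.values())
    return vec
```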
  • the recognition device uses a machine learning model (also referred to as an AI model) to analyze the above brain reaction parameter values to obtain a recognition result of emotional cognition.
  • steps S2-S3 may be performed by the aforementioned identification unit 1022.
  • the aforementioned machine learning model may include a deep learning model. Deep machine learning methods are divided into supervised learning and unsupervised learning, and the learning models established under different learning frameworks are very different. For example, a Convolutional Neural Network (CNN) is a machine learning model under deep supervised learning, while a Deep Belief Net (DBN) is a machine learning model under unsupervised learning.
  • the brain reaction parameter value may be input into a machine learning model, and the machine learning model may output the recognition result of emotional cognition.
  • parameters such as binocular rivalry (binocular competition) can also be input.
  • the binocular rivalry parameters can be collected by other auxiliary devices.
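  • As a toy sketch of the CNN family mentioned above (the architecture, input size, and output subtypes here are assumptions, not specified by this application), a small PyTorch model mapping an eye image to scores for four emotional-cognition subtypes could look like this:

```python
import torch
import torch.nn as nn

class EmotionCognitionCNN(nn.Module):
    """Toy CNN mapping an eye image to scores for four hypothetical
    subtypes (mood, fatigue, attention, stress)."""
    def __init__(self, num_outputs: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_outputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.head(h)

# A 64x64 RGB eye crop -> four subtype scores
scores = EmotionCognitionCNN()(torch.randn(1, 3, 64, 64))
print(scores.shape)  # torch.Size([1, 4])
```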
  • the recognition result of emotional cognition may further include the recognized emotional cognition type and the corresponding state information.
  • the emotional cognitive type may include at least one of an emotional type and a cognitive type.
  • the type of emotion includes at least: “mood” and “fatigue” subtypes.
  • exemplary subtypes of "mood" include: happy, sad, fearful, excited, depressed, anxious, or grieving subtypes (or sub-moods), which can be expressed by numbering, binary coding, and the like.
  • the status information may include a text description, or may include a score; or, the status information may also include a score and a text description, and the type of emotional cognition finally displayed to the user may also be in the form of text.
  • in the above example of "attention deficit", the state information is expressed explicitly.
  • state information can also be expressed implicitly or indirectly, for example, "eyes full of anxiety and sadness", which includes the recognized emotional cognition types "anxiety" and "sadness", but expresses the state information as a whole through the phrase "eyes full of anxiety and sadness".
  • after the brain reaction data (mainly eye reaction data) is collected, data processing is performed on it to obtain brain reaction parameter values including eye reaction parameter values; the brain reaction parameter values are input into the machine learning model, and the artificial intelligence algorithm analyzes them to obtain the recognition result of emotional cognition, thereby realizing intelligent recognition of human emotional cognition.
  • the emotional and cognitive status of the brain can be judged by the state and level at which the visual system processes visual signal input.
  • therefore, it is scientifically feasible for the artificial intelligence algorithm to recognize the state of emotional cognition based on brain reaction parameter values including eye reaction parameter values.
  • the following takes the eye response parameter value as an example to introduce the specific process of the machine learning model outputting the recognition result of emotional cognition, which may include the following steps:
  • Step a The machine learning model identifies the type of emotional cognition and the corresponding score based on the eye response parameter value and the eye response parameter threshold.
  • the score can be understood as a score or a grade value.
  • the machine learning model can recognize: 5 points for excitement; 4 points for fear.
  • Cognitive types include at least the subtypes of "attention” and “stress.”
  • Step b The machine learning model determines the state information corresponding to the emotion recognition type according to the state threshold and score corresponding to the emotion recognition type.
  • the status information may include a text description or may include a score; alternatively, the status information may also include a score and a text description.
  • the text description can be determined according to the state threshold and the score calculated in step S31.
  • For example, suppose the attention score obtained in step S31 is x, and suppose that when the score is between state threshold a and state threshold b, the corresponding text description is "poor"; then if a ≤ x ≤ b, it can be determined that the text description of the state information corresponding to the "attention" subtype is "poor".
  • the above state threshold may include a limit threshold to distinguish between a normal state and a sick state (normal state and sick state belong to state information).
  • if the calculated score is not less than the limit threshold, it can be determined to be a normal state; if it is below the limit threshold, it can be determined to be a morbid state.
  • the state threshold may further include a pathological degree threshold to further determine the pathological degree (pathological degree also belongs to state information).
  • the state threshold may also include normal-state degree thresholds to divide the normal state into multiple degrees. For example, assume that the initial state threshold corresponding to "happy mood" is 5-7 points, and that a score of 3-4 points is determined as low mood. Then, if for user A the recognition device recognizes the mood emotion type (for example, "happy") with a corresponding score of 4, the output state information is "depressed mood".
  • the status information corresponding to other subtypes is determined in a similar manner, and will not be repeated here.
  • Step c The machine learning model outputs the recognized emotion recognition type and the corresponding state information.
  • the output of the emotional cognition type and the corresponding state information may be visual output or voice broadcast output.
  • the machine learning model can also output descriptions that reflect the state of the eye; for example, it can output descriptions such as "both eyes are dark and dull" or "brows tightly furrowed".
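  • A minimal sketch of steps a-c, assuming hypothetical state threshold bands for the "attention" subtype and a stand-in scoring function (neither is defined by this application):

```python
# Hypothetical state thresholds for the "attention" subtype: scores in
# [low, high) map to a text description; the lowest band marks a morbid state.
ATTENTION_STATE_BANDS = [
    (0, 3, "attention deficit (morbid state)"),
    (3, 5, "poor"),
    (5, 8, "normal"),
    (8, 11, "highly focused"),
]

def describe_state(score: float, bands=ATTENTION_STATE_BANDS) -> str:
    """Step b: map a recognized score to its state information using the
    state thresholds of that emotional-cognition type."""
    for low, high, text in bands:
        if low <= score < high:
            return text
    return "unknown"

def recognize(eye_params: dict, attention_score_fn) -> dict:
    """Steps a-c in one place: score the subtype, map the score to state
    information, and return the recognition result."""
    score = attention_score_fn(eye_params)   # step a (model output)
    return {
        "type": "attention",
        "score": score,
        "state": describe_state(score),      # step b
    }                                         # step c: result is output

# Example with a stand-in scoring function based only on blink rate.
result = recognize({"blink_rate_per_min": 25},
                   lambda p: 10 - p["blink_rate_per_min"] / 5)
print(result)  # {'type': 'attention', 'score': 5.0, 'state': 'normal'}
```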
  • the above identification method may further include the following steps:
  • the identification device receives the correction data input by the user.
  • the above-mentioned recognition device can provide a human-machine interaction interface for correction, so that the user can manually input correction data.
  • the above correction data may be used to correct at least one of emotion recognition type, state information, and state threshold.
  • the machine learning model may recognize the emotion type specifically as "sadness"; however, if the user is actually crying for joy, the user can change the identified emotion type to "happy", "joyful", etc.
  • the user can manually input text into the human-computer interaction interface, and the system converts the text into the corresponding type.
  • multiple emotion recognition type options may be provided in the human-computer interaction interface, and the user may select one or several items.
  • the human-computer interaction interface can provide multiple pieces of state information for the user to choose from, and the user can input by selecting one of them.
  • the user may have a need to correct it.
  • the user can manually input specific scores into the human-computer interaction interface, or the human-machine interaction interface can provide multiple scores for the user to choose, and the user can input by selecting one of them.
  • the recognition device uses the correction data to correct at least one of the emotion recognition type, state information, and state threshold.
  • S4-S5 may be performed by the aforementioned identification unit 1022.
  • the status information may further include at least one of a text description and a score.
  • if the text description in the state information is corrected, what is actually corrected is the correspondence between the text description and the state threshold; in other words, what is ultimately corrected is the state threshold.
  • for example, the recognition device recognizes the emotion type "happy" for user A with a corresponding score of 4, and the output state information is "depressed mood".
  • if the user corrects this, the state threshold corresponding to "happy mood" may be modified to 4-7 points.
  • the limit threshold and the pathological degree threshold in the state thresholds are generally not corrected using the correction data.
  • the emotion recognition type, state information, and state threshold can be corrected according to the correction data input by the user, so that the recognition result is more relevant and accurate to the individual.
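  • The threshold correction described above could, for example, be sketched as follows; the widening rule and data layout are assumptions for illustration:

```python
def apply_correction(state_thresholds: dict, emotion_type: str,
                     score: float, corrected_state: str) -> dict:
    """Hypothetical personalization rule: if the user says the true state
    for this score is `corrected_state`, widen that state's threshold band
    just enough to include the score (limit / pathological-degree
    thresholds are left untouched, as noted in the text)."""
    low, high = state_thresholds[emotion_type][corrected_state]
    state_thresholds[emotion_type][corrected_state] = (min(low, score),
                                                       max(high, score))
    return state_thresholds

# Example from the text: "happy" was scored 4, the default band is 5-7,
# and the user corrects the output "depressed mood" to "happy".
thresholds = {"mood": {"happy": (5, 7)}}
print(apply_correction(thresholds, "mood", 4, "happy"))
# {'mood': {'happy': (4, 7)}}
```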
  • the type of emotional cognition and the corresponding score can be identified according to the value of the eye response parameter and the threshold value of the eye response parameter.
  • different individuals have different eye shapes and sizes, and their highest and lowest blinking frequencies also differ.
  • the foregoing identification method may further include the following steps:
  • S6 The recognition device uses the user's eye reaction data to personally correct the eye reaction parameter threshold.
  • S6 may be performed by the aforementioned identification unit 1022.
  • the parameter thresholds in the machine learning model can be corrected by collecting eye reaction data over a period of time (e.g., several days or a week) to extract the user's eye habits (such as the user's own highest and lowest blinking frequencies), pupil size, and other eye reaction parameter values such as interpupillary distance, eye height, eye width, and iris color.
  • the parameter threshold can be corrected according to the user's eye reaction data, so that the recognition result is more relevant and accurate to the individual.
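  • As one possible sketch of such personalized correction (the margin rule and sample values are illustrative assumptions), the user's own blink-frequency thresholds could be derived from data collected over several days:

```python
def personalize_blink_thresholds(daily_blink_rates: list[float],
                                 margin: float = 0.1) -> tuple[float, float]:
    """Hypothetical personalization: derive the user's own lowest and
    highest blink-frequency thresholds from eye reaction data collected
    over several days, with a small safety margin."""
    low, high = min(daily_blink_rates), max(daily_blink_rates)
    spread = (high - low) * margin
    return low - spread, high + spread

# Blink rates (blinks/min) measured twice a day for a week
samples = [14, 16, 15, 18, 13, 17, 19, 15, 16, 14, 18, 20, 15, 17]
print(personalize_blink_thresholds(samples))  # (12.3, 20.7)
```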
  • the aforementioned central control system may also include a cloud or a background.
  • the recognition device may upload the recognition result of emotional cognition and the corresponding brain reaction parameter value to the cloud or the background.
  • the recognition device may upload the recognition results of emotional recognition and corresponding brain reaction data to the cloud or the background.
  • the recognition device may periodically upload brain reaction parameter values/brain reaction data. More specifically, the recognition device may directly and automatically upload brain reaction parameter values/brain reaction data periodically, or after user authorization, periodically automatically upload brain reaction parameter values/brain reaction data.
  • the cloud or background will integrate massive data while protecting user privacy, use the uploaded brain reaction parameter values/brain reaction data to generate training samples or test samples to train the machine learning model, and optimize parameters (such as the aforementioned parameter thresholds and state thresholds); the uploaded recognition results of emotional cognition can be used to label the corresponding training samples or test samples; finally, the optimized machine learning model will be synchronized to the recognition device.
  • the recognition device may then perform personalized correction of the eye reaction parameter thresholds, and use the correction data to correct the emotional cognition type, state information, state thresholds, and so on.
  • the above machine learning model can be obtained by training based on training samples; after the training is completed, the trained machine learning model is also tested to determine whether it meets the expected performance requirements (including requirements for recognition accuracy and recognition speed), and if not, it is adjusted accordingly until the expected performance requirements are met.
  • the first training can obtain the machine learning model, and the subsequent training can realize the optimization of the machine learning model.
  • the training process of the machine learning model performed by the cloud or background server may include at least the following steps:
  • any sample may include brain reaction parameters from healthy individuals or patient individuals.
  • the above-mentioned patient types include but are not limited to autism, depression, Alzheimer's disease, Huntington's disease, schizophrenia, and trauma sequelae.
  • the brain response data will also be processed to obtain training samples.
  • training samples can be manually marked as a priori knowledge of machine learning models.
  • after the machine learning model is officially put into use, labeling can be performed automatically according to the recognition results of emotional cognition.
  • the so-called labeling refers to adding one or more labels to a training sample; for example, mood state information labels, fatigue state information labels, attention state information labels, and stress state information labels can be added.
  • the contents of the above types of labels include: the emotion subtype or cognition subtype, and the corresponding state information.
  • a label indicating whether the sample is from a healthy individual or a patient individual can be added (more specifically, "0" can be used to indicate health, and "1" can be used to indicate patient).
  • the patient's sample can be further labeled with a disease condition, and even a doctor's diagnosis report can be added as a label.
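  • For illustration, a labeled sample might be organized as below; all field names and values are hypothetical and only mirror the label types listed above:

```python
# Hypothetical structure of one labeled sample used for training or testing.
sample = {
    "brain_reaction_parameters": {
        "blink_rate_per_min": 22,
        "pupil_response_speed": 0.8,
        "eeg_band_power": {"alpha": 0.31, "beta": 0.52},
        "heart_rate_bpm": 74,
    },
    "labels": {
        "mood_state": "low",          # mood state information label
        "fatigue_state": "moderate",  # fatigue state information label
        "attention_state": "poor",    # attention state information label
        "stress_state": "high",       # stress state information label
        "is_patient": 1,              # 0 = healthy individual, 1 = patient
        "diagnosis": "depression",    # optional disease / doctor's report label
    },
}
```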
  • S503 Use the labeled samples to form a training sample set and a test sample set.
  • any labeled sample can be put into a training sample set or a test sample set.
  • the samples in the training sample set are used to train the machine learning model, which can be called a training sample
  • the samples in the test sample set are used to test the machine learning model, which can be called a test sample.
  • S504 Use the training sample set to train the machine learning model.
  • the training samples in the training sample set can be used as input for training.
  • the above machine learning model may be a neural network algorithm model, such as a CNN (Convolutional Neural Network, convolutional neural network) model.
  • CNN Convolutional Neural Network, convolutional neural network
  • S505 Use the test sample set to test the diagnostic performance of the machine learning model.
  • test samples in the test sample set are input into the machine learning model, and the diagnostic performance is calculated according to the output of the machine learning model.
  • the diagnostic performance of the model may include recognition accuracy and recognition speed.
  • the CNN can be tested in combination with a GAN (generative adversarial network), which will not be repeated here.
  • the preset conditions may include: the recognition accuracy of the machine learning model is not lower than the recognition accuracy threshold (for example, 95% or 98%), and the recognition speed is not lower than the speed threshold (for example, 10 seconds), so as to obtain a machine learning model that satisfies both the recognition accuracy and recognition speed requirements.
  • the recognition accuracy threshold and the speed threshold can be set according to different needs.
  • the recognition accuracy threshold can be set to 95%
  • the speed threshold can be set to process 1000 samples in 10 seconds.
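  • A minimal sketch of checking the preset conditions, assuming accuracy is measured on the labeled test set and speed is expressed as seconds per 1000 samples (consistent with the thresholds mentioned above):

```python
import time

def meets_preset_conditions(model, test_samples, test_labels,
                            accuracy_threshold=0.95,
                            max_seconds_per_1000=10.0) -> bool:
    """Check the two preset conditions from the text: recognition accuracy
    not below the accuracy threshold, and recognition speed not below the
    speed threshold (here: seconds needed to process 1000 samples)."""
    start = time.perf_counter()
    predictions = [model(x) for x in test_samples]
    elapsed = time.perf_counter() - start

    accuracy = sum(p == y for p, y in zip(predictions, test_labels)) / len(test_labels)
    seconds_per_1000 = elapsed * 1000 / len(test_samples)
    return accuracy >= accuracy_threshold and seconds_per_1000 <= max_seconds_per_1000

# If the conditions are not met, the text lists the possible adjustments:
# re-select eye reaction parameters, adjust model weights, adjust state
# thresholds, or adjust label types/content, and then retrain.
```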
  • after the machine learning model is put into use, training will continue in order to keep optimizing the machine learning model.
  • Referring to FIG. 6, the following introduces an embodiment in which eye reaction data is collected by a high-pixel camera of a smart terminal, and APP software installed on the smart terminal performs data processing and outputs the recognition results of emotional cognition; this embodiment specifically includes the following steps:
  • S601 The intelligent terminal (high-pixel camera) collects eye reaction data.
  • the eye reaction data is specifically eye video, and may also be picture data derived therefrom.
  • the user can hold the smart terminal, aim the camera at both eyes at a distance of about 30-40 cm, and look at the camera to shoot a video.
  • the eye video should be high-definition (more than 4 million pixels) video, and even objects reflected from the pupil of the eye can be seen.
  • the user's eye habits (such as the user's own highest and lowest blinking frequencies), pupil size, and so on can be extracted by collecting eye reaction data over a period of time (such as a few days or a week), in order to correct the parameter thresholds in the machine learning model (personalized correction).
  • This kind of personalized correction is generally carried out in the initial period of use of the recognition system (or in the initial period after the machine learning model is optimized).
  • the camera that comes with the smart terminal can be used to capture eye video periodically, for example, twice a day, collecting about 1 minute of eye video each time, so as to capture the eye video of the best state and the worst state of each day.
  • for example, the eye video in the best state of the day can be taken about one hour after getting up in the morning, and the eye video in the worst state of the day can be taken near the end of the workday.
  • the smart terminal can switch between two working modes: fixed time recording mode and non-fixed time recording mode.
  • in the fixed time recording mode, the eye video can be captured twice a day as mentioned above; in the non-fixed time recording mode, the eye video can be captured anytime and anywhere according to the user's operation.
  • S602 The voice collection device of the intelligent terminal collects voice data.
  • the voice collection device may specifically be a microphone.
  • the content of the voice data may include a user's specific state description of at least one of "mood”, “fatigue”, “attention”, and "stress”.
  • for example, the user may say "I'm so happy", "I feel so stressed", "I'm so tired", "my brain has turned to mush", and so on.
  • the content of the voice data may also be a self-score of the emotion type or the cognitive type.
  • the user can input "stress 7 points" by voice.
  • S603 The APP of the smart terminal recognizes the voice data to obtain a voice recognition result.
  • the speech recognition result can be used to generate at least one of mood state information tags, fatigue state information tags, attention state information tags, and stress state information tags used in the training process. Coupled with the horizontal data comparison between healthy people and patients such as depression patients, the intelligent classification function of the machine learning model can be trained.
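  • As a rough sketch (the phrases, keywords, and label names are illustrative assumptions), a speech recognition result could be converted into such labels as follows:

```python
import re

def label_from_self_report(text: str) -> dict:
    """Hypothetical conversion of a speech-recognition result into the
    state-information labels used during training."""
    labels = {}
    # Self-scores such as "stress 7 points"
    m = re.search(r"(stress|mood|fatigue|attention)\s*(\d+)\s*points?", text, re.I)
    if m:
        labels[f"{m.group(1).lower()}_score"] = int(m.group(2))
    # Simple keyword-based state descriptions
    if "tired" in text.lower():
        labels["fatigue_state"] = "high"
    if "happy" in text.lower():
        labels["mood_state"] = "positive"
    return labels

print(label_from_self_report("I'm so tired, stress 7 points"))
# {'stress_score': 7, 'fatigue_state': 'high'}
```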
  • S602 and S603 can be executed in the early stage of the use of the recognition system (or in the initial period after the optimization of the machine learning model), and need not be performed in every recognition process.
  • after later optimization of the artificial-intelligence-based algorithm, the user's emotional and cognitive state can be calculated at any time, requiring only about 30 seconds of real-time eye video.
  • the APP of the intelligent terminal performs data processing on the eye reaction data to obtain the eye reaction parameter value.
  • data processing can be integrated with the face recognition function of the intelligent terminal to recognize the eye reaction parameter value.
  • the angle sensor and distance sensor of the smart terminal can be used to determine the angle and distance between the camera and the eye, and the actual size of the eye can then be calculated from the determined angle and distance, so as to restore the scale of images collected at different distances and angles, or to perform size conversion of the eye reaction parameter values.
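  • One way to sketch this size restoration, assuming a simple pinhole-camera approximation and a hypothetical focal-length calibration constant (not specified by this application):

```python
import math

def pixel_to_real_size(pixel_size: float, distance_mm: float,
                       angle_deg: float, focal_length_px: float) -> float:
    """Convert a measured size in pixels to an approximate real size in mm
    using the camera-to-eye distance and viewing angle reported by the
    terminal's sensors (pinhole-camera approximation)."""
    # Foreshortening: a feature viewed at an angle appears smaller
    apparent = pixel_size / max(math.cos(math.radians(angle_deg)), 1e-6)
    return apparent * distance_mm / focal_length_px

# Interpupillary distance measured as 300 px at 350 mm, 10 degrees off-axis,
# with an assumed focal length of 1600 px:
print(round(pixel_to_real_size(300, 350, 10, 1600), 1))  # ~66.6 mm
```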
  • the APP analyzes the above-mentioned eye reaction parameter values using a machine learning model, and obtains and displays the recognition results of emotional cognition.
  • the output method can be visual output or voice broadcast output.
  • S606: The APP prompts whether to perform manual correction; if the user selects "Yes", proceed to step S607, otherwise proceed to step S609.
  • the above-mentioned recognition device may have a human-machine interactive interface to prompt whether to manually correct.
  • S607: Receive the correction data input by the user.
  • the correction data may include at least one of the emotional cognition type and the state information, and the state information may further include at least one of a text description and a score.
  • S608: Use the correction data to correct at least one of the emotional cognition type, the state information, and the state threshold, and then proceed to S609.
  • if the correction data is used to correct at least one of the emotional cognition type, the state information, and the state threshold, the recognition result obtained in step S605 will also be corrected accordingly.
  • S608 is similar to the aforementioned S5 and will not be repeated here.
  • S609 Use the user's eye reaction data to personally correct the eye reaction parameter threshold.
  • the parameter thresholds in the machine learning model can be corrected by collecting eye reaction data over a period of time (e.g., several days or a week) to extract the user's eye habits (such as the user's own highest and lowest blinking frequencies), pupil size, and so on.
  • S603-S609 can be performed by the aforementioned identification unit 1022.
  • the recognition data may include at least one of the recognition results of emotional cognition and the speech recognition results.
  • the recognition results can be desensitized to filter out sensitive information.
  • sensitive information includes but is not limited to: name, age, place of residence, ID number, contact information, email address, etc.
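  • A minimal sketch of such desensitization, assuming the uploaded record is a flat key-value structure with the hypothetical field names below:

```python
SENSITIVE_FIELDS = {"name", "age", "residence", "id_number",
                    "contact", "email"}  # field names mirroring the list above

def desensitize(record: dict) -> dict:
    """Drop sensitive personal information before uploading recognition
    data to the cloud or background."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

uploaded = desensitize({
    "name": "A. User", "email": "a@example.com",
    "emotion_type": "anxiety", "score": 4, "state": "poor",
})
print(uploaded)  # {'emotion_type': 'anxiety', 'score': 4, 'state': 'poor'}
```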
  • the uploaded data can be used to train machine learning models, and the identification data can be used to generate tags.
  • For the specific training process and the related introduction of labels, please refer to the previous description, which will not be repeated here.
  • S610 may be performed by the aforementioned identification unit 1022.
  • alternatively, the user side may only be used to collect data, display recognition results, and provide a human-computer interaction interface, while data processing, recognition, correction of the emotional cognition type, state information, and state thresholds using the correction data, and personalized correction can be implemented in the cloud or in the background.
  • the above-mentioned APP can also calculate the physical and mental state from personal data on the basis of health big data algorithms, and propose individual-specific recommendations and beneficial intervention strategies, such as breathing adjustment recommendations, rest recommendations, and playing music.
  • the above embodiment is based on an intelligent terminal, which combines existing high-resolution camera technology, mature computer vision technology, and advanced artificial intelligence algorithms to realize recognition of human emotion and cognition. Due to the wide use of smart terminals, it is universally applicable and can be used to instantly assess the user's emotion, attention, fatigue, etc.; it is suitable for the general public (such as office workers) to regulate stress, use the brain scientifically, and maintain physical and mental health and balance.
  • the recognition method based on the smart wearable device will be introduced.
  • the micro-camera of a smart wearable device (such as smart glasses) collects eye reaction data and transmits it to the smart terminal through wireless transmission, and the APP software installed on the smart terminal performs data processing and outputs the recognition results of emotional cognition.
  • both the smart wearable device and the smart terminal belong to the user side.
  • the miniature electronic camera 801 can continuously capture eye video at close range.
  • smart glasses can switch between two working modes: a fixed time recording mode and a continuous recording mode.
  • when working in the fixed time recording mode, the miniature electronic camera 801 can capture eye video at fixed times twice a day (for details, please refer to the description of S601 above); when working in the continuous recording mode, the miniature electronic camera 801 will continuously capture eye video.
  • the micro electronic camera 801 may have a lens for bidirectional photography.
  • skin electrical signal sensors (for example, electronic components sensitive to skin bioelectricity) may also be provided at the contact area between the inside of the temple and the ear.
  • the skin electrical signal sensor may specifically be a PPG sensor, which uses PPG technology to collect PPG data related to the autonomic nervous system, including heart rate, blood pressure, and respiratory rate. PPG technology mostly uses green or red light as the measuring light source.
  • the PPG sensor further includes an LED lamp 802, and a photoelectric sensor.
  • the above LED lamp may specifically include a red LED and an infrared LED lamp.
  • the red LED and the infrared LED lamp may be replaced with a green LED lamp.
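  • As a very rough sketch of how a heart rate might be derived from the PPG trace (real PPG processing involves filtering and artifact rejection; the crossing-count rule below is a simplification):

```python
import math

def estimate_heart_rate(ppg: list[float], sample_rate_hz: float) -> float:
    """Very rough heart-rate estimate from a PPG trace: count upward
    crossings of the signal mean (a stand-in for the pulse peaks picked
    up by the LED/photodiode pair)."""
    mean = sum(ppg) / len(ppg)
    beats = sum(1 for a, b in zip(ppg, ppg[1:]) if a < mean <= b)
    duration_s = len(ppg) / sample_rate_hz
    return beats * 60.0 / duration_s

# Synthetic 10 s trace at 50 Hz with a 1.2 Hz pulse (72 bpm)
trace = [math.sin(2 * math.pi * 1.2 * t / 50 + 0.5) for t in range(500)]
print(round(estimate_heart_rate(trace, 50)))  # 72
```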
  • the middle section of the temple may be provided with a forebrain cortex signal sensor (for example, an element sensitive to EEG signals).
  • the forebrain cortex signal sensor may specifically be an EEG sensor, which uses EEG technology to collect relevant brain wave signals.
  • the accuracy of the EEG signal mainly depends on the number of leads.
  • both the temples and the nose pads of the smart glasses are designed with flexible bioelectrodes.
  • the rear inner side of the temples is a flexible bioelectrode 803, and the two nose pads 804 are flexible bioelectrodes.
  • This design not only guarantees the comfort of the glasses, but also enables a 3-lead design for the electrical signals.
  • the multi-lead design can effectively reduce noise interference and improve the accuracy of the collected signals and of the subsequent algorithms.
  • EOG (electro-oculogram)
  • ERG (electroretinogram)
  • EMG (electromyogram)
  • a mechanical sleep switch or a timing switch 805 may also be provided at the connection between the temple and the frame of the smart glasses.
  • a mechanical sleep switch can be set to automatically enter the sleep state after the glasses are folded and closed.
  • the user can manually set the timer time and enter the sleep state when the timer time is reached.
  • a mechanical sleep switch or timer switch can be set on one side, or a mechanical sleep switch or timer switch can be set on both sides.
  • a touch screen 806 can be provided on the outside of the temple, and the user can operate different functions through different gestures such as clicking, double-clicking, sliding forward, and sliding backward.
  • the aforementioned input correction data can be realized by different gestures of the user.
  • a switch sensor can also be provided to detect the opening and closing state of the temple.
  • the above-mentioned smart glasses contain multiple sensors, which can make up for the defects of different sensors and achieve multiple guarantees of accuracy.
  • the opening and closing of the temples can be detected; obviously, when the temples are detected to be in the closed state, the user is not wearing the smart glasses;
  • the PPG sensors on both sides can receive red light and infrared light with different absorption rates to detect whether they are in contact with the skin; because the PPG sensor is located where the inside of the temple contacts the ear, no PPG signal is generated when the glasses are merely placed on the user's lap, which prevents the system from mistakenly assuming that the glasses are still being worn and recording false data.
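  • The redundancy between the temple switch and the PPG contact check could be sketched as follows (a simplified assumption, not the application's actual logic):

```python
def glasses_are_worn(temples_open: bool, ppg_contact_detected: bool) -> bool:
    """Combine the temple open/close switch with the PPG contact check so
    that glasses set down with the temples unfolded are not mistaken for
    being worn."""
    return temples_open and ppg_contact_detected

print(glasses_are_worn(True, False))  # open but not on the ear -> False
print(glasses_are_worn(True, True))   # worn -> True
```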
  • the smart glasses may also include physical output devices.
  • the physical output device may include a power supply system and a data output device.
  • the power supply system includes but is not limited to a rechargeable battery 807.
  • the rechargeable battery 807 is disposed at the handle end of the glasses, and can last for at least 2 hours after being fully charged.
  • the data output device includes but is not limited to a Bluetooth or WiFi device.
  • a Bluetooth chip 808 may be built into the temple.
  • the smart glasses may also include a voice collection device (for example, a micro microphone).
  • the recognition method performed based on the smart glasses includes the following steps:
  • S901 Smart glasses collect brain reaction data.
  • the data collected by the smart glasses includes eye micro video, PPG and EEG data, which can be transmitted to a smart terminal (such as a mobile phone) via Bluetooth.
  • S902 Smart glasses collect voice data.
  • S902 is similar to the aforementioned S602, and will not be repeated here.
  • Voice data can also be transmitted to smart terminals via Bluetooth.
  • S903 The APP of the smart terminal recognizes the voice data to obtain a voice recognition result.
  • S903 is similar to the foregoing S603 and will not be repeated here.
  • S904 The APP of the intelligent terminal performs data processing on the brain reaction data to obtain brain reaction parameter values.
  • the angle and distance between the smart glasses and the eyes can be calculated through the left and right cameras on the smart glasses, and the actual size of the eyes can then be calculated from this angle and distance, so as to restore the scale of images collected under different wearing conditions, or to perform size conversion of the eye reaction parameter values, etc.
  • S905 The APP of the intelligent terminal analyzes the above brain reaction parameter values using a machine learning model to obtain the recognition result of emotional cognition.
  • the recognition result of the emotion recognition can be displayed by the smart terminal.
  • the smart terminal can also transmit the recognition result to the smart glasses, which display it to the user in the form of images or voice.
  • the recognition results of emotional cognition can be uploaded to a preset mobile phone, cloud or background.
  • S906-S910 are similar to the aforementioned S606-S610 and will not be repeated here.
  • the smart glasses in this embodiment are mainly based on fine-grained scanning of eye reactions in computer vision, while also using EEG technology for detecting brain electrical signals and PPG technology for skin electrical signals for data collection.
  • the steps performed by the smart terminal APP can also be performed by smart glasses or the cloud and background.
  • Smart glasses can be worn for a long time, which can be used for long-term monitoring and care of chronic mental diseases. Of course, it can also be used for immediate monitoring of the deterioration or attack of mental illness. It can be worn by patients with common mental diseases, including depression, autism, trauma sequelae, and schizophrenia, to provide timely prediction, monitoring, and intervention of disease dynamics in daily life.
  • the smart terminal may also output intervention adjustment measures and suggestions through smart glasses, such as, but not limited to, output breath adjustment suggestions, music treatment measures/suggestions, light-sensing treatment measures/suggestions or cognitive behavior treatment measures/suggestions, and recommendations for taking medicine.
  • the recognition method based on smart glasses can also be used to instantaneously capture and interpret the eye stress response for observing an object or scene.
  • the eyes and the external environment can be simultaneously framed, which is conducive to instant capture and interpretation of the eye stress response to the observation of an object or scene.
  • the external environment and the eye video can be captured simultaneously through the two-way photography lens, and the recognition results can be interpreted together with the external environment to understand the wearer's emotional or cognitive changes when observing an object or scene.
  • the above recognition method can be used to detect people's stress responses to specific scenes and objects.
  • Based on the smart glasses, whether the wearer is driving while fatigued may be continuously monitored, and a prompt may be given if the driver is detected to be approaching, or already in, a fatigued state.
  • In summary, the recognition system and recognition method provided by this application combine existing high-resolution camera technology, mature computer vision technology, and advanced artificial intelligence algorithms, focusing on accurately scanning the eye reaction, to achieve precise, intelligent, immediate, and scientific detection and evaluation of the emotional and cognitive state of the human brain in a way that is timely, efficient, and easy to operate.
  • For the immediate interpretation and long-term monitoring of brain cognition or brain disease states, the traditional approach uses EEG (brain wave) detection and fMRI (functional magnetic resonance imaging) scanning.
  • the steps of the method or model described in conjunction with the embodiments disclosed herein may be implemented directly by hardware, a software module executed by a processor, or a combination of both.
  • The software module can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.


Abstract

一种识别方法,包括:采集使用者的脑部反应数据(S1);对脑部反应数据进行数据处理,得到脑部反应参数值(S2);其中,所述脑部反应数据至少包括眼部反应数据;脑部反应参数值至少包括眼部反应参数值;所述眼部反应参数值包括各眼部反应参数对应的参数值;将所述脑部反应参数值输入机器学习模型,由所述机器学习模型输出情感认知的识别结果(S3)。在采集到脑部反应数据,特别是眼部反应数据后,会对其进行数据处理,得到包含眼部反应参数值的脑部反应参数值,将脑部反应参数值输入机器学习模型,由人工智能算法基于脑部反应参数值进行分析,得到情感认知的识别结果,从而实现了对人类的情感认知的智能化识别。

Description

识别方法及相关装置 技术领域
本申请涉及计算机领域,特别涉及识别方法及相关装置。
背景技术
我们一直在努力探求智能化、科学性检测/评估人类大脑的情感认知状况的方法。
针对目前脑认知或脑疾病状态的即时解读和长期监测,传统方式上是采用脑电波检测和核磁扫描。然而,脑电波检测主要检测到的是大脑皮层的信号,检测到的很多信号还无法准确解读。至于核磁扫描,其对于人类大脑的情感认知方面的有效信息并不多。
发明内容
有鉴于此,本申请实施例提供识别方法及相关装置,以对人类的情感认知进行智能化、科学性地识别。
为实现上述目的,本申请实施例提供如下技术方案:
一种识别方法,应用于用户侧,包括:
采集使用者的脑部反应数据;
对所述脑部反应数据进行数据处理,得到脑部反应参数值;其中,所述脑部反应数据至少包括眼部反应数据;所述脑部反应参数值至少包括眼部反应参数值;所述眼部反应参数值包括各眼部反应参数对应的参数值;
将所述脑部反应参数值输入机器学习模型,由所述机器学习模型输出情感认知的识别结果。
可选的,所述由所述机器学习模型输出情感认知的识别结果包括:所述机器学习模型根据所述眼部反应参数值和眼部反应参数阈值,识别出情感认知类型以及对应的分值;所述机器学习模型根据所述情感认知类型对应的状态阈值以及所述分值,确定所述情感认知类型对应的状态信息;所述机器学习模型输出识别出的情感认知类型及所对应的状态信息;所述识别结果包括:所述情感认知类型及对应的状态信息。
可选的,所述方法还包括:使用所述使用者的眼部反应数据,对所述眼部反应参数阈值进行个性化修正。
可选的,所述方法还包括:接收所述使用者输入的校正数据;使用所述校正数据修正情感认知类型、状态信息以及状态阈值中的至少一种。
可选的,所述情感认知类型包括情感类型和认知类型中的至少一种;其中:所述情感类型包括:心情子类型和疲劳度子类型;所述认知类型包括注意力子类型和压力子类型;所述机器学习模型是使用带标签的训练样本进行训练的;其中,所述训练样本包括来自健康个体或病患个体的脑部反应参数;所述标签包括心情状态信息标签、疲劳度状态信息标签、注意力状态信息标签和压力状态信息标签。
可选的,在训练结束后,所述方法还包括:使用带标签的测试样本测试所述机器学习模型的识别精度和识别速度;所述测试样本;其中,所述测试样本包括来自健康个体或病患个体的脑部反应参数;所述标签包括:心情状态信息标签、疲劳度状态信息标签、注意 力状态信息标签和压力状态信息标签;若所述机器学习模型不满足预设条件,执行下述操作中的一种或多种,并重新进行训练:重新选定眼部反应参数;调整所述机器学习模型的权重值;调整状态阈值;调整标签的类型和内容中的至少一种;其中,所述预设条件包括:所述机器学习模型的识别精度不低于精度阈值且识别速度不低于速度阈值。
可选的,所述方法还包括:上传情感认知的识别结果以及相应的脑部反应参数值至云端或后台;上传的脑部反应参数值将在训练过程中作为训练样本或测试样本,所述情感认知的识别结果用于标记相应的训练样本或测试样本;或者,上传情感认知的识别结果以及相应的脑部反应数据至云端或后台;上传的脑部反应数据用于生成训练过程中的训练样本或测试样本,所述情感认知的识别结果用于标记相应的训练样本或测试样本;所述云端或后台优化所述机器学习模型后,优化后的机器学习模型将同步至所述用户侧。
可选的,所述眼部反应参数包括以下一种或多种:眼睛的对比度和亮度;眼球运动的速度、方向和频率;瞳孔反应的幅度和速度;瞳孔间距;眨眼的速度、幅度和频率;眼部的肌肉收缩情况,所述眼部包括眼周和眉毛。
可选的,所述眼部反应数据包括:眼部视频或眼部图像;所述脑部反应数据还包括前脑皮层信号和皮肤电信号中的至少一种;所述脑部反应参数值还包括:前脑皮层参数值和皮肤电参数值中的至少一种;其中,所述前脑皮层参数值包括各前脑皮层参数对应的参数值,所述皮肤电参数值包括各皮肤电参数对应的参数值。
一种识别系统,包括采集装置和中央控制系统;所述中央控制系统至少包括识别装置;其中:
所述采集装置用于:采集使用者的脑部反应数据;
所述识别装置用于:对所述脑部反应数据进行数据处理,得到脑部反应参数值;其中,所述脑部反应数据至少包括眼部反应数据;所述脑部反应参数值至少包括眼部反应参数值;所述眼部反应参数值包括各眼部反应参数对应的参数值;
将所述脑部反应参数值输入机器学习模型,由所述机器学习模型输出情感认知的识别结果。
可选的,所述中央控制系统还包括云端或后台;所述识别装置还用于:上传情感认知的识别结果以及相应的脑部反应参数值至所述云端或后台;上传的脑部反应参数值将在训练过程中作为训练样本或测试样本,所述情感认知的识别结果用于标记相应的训练样本或测试样本;或者,上传情感认知的识别结果以及相应的脑部反应数据至云端或后台;上传的脑部反应数据用于生成训练过程中的训练样本或测试样本,所述情感认知的识别结果用于标记相应的训练样本或测试样本;所述云端或后台用于:使用带标签的训练样本和测试样本对机器学习模型进行训练;优化后的机器学习模型将同步至所述识别装置。
可选的,所述采集装置包括智能终端上的摄像装置,所述识别装置具体为所述智能终端;或者,所述采集装置包括:具有眼部摄像功能的可穿戴式装置;所述识别装置为智能终端。
可选的,所述可穿戴式装置包括:采集眼部反应数据的摄像装置;采集前脑皮层信号的前脑皮层信号传感器,以及,采集皮肤电信号的皮肤电信号传感器。
可选的,所述可穿戴式智能设备为智能眼镜;所述摄像装置为微型电子摄像头;其中:所述微型电子摄像头设置在所述智能眼镜的镜片与镜柄交界处;所述皮肤电信号传感器设 置在镜柄内侧和耳朵接触的部位;所述前脑皮层信号传感器设置在所述镜柄的中部。
可选的,所述智能眼镜的镜腿的后内侧为柔性生物电极,所述智能眼镜的两个鼻托为柔性生物电极。
一种智能终端,包括:
获取单元,用于获取使用者的脑部反应数据;
识别单元,用于:
对所述脑部反应数据进行数据处理,得到脑部反应参数值;其中,所述脑部反应数据至少包括眼部反应数据;所述脑部反应参数值至少包括眼部反应参数值;所述眼部反应参数值包括各眼部反应参数对应的参数值;
将所述脑部反应参数值输入机器学习模型,由所述机器学习模型输出情感认知的识别结果。
一种可穿戴式智能设备,包括:
采集眼部反应数据的摄像装置;
采集前脑皮层信号的前脑皮层信号传感器,以及,采集皮肤电信号的皮肤电信号传感器。
可选的,还包括:数据输出装置。
可选的,还包括:健康指数监测仪。
可选的,所述可穿戴式智能设备为智能眼镜;所述摄像装置为微型电子摄像头;其中:所述微型电子摄像头设置在所述智能眼镜的镜片与镜柄交界处;所述皮肤电信号传感器设置在镜柄内侧和耳朵接触的部位;所述前脑皮层信号传感器设置在所述镜柄的中部。
可选的,所述智能眼镜的镜腿的后内侧为柔性生物电极,所述智能眼镜的两个鼻托为柔性生物电极。
可选的,还包括:机械休眠开关或定时开关;所述机械休眠开关或定时开关设置在所述智能眼镜的镜腿与镜框连接处。
可选的,还包括:触摸屏;所述触摸屏设置在所述镜柄外侧。
可选的,还包括:可充电电池。
可选的,所述数据输出装置包括蓝牙芯片,所述蓝牙芯片内置于任一镜柄中。
可选的,所述数据输出装置包括WiFi芯片。
一种存储介质,所述存储介质存储有多条指令,所述指令适于处理器进行加载,以执行上述的识别方法中的步骤。
一种芯片系统,所述芯片系统包括处理器,用于支持所述识别装置或所述智能终端执行上述的识别方法。
在本申请实施例中,在采集到脑部反应数据后(主要是眼部反应数据),会对其进行数据处理,得到包含眼部反应参数值的脑部反应参数值,将脑部反应参数值输入机器学习模型,由人工智能算法基于脑部反应参数值进行分析,得到情感认知的识别结果,从而实现了对人类的情感认知的智能化识别。
同时,需要说明的是,人类大脑获取的信息大约80%来自于视觉系统,因此,大脑的情感和认知状况可通过视觉系统处理视觉信号输入的状态和水平得以判断。尤其是通过眼睛和嘴巴为主的人脸表情去判断人们的心理和精神状态,是人类交流时都在自动做的事 情,因此通过人工智能算法基于包含眼部反应参数值的脑部反应参数值,去识别情感认知状态是科学可行的。
附图说明
图1a为本申请实施例提供的识别系统结构示例图;
图1b为本申请实施例提供的智能终端示例性结构图;
图2、3、6、7、9为本申请实施例提供的识别方法示例性流程图;
图4为本申请实施例提供的数据上传示意图;
图5为本申请实施例提供的训练过程示例性流程图;
图8为本申请实施例提供的智能眼镜示例性结构图。
具体实施方式
本发明提供识别方法及相关装置(例如识别系统、智能终端、存储介质、可穿戴式智能设备、芯片系统等),以在多种场景下对人类的情感认知进行智能化、科学性地识别。
请参见图1a,上述识别系统可包括:采集装置101和中央控制系统,其中的中央控制系统至少包括识别装置102。
上述识别系统所执行的识别方法的核心思想是:在采集到脑部反应数据后(主要是眼部反应数据),会对其进行数据处理,得到包含眼部反应参数值的脑部反应参数值,将脑部反应参数值输入机器学习模型,由人工智能算法基于脑部反应参数值进行分析,得到情感认知的识别结果。
在一个示例中,采集装置101可包括智能终端的高像素摄像头,识别装置102可包括该智能终端。
图1b示出了上述智能终端的一种示例性结构,包括:
获取单元1021:用于获取使用者的脑部反应数据;
识别单元1022:用于对上述脑部反应数据进行数据处理,得到脑部反应参数值,并将脑部反应参数值输入机器学习模型,由机器学习模型输出情感认知的识别结果。
具体的,眼部反应数据可由智能终端的高像素摄像头采集(也即,获取单元1021具体可包括高像素摄像头)。智能终端包括但不限于智能手机、ipad、笔记本电脑等。
除眼部反应数据外,脑部反应数据还可包括前脑皮层信号和皮肤电信号中的至少一种。其中,前脑皮层信号,可由前脑皮层信号传感器采集,而皮肤电信号可由皮肤电信号传感器采集。前脑皮层信号传感器和皮肤电信号传感器可安装在可穿戴设备上(例如智能眼镜),由可穿戴设备通过无线方式传输给智能终端。则在此种场景下,上述获取单元1021还可包括无线接收装置,以获取可穿戴设备传输的数据。
智能终端在硬件上可包含处理器和存储介质。其中存储介质上存储有多种指令,该指令适于处理器加载,处理器加载存储介质中的指令后可实现识别单元1022的功能:对上述脑部反应数据进行数据处理,得到脑部反应参数值,并将脑部反应参数值输入机器学习模型,由机器学习模型输出情感认知的识别结果。
从使用者角度看,可由安装在智能终端的应用软件(例如APP)来实现识别单元1022的功能。
在另一个示例中,采集装置101可包括可穿戴设备上的微型摄像头,而识别装置102可包括智能终端。此外,可穿戴设备上还可设置前脑皮层信号传感器和皮肤电信号传感器中的至少一种。
具体的,上述的眼部反应数据可由可穿戴设备上的微型摄像头采集,前脑皮层信号和皮肤电信号也可由可穿戴设备上采集。可穿戴设备将采集到的数据通过蓝牙、WiFi等无线方式传输至智能终端,由智能终端来执行后续步骤。
则在此场景下,智能终端的获取单元1021包括无线接收装置,以获取可穿戴设备传输的数据。而前述的识别单元1022的功能,可由智能终端的处理器加载存储介质中的指令实现。
从使用者角度看,可由安装在智能终端的应用软件(例如APP)来实现识别单元1022的功能。
在本申请其他实施例中,上述采集装置还可包括各类健康指数监测仪,从而可将脑部反应数据范围扩大到涵盖各类健康指数数据,从而可采用更全面的数据得到更为精准的识别结果。
由于识别系统所包括的采集装置和识别装置操作起来方便快捷,也便于携带或佩载,因此可用于对大脑的情感认知进行长期监测或即时评估。
举例来讲,上述识别系统可用于评估、监测甚至预测一些脑认知相关的疾病,从而可用于慢性精神疾病的长期监测和护理。当然,也可用于对精神疾病病情恶化或发作进行即时监测。识别系统甚至可以输出干预调节措施和建议,例如但不限于输出呼吸调整建议、音乐治疗措施/建议、光感治疗措施/建议或认知行为治疗措施/建议等。
至于即时监测,除前述提及的,对精神疾病病情恶化或发作进行即时监测,上述识别系统也可即时评估和监测使用者的注意力和疲劳度,输出干预调节措施和建议,以提醒使用者健康、科学地用脑,从而提高使用者的工作或学习效率。
此外,上述识别系统还可用于监测使用者观看商业广告的心理反应(即商品广告效应的检测)、是否疲劳驾驶、进行心理测谎等。
使用者还可输入当前场合(场景)或即将进入的场合(场景),识别系统可结合场合给出针对性的建议。例如,使用者输入面试场合,若识别出使用者当前的注意力不够集中,可提醒使用者集中注意力。
因此,本申请提供的识别方法和相关装置具有相当广泛的使用前景。
下面将基于以上描述中本申请涉及的共性方面,对本申请实施例做进一步详细说明。
图2示出了上述识别方法的一种示例性流程,包括:
S1:采集装置采集使用者的脑部反应数据。
在一个示例中,上述脑部反应数据至少包括眼部反应数据,眼部反应数据可进一步包括眼部视频,或从眼部视频中提取的图片数据(可称为眼部图像)。
除眼部反应数据外,在另一个示例中,上述脑部反应数据还可包括前脑皮层信号和皮肤电信号中的至少一种。
其中,前脑皮层信号具体可为通过EEG(Electroencephalogram,脑电波)技术获得的EEG信号;而皮肤电信号具体可为通过PPG(Photoplethymography,光电容积描记法)技 术获得的PPG信号。
前脑皮层信号携带了前脑信息,眼部反应数据携带了中脑信息,而皮肤电信号携带了后脑信息,若使用来自中脑、前脑以及后脑的大脑综合信息参与后续的识别步骤,有利于全面、精准和即时解读人类大脑的情感和认知状态。
S1可由前述的获取单元1021或采集装置101执行。
S2:识别装置对上述脑部反应数据进行数据处理,得到脑部反应参数值。
在一个示例中,脑部反应数据至少包括眼部反应数据,相应的,脑部反应参数值至少包括眼部反应参数值。
眼部反应参数值则包括各眼部反应参数对应的参数值,眼部反应参数示例性地包括但不限于以下一种或多种:
(1)眼睛的对比度和亮度;
眼睛的对比度可具体指:眼白(巩膜部分)跟眼珠(虹膜部分)的对比度。
眼睛的亮度会受眼内毛细血管血液状态影响,例如,毛细血管充血,则亮度会相对不充血时暗。
(2)眼球运动的速度、方向和频率;
具体的,眼球运动频率可包括眼球上下、左右运动的频率。
(3)瞳孔反应的幅度和速度;
这里的瞳孔反应包括:瞳孔收缩或变大。
(4)瞳孔间距;
(5)眨眼的速度、幅度和频率;
(6)眼部(包括眉毛部位)的肌肉收缩情况。
举例来讲,微笑和皱眉时所使用的眼部肌肉是明显不同,因此,可使用眼部肌肉收缩情况来分析人类的情感和认知。更具体的,眼部的肌肉收缩情况可使用计算机视觉的点阵变化表示。
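To make the parameter extraction above concrete, the following minimal Python sketch shows one possible container for the eye reaction parameter values listed in (1) to (6); the field names and units are illustrative assumptions, not terms defined by this application. A vector in this fixed order is what would then be handed to the machine learning model.

    from dataclasses import dataclass, asdict

    @dataclass
    class EyeReactionParams:
        # (1) contrast between sclera and iris, and overall eye brightness
        contrast: float
        brightness: float
        # (2) eye movement speed, direction and frequency
        movement_speed: float
        movement_direction: float
        movement_frequency: float
        # (3) pupil response amplitude and speed
        pupil_amplitude: float
        pupil_speed: float
        # (4) interpupillary distance
        pupil_distance: float
        # (5) blink speed, amplitude and frequency
        blink_speed: float
        blink_amplitude: float
        blink_frequency: float
        # (6) periocular/eyebrow muscle contraction, e.g. mean displacement
        #     of a computer-vision landmark grid between frames
        muscle_contraction: float

        def to_vector(self):
            """Flatten the parameter values into a fixed order for the model."""
            return list(asdict(self).values())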
在另一个示例中,若脑部反应数据还包括前脑皮层信号和皮肤电信号,则相应的,脑部反应参数值还包括:前脑皮层参数值和皮肤电参数值中的至少一种。
其中,前脑皮层参数值包括各前脑皮层参数对应的参数值。
与之类似,皮肤电参数值包括各皮肤电参数对应的参数值,皮肤电参数可进一步包括心率、血压、温度和呼吸频率中的至少一种。
S3:识别装置使用机器学习模型(也可称为AI模型)分析上述脑部反应参数值,得到情感认知的识别结果。
在一个示例中,步骤S2-S3可由前述的识别单元1022执行。
上述机器学习模型可包括深度学习模型。深度机器学习方法也有监督学习与无监督学习之分。不同的学习框架下建立的学习模型很是不同。例如,卷积神经网络(Convolutional neural networks,简称CNNs)就是一种深度的监督学习下的机器学习模型,而深度置信网(Deep Belief Nets,简称DBNs)就是一种无监督学习下的机器学习模型。
具体的,可将脑部反应参数值输入机器学习模型,由机器学习模型输出情感认知的识别结果。
除了脑部反应参数值,还可输入双眼竞争等参数。双眼竞争参数可由其他辅助装置采集。
情感认知的识别结果进一步可包括识别出的情感认知类型及相应的状态信息。
在一个示例中,情感认知类型可包括情感类型和认知类型中的至少一种。
其中:情感类型至少包括:“心情”和“疲劳度”子类型。
进一步的,“心情”子类型示例性地包括:高兴、悲伤、恐惧、兴奋、抑郁、焦虑或忧伤等心情子类型(或称为子心情),可以使用编号、二进制编码等方式对各心情子类型进行表示。
而认知类型则至少包括“注意力”和“压力”子类型。
示例性的,状态信息可包含文字描述,或者可包括分值;或者,状态信息也可同时包括分值和文字描述,而最终展示给使用者的情感认知类型也可为文字形式。
以识别结果为“注意力差”为例,其包含了识别出的情感认知类型的文字形式——“注意力”,以及状态信息的文字描述——“差”。
需要指出的是,上述“注意力差”中的状态信息是显式表达的。此外,状态信息也可为隐式或间接表达,例如“眼神充满焦虑和忧伤”,其包含了识别出的情感认知类型——“焦虑”以及“忧伤”,但其状态信息是通过“眼神充满焦虑和忧伤”来整体表达的。
可见,在本申请实施例中,在采集到脑部反应数据后(主要是眼部反应数据),会对其进行数据处理,得到包含眼部反应参数值的脑部反应参数值,将脑部反应参数值输入机器学习模型,由人工智能算法基于脑部反应参数值进行分析,得到情感认知的识别结果,从而实现了对人类的情感认知的智能化识别。
同时,需要说明的是,人类大脑获取的信息大约80%来自于视觉系统,因此,大脑的情感和认知状况可通过视觉系统处理视觉信号输入的状态和水平得以判断。尤其是通过眼睛和嘴巴为主的人脸表情去判断人们的心理和精神状态,是人类交流时都在自动做的事情,因此通过人工智能算法基于包含眼部反应参数值的脑部反应参数值,去识别情感认知是科学可行的。
下面以眼部反应参数值为例,介绍机器学习模型输出情感认知的识别结果的具体过程,其可包括如下步骤:
步骤a:机器学习模型根据眼部反应参数值和眼部反应参数阈值,识别出情感认知类型以及对应的分值。
分值可理解为评分或等级值。举例来讲,机器学习模型可识别得出:兴奋值5分;恐惧值4分等。
步骤b:机器学习模型根据情感认知类型对应的状态阈值以及分值,确定情感认知类型对应的状态信息。
前述提及了状态信息可包含文字描述,或者可包括分值;或者,状态信息也可同时包括分值和文字描述。其中的文字描述可根据状态阈值和步骤a计算出的分值确定。
仍以“注意力”为例,举例来讲,假定在步骤a中得到注意力值为x,再假定分值在状态阈值a和状态阈值b之间时,对应的文字描述为“差”。若a≤x≤b,则可确定“注意力”这一子类型对应的状态信息的文字描述为“差”。
在本申请其他实施例中,上述状态阈值中可包括极限阈值以区分正常状态和病态(正常状态、病态属于状态信息)。举例来讲,计算得到的分值不小于极限阈值,可判定处于正常状态,而低于极限阈值可判定处于病态。
此外,状态阈值还可包括病态程度阈值来进一步确定病态程度(病态程度也属于状态信息)。
同理,状态阈值还可包括正常状态程度阈值,以将正常状态划分为多个程度。举例来讲,假定“心情愉悦”对应的初始阈值是5-7分,分值位于3-4分会判定为情绪低落。则若对用户A,识别装置识别出心情状态(例如“高兴”)这一情感类型,相应的分值为4,则输出的状态信息为“情绪低落”。
其他子类型对应的状态信息的确定方式与之类似,在此不作赘述。
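As a worked illustration of steps a and b, the Python sketch below maps a recognised sub-type score to state information using a limit threshold (normal versus morbid state) and graded score ranges. The concrete numbers follow the "5-7 = pleasant mood, 3-4 = low mood" example above; everything else is an assumption rather than a fixed design.

    # Hypothetical state thresholds for the "mood" sub-type; each range is
    # (low, high, text description). Values follow the example in the text.
    MOOD_LIMIT_THRESHOLD = 3          # below this -> morbid state (assumption)
    MOOD_STATE_RANGES = [
        (3, 4, "low mood"),
        (5, 7, "pleasant mood"),
    ]

    def mood_state_info(score: float) -> dict:
        """Step b: turn the score from step a into state information."""
        if score < MOOD_LIMIT_THRESHOLD:
            return {"type": "mood", "score": score, "state": "morbid"}
        for low, high, text in MOOD_STATE_RANGES:
            if low <= score <= high:
                return {"type": "mood", "score": score, "state": text}
        return {"type": "mood", "score": score, "state": "normal"}

    # e.g. mood_state_info(4) -> {"type": "mood", "score": 4, "state": "low mood"}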
步骤c:机器学习模型输出识别出的情感认知类型及所对应的状态信息。
情感认知类型及所对应的状态信息的输出方式可是视觉输出或语音播报输出。
此外,机器学习模型还可输出反映眼部状态的描述,例如,其可输出“双眼无神灰暗”、“眉头紧锁”之类的描述来反映眼部状态。
由于个体差异,识别系统初期输出的识别结果未必与使用者的真实情况相符。则请参见图3,在步骤S3之后,上述识别方法还可包括如下步骤:
S4:识别装置接收使用者输入的校正数据;
在展示识别结果后或展示识别结果的同时,上述识别装置可提供用于校正的人机交互界面,以便于使用者人工输入校正数据。
在一个示例中,上述校正数据可用于校正情感认知类型、状态信息和状态阈值中的至少一项。
以校正情感认知类型为例,若使用者眼部流出泪水,机器学习模型可能识别出的情感类型具体为“悲伤”。但若使用者是“喜极而泣”的情况,则使用者可将识别出的情感类型更改为“快乐”、“高兴”等。
具体的,使用者可向人机交互界面中手工输入文字,由系统将文字转化为相应的类型。
当然,考虑到不同使用者对情感的描述可能不尽相同,为了方便统一处理,人机交互界面中可提供多个情感认知类型选项,使用者对其中一项或几项选中即可。
以校正“状态信息”为例,假定系统向使用者展示的是“心情不错”,但使用者可能觉得自己当前的心情一般,有校正的需求。针对此种情况,使用者可向人机交互界面中手工输入文字“一般”。
当然,考虑到不同使用者对同一状态可能有不同的描述,为了方便统一处理,针对一情感认知类型,人机交互界面可提供多个状态信息供使用者选择,使用者对其中一项选中即可实现输入。
以校正分值为例,若系统向使用者展示了分值,使用者可能有校正其的需求。针对此种情况,使用者可向人机交互界面中手工输入具体的分值,或者,人机交互界面可提供多个分值供使用者选择,使用者对其中一项选中即可实现输入。
S5:识别装置使用校正数据修正情感认知类型、状态信息以及状态阈值中的至少一种。
在一个示例中,S4-S5可由前述的识别单元1022执行。
前述提及了,状态信息可进一步包括文字描述和分值中的至少一种。
以文字描述为例,由于文字描述是根据状态阈值确定的,所以校正状态信息中的文字描述,实际修正的是文字描述与状态阈值间的对应关系,或者可理解为:最终修正的是状态阈值。
举例来讲,假定“心情愉悦”对应的初始阈值是5-7分,分值位于3-4分会判定为情绪低落。则若对用户A,识别装置识别出“高兴”这一情感类型,相应的分值为4,则输出的状态信息为“情绪低落”。
若用户A将文字描述由“情绪低落”校正为“心情愉悦”,则可将“心情愉悦”对应的状态阈值修改为4-7分。
需要说明的是,状态阈值中的极限阈值和病态程度阈值,一般不使用校正数据进行修正。
可见,在本实施例中,可根据使用者输入的校正数据来修正情感认知类型、状态信息以及状态阈值,从而令识别结果与个体更相贴切,更精准。
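A minimal sketch of the correction described above, assuming state thresholds are stored as (low, high, text) ranges as in the previous sketch: when the user relabels a score of 4 from "low mood" to "pleasant mood", the lower bound of the "pleasant mood" range is extended to cover that score, matching the 5-7 to 4-7 example. Limit and morbidity thresholds are deliberately not touched here.

    def apply_state_correction(ranges, score, corrected_text):
        """Adjust the state threshold range so that `score` maps to `corrected_text`.

        `ranges` is a list of (low, high, text) tuples; limit/morbidity
        thresholds are managed elsewhere and are not modified by corrections.
        """
        updated = []
        for low, high, text in ranges:
            if text == corrected_text:
                low, high = min(low, score), max(high, score)
            updated.append((low, high, text))
        return updated

    # apply_state_correction([(3, 4, "low mood"), (5, 7, "pleasant mood")], 4, "pleasant mood")
    # -> [(3, 4, "low mood"), (4, 7, "pleasant mood")]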
前述提及了,可根据眼部反应参数值和眼部反应参数阈值,识别出情感认知类型以及对应的分值。在实际中,不同个体其眼部的形状、大小,眨眼最高、最低频率是有差异的。
因此,仍请参见图3,在上述步骤S3之后,上述识别方法还可包括如下步骤:
S6:识别装置使用使用者的眼部反应数据,对眼部反应参数阈值进行个性化修正。
在一个示例中,S6可由前述的识别单元1022执行。
具体的,可通过一段时间(例如几天、一周等)内采集眼部反应数据来提取出使用者的用眼习惯(如本身的眨眼最高、最低频率等)、瞳孔大小等,来修正机器学习模型中的参数阈值。
或在数据处理中,对眼部反应参数值(例如瞳孔间距、眼高、眼宽、虹膜颜色等)进行尺寸变换等。
可见,在本实施例中,可根据使用者的眼部反应数据来修正参数阈值,从而令识别结果与个体更相贴切,更精准。
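One way the personalisation described above could look in code, as a sketch only: blink-frequency statistics collected over several days are used to re-map generic (population-level) blink-frequency thresholds onto the individual's own observed range. The population range used in the mapping is an assumed placeholder, not a value given by this application.

    def personalize_blink_thresholds(daily_blink_freqs, generic_low, generic_high,
                                     pop_low=0.1, pop_high=0.6):
        """Re-scale generic blink-frequency thresholds (Hz) to one user's own range.

        `daily_blink_freqs` holds blink frequencies measured over a few days;
        `pop_low`/`pop_high` are an assumed population range for the mapping.
        """
        user_min, user_max = min(daily_blink_freqs), max(daily_blink_freqs)
        span = (user_max - user_min) or 1e-6

        def rescale(x):
            # Simple linear re-mapping onto the user's observed range (assumption).
            ratio = (x - pop_low) / (pop_high - pop_low)
            return user_min + ratio * span

        return rescale(generic_low), rescale(generic_high)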
在本申请其他实施例中,前述的中央控制系统除识别装置外,还可包括云端或后台。
在一个示例中,识别装置可上传情感认知的识别结果以及相应的脑部反应参数值至云端或后台。
在另一个示例中,识别装置可上传情感认知的识别结果以及相应的脑部反应数据至云端或后台。
识别装置可定期上传脑部反应参数值/脑部反应数据。更具体的,识别装置可直接定期自动上传脑部反应参数值/脑部反应数据,或在使用者授权后,定期自动上传脑部反应参数值/脑部反应数据。
云端或后台会在保护用户隐私的情况下整合海量数据,使用上传的脑部反应参数值/脑部反应数据生成训练样本或测试样本,来训练机器学习模型,以及进行参数(例如前述的参数阈值、状态阈值)优化;而上传的情感认知的识别结果可用于标记相应的训练样本或测试样本;最后,优化后的机器学习模型将同步至识别装置。
在同步至识别装置后,识别装置可能会重新对眼部反应参数阈值进行个性化修正,以及使用校正数据修正情感认知类型、状态信息以及状态阈值等。
识别装置与云端或后台的交互流程可参见图4。
上述机器学习模型可基于训练样本进行训练得到的,在机器学习模型训练结束后,还会测试训练出的机器学习模型是否满足预期性能要求(包含识别精度和识别速度方面的要求),若不满足,会进行相应的调整,直至满足预期性能要求。
需要说明的是,第一次训练可得到机器学习模型,后续的训练可实现对机器学习模型的优化。
下面重点介绍机器学习模型的训练过程。请参见图5,由云端或后台服务器执行的机器学习模型的训练过程可至少包括如下步骤:
S501:获取样本。
任一样本可包括来自健康个体或病患个体的脑部反应参数。上述病患种类包括但不限于自闭症、抑郁症、阿兹海默症、亨廷顿症、精神分裂症、创伤后遗症。
当然,对于识别装置上传的是脑部反应数据的情况,还会对脑部反应数据进行数据处理,得到训练样本。
S502:对上述样本进行标记。
需要说明,数据源建立初期,可人工对训练样本进行标记,作为机器学习模型的先验知识。而在后期,特别是机器学习模型正式投入使用后,可根据情感认知的识别结果自动标记。
所谓的标记,可指为训练样本添加一个或多个标签。例如,可添加心情状态信息标签、疲劳度状态信息标签、注意力状态信息标签和压力状态信息标签。
上述几类标签的内容包括:情感子类型或认知子类型,以及相应的状态信息。
此外,还可添加表征样本是来自健康个体还是病患个体的标签(更具体的,可以“0”表示健康,以“1”表示病患)。对于病患个体的样本还可进一步添加病情标签,甚至可添加医生的诊断报告作为标签。
S503:使用标记后的样本构成训练样例集和测试样例集。
在一个示例中,可将任一标记后的样本放入训练样例集或测试样例集。其中,训练样例集中的样本用于训练机器学习模型,可称为训练样本,而测试样例集中的样本用于对机器学习模型进行测试,可称为测试样本。
S504:使用训练样例集训练机器学习模型。
具体的,可将训练样例集中的训练样本作为输入进行训练。
示例性的,上述机器学习模型可为神经网络算法模型,例如CNN(Convolutional Neural Network,卷积神经网络)模型。
S505:使用测试样例集测试机器学习模型的诊断性能。
具体的,是将测试样例集中的测试样本输入机器学习模型,根据机器学习模型的输出来统计其诊断性能。
其中,模型的诊断性能可包括识别精度和识别速度。
可令CNN结合GAN(Generative adversarial networks,生成式对抗网络)进行测试,在此不作赘述。
S506:若机器学习模型不满足预设条件,执行下述操作中的一种或多种,并重新训练(返回S501):
重新选定眼部反应参数的种类;
调整机器学习模型的权重值;
调整状态阈值;
调整标签的类型和内容中的至少一种。
在一个示例中,上述预设条件可包括:机器学习模型的识别精度不低于识别精度阈值(95%或98%等),且识别速度不低于速度阈值(例如10秒),以此得到测试识别精度和识别速度兼顾的机器学习模型。
其中,识别精度阈值和速度阈值可根据不同的需要而设定,例如,可设定识别精度阈值为95%,设定速度阈值为10秒处理1000个样本。
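A highly simplified sketch of the train-test-adjust loop described above, assuming a scikit-learn-style model object with fit/predict methods; the preset condition follows the example figures in the text (accuracy of at least 95%, and 1000 test samples processed within 10 seconds), and the adjust callback stands in for reselecting eye reaction parameters, adjusting weights, state thresholds, or labels.

    import time

    ACCURACY_THRESHOLD = 0.95        # e.g. 95 %
    SPEED_THRESHOLD = 1000 / 10      # e.g. 1000 test samples per 10 seconds

    def evaluate(model, test_samples, test_labels):
        """Test recognition accuracy and recognition speed on the test set."""
        start = time.time()
        predictions = [model.predict(x) for x in test_samples]
        elapsed = max(time.time() - start, 1e-9)
        accuracy = sum(p == y for p, y in zip(predictions, test_labels)) / len(test_labels)
        speed = len(test_samples) / elapsed
        return accuracy, speed

    def train_until_qualified(model, train_set, test_set, adjust):
        """Retrain until the preset accuracy/speed condition is met."""
        while True:
            model.fit(*train_set)                         # train on labelled samples
            accuracy, speed = evaluate(model, *test_set)  # test diagnostic performance
            if accuracy >= ACCURACY_THRESHOLD and speed >= SPEED_THRESHOLD:
                return model
            adjust(model)  # reselect parameters, adjust weights/thresholds/labels, then retrain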
在机器学习模型投入使用后,还会持续对其进行训练,以对机器学习模型进行优化。
下面将介绍使用不同物理实体来执行的识别方法。
请参见图6,先介绍由智能终端的高像素摄像头采集眼部反应数据,由安装在智能终端的APP软件进行数据处理和输出情感认知的识别结果的实施例,其具体包括如下步骤:
S601:智能终端(的高像素摄像头)采集眼部反应数据。
在本实施例中,眼部反应数据具体为眼部视频,也可以是由此衍生出来的图片数据。使用者可手持智能终端,令摄像头对准双眼,距离眼睛大约30-40厘米,使用者注视摄像头拍摄一段视频。
需要说明的是,眼部视频应为高清晰度(400万以上像素)的视频,甚至可看到眼睛瞳孔部位反射出来的物体。
前述提及了可通过一段时间(例如几天、一周等)内采集眼部反应数据来提取出使用者的用眼习惯(如本身的眨眼最高、最低频率等)、瞳孔大小等,来修正机器学习模型中的参数阈值(个性化修正)。
这种个性化修正一般会在识别系统的使用初期(或机器学习模型优化后最初的一段时间内)进行。
为了更好地进行个性化修正,在一个示例中,可使用智能终端自带的摄像头定期摄取眼部视频。例如,每天摄取两次,每次采集大约1分钟的眼部视频,以分别摄取每天最佳状态和最差状态时的眼部视频。
更具体的,可在早上起床后约一小时摄取每天最佳状态时的眼部视频,在临近下班摄取每天最差状态时的眼部视频。
在持续三四天后,基本可建立以天为周期的自身情感和认知的动态变化值。
在一个示例中,智能终端可在两个工作模式间切换:固定时间记录模式和非固定时间记录模式。
在固定时间记录模式下,可如前述所提及的那样,每天固定两次摄取眼部视频;而在非固定时间记录模式下,可依使用者的操作随时随地摄取眼部视频。
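The two recording modes above can be read as a simple schedule; the sketch below uses assumed clock times (about one hour after getting up and near the end of the working day) and the roughly one-minute clip length mentioned earlier.

    FIXED_TIME_SCHEDULE = {
        "best_state":  {"time": "08:00", "duration_s": 60},   # ~1 h after getting up (assumed)
        "worst_state": {"time": "17:30", "duration_s": 60},   # close to end of work (assumed)
    }

    def should_record(now_hhmm: str, mode: str) -> bool:
        """Fixed-time mode records only at scheduled times; non-fixed-time mode
        records whenever the user asks."""
        if mode == "non_fixed":
            return True
        return any(slot["time"] == now_hhmm for slot in FIXED_TIME_SCHEDULE.values())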
S602:智能终端的语音采集装置采集语音数据。
语音采集装置具体可为麦克风。
语音数据的内容可包括使用者对“心情”、“疲劳度”、“注意力”和“压力”中的至少一种的具体状态描述。
示例性的,使用者可说“我好开心”,“感觉压力好大”,“好累啊”,“大脑已经变成一团浆糊”等。
或者,语音数据的内容也可是对情感类型或认知类型的自我评分。例如,使用者可语音输入“压力7分”等。
S603:智能终端的APP对语音数据进行识别,得到语音识别结果。
需要说明的是,语音识别结果可用于生成训练过程中使用的心情状态信息标签、疲劳度状态信息标签、注意力状态信息标签和压力状态信息标签中的至少一种。再加上健康人群和病患人群如抑郁症患者的横向数据比对,可训练出机器学习模型的智能分类的功能。
需要说明的是,S602和S603可在识别系统的使用初期(或机器学习模型优化后最初的一段时间内)执行,并不需要在每一次的识别过程均执行。
在过了初期使用之后,后期基于人工智能的算法优化,可以算出使用者任何时刻的精神认知状态,而且只要即时提供30秒的眼部视频即可。
S604:智能终端的APP对眼部反应数据进行数据处理,得到眼部反应参数值。
详细内容请参见前述介绍,在此不作赘述。
在具体实现时,可将数据处理跟智能终端的人脸识别功能整合在一起,识别出眼部反应参数值。
此外,还可借助智能终端的角度传感器、距离传感器来确定摄像头与眼睛的角度和距离,进而根据确定的角度和距离来推算出眼睛的实际大小,以还原不同距离、不同角度下的采集图像比例,或对眼部反应参数值进行尺寸变换等。
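A minimal pinhole-camera sketch of the scale restoration described above: with the camera-to-eye distance from the distance sensor and the camera focal length expressed in pixels, a measurement in pixels can be converted to a physical size, and a cosine term roughly compensates for the viewing angle reported by the angle sensor. The pinhole model and the known focal length are simplifying assumptions.

    import math

    def pixel_to_mm(pixel_size, distance_mm, focal_px, angle_deg=0.0):
        """Convert an eye measurement in pixels to millimetres.

        pixel_size  -- measured size in the image, in pixels
        distance_mm -- camera-to-eye distance from the distance sensor
        focal_px    -- camera focal length expressed in pixels (assumed known)
        angle_deg   -- viewing angle from the angle sensor; foreshortening is
                       roughly undone by dividing by cos(angle)
        """
        size_mm = pixel_size * distance_mm / focal_px
        return size_mm / max(math.cos(math.radians(angle_deg)), 1e-6)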
S605:APP使用机器学习模型分析上述眼部反应参数值,得到情感认知的识别结果并展示。
输出方式可是视觉输出或语音播报输出。
关于如何得到情感认知结果请参见本文前述记载,在此不作赘述。
S606:APP提示是否手工校正,若使用者选择“是”,进入步骤S607,否则进入步骤S609。
在一个示例中,在展示识别结果后或展示识别结果的同时,上述识别装置可提供人机交互界面,以提示是否手工校正。
S607:接收使用者输入的校正数据。
校正数据可包括情感认知类型和状态信息中的至少一种,而状态信息又可进一步包括文字描述和分值中的至少一种。
S607与前述S4相类似,在此不作赘述。
S608:使用校正数据修正情感认知类型、状态信息以及状态阈值中的至少一种,至S609。
需要说明的是,若使用校正数据修正了情感认知类型、状态信息以及状态阈值中的至少一种,则步骤S605中得到的识别结果也会随之修正。
S608与前述S5相类似,在此不作赘述。
S609:使用使用者的眼部反应数据,对眼部反应参数阈值进行个性化修正。
具体的,可通过一段时间(例如几天、一周等)内采集眼部反应数据来提取出使用者的用眼习惯(如本身的眨眼最高、最低频率等)、瞳孔大小等,来修正机器学习模型中的参数阈值。
S603-S609可由前述的识别单元1022执行。
S610:定期上传眼部反应数据和相应的识别数据至云端或后台。
需要说明的是,识别数据可包括情感认知的识别结果和语音识别结果中的至少一种。
在上传前,可对识别结果进行脱敏处理,以过滤掉敏感信息,敏感信息示例性地包括但不限于:姓名、年龄、居住地、身份证号、联系方式、邮箱等。
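A minimal sketch of the desensitisation step before upload: records are stripped of sensitive fields such as those listed above. The field names are illustrative assumptions.

    SENSITIVE_FIELDS = {"name", "age", "residence", "id_number", "phone", "email"}

    def desensitize(record: dict) -> dict:
        """Return a copy of the record with sensitive fields removed before upload."""
        return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}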
上传的数据可用于机器学习模型的训练,识别数据可用于生成标签。训练具体过程以及标签的相关介绍请参见本文前述介绍,在此不作赘述。
云端或后台与用户侧的互动请参见图7。
S610可由前述的识别单元1022执行。
此外,在本申请其他实施例中,用户侧也可仅用于采集数据、展示识别结果以及提供人机交互界面,数据处理、识别、使用校正数据修正情感认知类型、状态信息以及状态阈值、个性化修正等,可由云端或后台实现。
此外,上述APP还可基于健康大数据的算法基础,计算出基于个人数据的身心状态,并提出具有个体针对性的建议以及有益的干预策略,例如提供呼吸调整建议、休息建议、播放音乐等。
上述实施例基于智能终端,结合了现有的高分辨率摄像技术、成熟的计算视觉技术以及先进的人工智能算法来实现对人类情感和认知的识别。由于智能终端的使用范围广泛,因此具有大众普适性,可用于即时评估使用者的情绪、注意力、疲劳度等,适应于大众(例如上班族)调节压力,科学用脑,维持身心健康和平衡。
下面,介绍基于智能穿戴式设备所执行的识别方法。在本实施例中,由智能穿戴式设备(例如智能眼镜)的微型摄像头采集眼部反应数据,并通过无线传输方式传输给智能终端,由安装在智能终端的APP软件进行数据处理和输出情感认知的识别结果。
当然,在识别系统包括后台或云端的情况下,智能穿戴式设备和智能终端都属于用户侧。
以智能眼镜为例,请参见图8,在智能眼镜的镜片两侧,即镜片跟镜柄交界处各有一个微型电子摄像头801。
微型电子摄像头801可持续近距离摄取眼部视频。当然,智能眼镜可在两个工作模式间切换:固定时间记录模式和持续记录模式。
当工作在固定时间记录模式,微型电子摄像头801可每天固定两次摄取眼部视频,具体介绍请参见前述S601的记载;而当工作在持续记录模式下,微型电子摄像头801将持续摄取眼部视频。
微型电子摄像头801可具有双向摄影的镜头。
除微型摄像头801外,在镜柄内侧和耳朵接触的部位还可设置皮肤电信号传感器(例如对皮肤生物电敏感的电子元件)。
皮肤电信号传感器具体可为PPG传感器,其采用PPG技术采集跟自主神经系统相关的 PPG数据,包括心率、血压和呼吸频率等。PPG技术多采用绿光或红光能作为测量光源。PPG传感器进一步包括LED灯802,以及光电传感器。
上述LED灯具体可包括红光LED与红外LED灯,当然,也可将红光LED与红外LED灯替换为绿光LED灯。
此外,镜柄中间段可设置前脑皮层信号传感器(例如对脑电信号敏感的元件),前脑皮层信号传感器具体可为EEG传感器,其采用EEG技术收集相关脑电波信号。
目前EEG信号的精准度主要取决于导线的数量,市场上的EEG产品很多只有2根导线(简称2导),很难提高准确率,以达到医学要求。
可选的,智能眼镜的两镜腿与鼻托均为柔性生物电极设计,具体的,镜腿的后内侧为柔性生物电极803,两个鼻托804为柔性生物电极。此种设计在保证眼镜的舒适性同时,保证了电信号的3导设计,而多导相对于单导和双导设计来说,可以有效降低噪声干扰、提升采集信号的精度和后续算法的准确度。
除了结合EEG和PPG技术外,也可以再结合其它相关技术,比如但不限于EOG(眼动电图)、ERG(视网膜电图)、EMG(肌电图)等技术。
可选的,在智能眼镜的镜腿与镜框连接处,还可设置机械休眠开关或定时开关805。
设置机械休眠开关可实现在眼镜折叠合上后,可自动进入休眠状态。
而设置定时开关,可实现由使用者手动设置定时时间,到达定时时间进入休眠状态。
可在一侧设置机械休眠开关或定时开关,也可在两侧设置机械休眠开关或定时开关。
在镜柄外侧可设置触摸屏806,使用者可通过点击、双击、向前滑动、向后滑动等不同手势来操作不同功能。
前述提及的输入校正数据可通过使用者的不同手势来实现。
此外,在镜框与镜腿的交接处,还可设置开关感应器(开关传感器),以检测镜腿的开合状态。
上述智能眼镜包含了多种传感器,可以相互弥补不同的传感器缺陷,做到精确度的多重保障。
此外,利用上述多种传感器,还可检测使用者是否佩戴着智能眼镜:
例如,可检测镜腿的开合;显然,当检测到处于关闭状态时,使用者并未佩戴智能眼镜;
再例如,可通过双侧的PPG传感器接收红光与红外光吸收率不同来检测是否接触皮肤,因PPG传感器位于镜柄内侧和耳朵接触的部位,所以当智能眼镜放在腿上时并不会产生PPG信号,这样可避免出现眼镜放在腿上的时候被误以为仍在佩戴眼镜从而记录不实数据的情况。
可在同时检测到PPG信号和EEG信号,并且检测到镜腿处于开启状态时,判定使用者佩戴着智能眼镜。
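The wear-detection logic above reduces to a single predicate, sketched here with assumed boolean inputs: the glasses are judged to be worn only when a PPG signal and an EEG signal are both present and the temple-arm switch reports the open state.

    def glasses_are_worn(ppg_signal_present: bool,
                         eeg_signal_present: bool,
                         temples_open: bool) -> bool:
        """Judge wearing state from PPG + EEG presence and the temple open/close switch.

        Glasses resting on a leg produce no PPG (no skin contact behind the ear),
        so an open-but-unworn pair is not mistaken for a worn one.
        """
        return ppg_signal_present and eeg_signal_present and temples_open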
此外,智能眼镜还可包括物理输出装置。物理输出装置可包括电源系统和数据输出装置。其中,电源系统包括但不限于可充电电池807。在一个示例中,可充电电池807设置在眼镜柄端,一次充完电至少可以持续工作2小时。
而数据输出装置包括但不限于蓝牙或WiFi装置,具体的,可在镜柄中内置蓝牙芯片808。
此外,智能眼镜还可包括语音采集装置(例如微型麦克风)。
请参见图9,基于智能眼镜所执行的识别方法包括如下步骤:
S901:智能眼镜采集脑部反应数据。
在本实施例中,智能眼镜采集的数据包括眼部微视频、PPG和EEG数据,可通过蓝牙传送至智能终端(例如手机)。
S902:智能眼镜采集语音数据。
S902与前述的S602相类似,在此不作赘述。
语音数据也可通过蓝牙传送至智能终端。
S903:智能终端的APP对语音数据进行识别,得到语音识别结果。
S903与前述的S603相类似,在此不作赘述。
S904:智能终端的APP对脑部反应数据进行数据处理,得到脑部反应参数值。
在数据处理过程中,可通过智能眼镜上的左右摄像头来计算智能眼镜与眼睛的角度和距离,进而根据智能眼镜与眼睛的角度和距离来推算出眼睛的实际大小,以还原不同佩戴情况下的采集图像比例,或对眼部反应参数值进行尺寸变换等。
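One possible way the left and right cameras could estimate the camera-to-eye distance used in the scale restoration of S904 is by stereo triangulation, sketched below; the baseline between the two cameras and the focal length in pixels are assumed calibration constants, not values specified by this application.

    def distance_from_disparity(disparity_px, baseline_mm, focal_px):
        """Estimate camera-to-eye distance from the horizontal disparity (in pixels)
        of the same eye feature seen by the left and right cameras."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return baseline_mm * focal_px / disparity_px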
S904的相关内容请参见前述的S604和S2,在此不作赘述。
S905:智能终端的APP使用机器学习模型分析上述脑部反应参数值,得到情感认知的识别结果。
上述情感认知的识别结果可由智能终端展示。智能终端也可将识别结果传输至智能眼镜,由智能眼镜以图像或语音形式展示给使用者。
或者,可将情感认知的识别结果上传至预先设定的手机端、云端或后台。
S905与前述的S3相类似,在此不作赘述。
S906-S910与前述的S606-S610相类似,在此不作赘述。
本实施例中的智能眼镜以计算机视觉精扫眼部反应的技术为主,同时结合检测脑电信号的EEG技术、皮肤电信号的PPG技术等进行数据采集。
当然,由智能终端APP所执行的步骤也可由智能眼镜或者云端、后台来执行。
智能眼镜可长时间佩戴,从而可用于慢性精神疾病的长期监测和护理。当然,也可用于对精神疾病病情恶化或发作进行即时监测。可给常见精神疾病患者,包括抑郁症、自闭症、创伤后遗症和精神分裂症患者佩戴,以提供日常生活中及时预测、监测和干预疾病动态的功能。
智能终端还可通过智能眼镜输出干预调节措施和建议,例如但不限于输出呼吸调整建议、音乐治疗措施/建议、光感治疗措施/建议或认知行为治疗措施/建议、提出吃药建议等。
此外,基于智能眼镜的识别方法还可用于即时摄取和解读针对观察某一物体或场景的眼部应激反应。
例如,通过双向摄影的镜头,可对眼部和外部环境同时取景,从而有利于即时摄取和解读针对观察某一物体或场景的眼部应激反应。
举例来讲,可在心理测谎或商业广告效应的测试上,通过双向摄影的镜头同时摄取外部环境和眼部视频,将识别结果与外部环境一同解读,可了解佩戴者观察某一物体或场景时的情感或认知变化。
再举例,上述识别方法可在人们对特定场景和物体的应激反应的检测上使用。
例如,在战后创伤后遗症治疗中,向佩戴者展示不同的物体(例如照片)或讲述不同的事件,采用双向摄影镜头同时摄像,可建立针对不同物体与情感认知的识别结果之间的关联,判断哪一或哪些物体或事件会带来过大的反应,从而有利于医生进行有针对性的治疗。
再例如,可基于智能眼镜持续监测佩戴者在驾驶过程中是否为疲劳驾驶(根据疲劳度判断),若监测到接近疲劳驾驶或已然疲劳驾驶,可进行提示。
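A sketch of the continuous fatigue-driving check, assuming the fatigue sub-type score from the recognition result and two illustrative thresholds, one for approaching fatigue and one for fatigued driving.

    from typing import Optional

    APPROACHING_FATIGUE = 6   # illustrative threshold on the fatigue sub-type score
    FATIGUED = 8              # illustrative threshold

    def fatigue_prompt(fatigue_score: float) -> Optional[str]:
        """Return a prompt when the wearer is approaching or already in fatigue."""
        if fatigue_score >= FATIGUED:
            return "Fatigued driving detected - please stop and rest."
        if fatigue_score >= APPROACHING_FATIGUE:
            return "Approaching fatigue - consider taking a break."
        return None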
综上,本申请提供的识别系统和识别方法,结合了现有的高分辨率摄像技术、成熟的计算视觉技术以及先进的人工智能算法,以精确扫描眼部反应为主,实现精准化、智能化、即时性和科学性检测以及评估人类大脑的情感状况和认知状态,并且及时、高效以及容易操作。
而目前市面上还缺少具有相似功能的脑健康产品。以智能眼镜为例,目前市面上的智能眼镜虽然具有摄像功能,但并不用于脑认知状态的解读和监测上。
针对目前脑认知或脑疾病状态的即时解读和长期监测,传统方式上是采用脑电波检测和核磁扫描。
然而,在脑电波(EEG)检测技术方面,电极通过接触头皮接收到来自大脑皮层的微弱电信号,往往噪音很大,不容易解析清楚具体的脑电信号。脑电信号目前可以根据不同频率分出几个波段,包括δ、θ、α、β波。在人们处于睡眠状态时,脑电波的信号解析比较清楚;但是人们在白天清醒状态时,产生的脑电信号异常复杂,难于解析。尤其在人类情感和认知状态的解读上,脑电信号的研究一直难以得到重大突破。
功能性核磁扫描(fMRI)技术虽然在鉴别各脑区不同程度参与人类各种情感活动方面具有优势,但是操作很不方便,尤其还要受试者保持平躺和头部固定;并且,具体到检测人类大脑认知状态或情感障碍,目前的研究也非常有限。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。
专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及模型步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
结合本文中所公开的实施例描述的方法或模型的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本申请。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本申请的精神或范围的情况下,在其它实施例中实现。因此,本申请将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (17)

  1. 一种识别方法,其特征在于,应用于用户侧,包括:
    采集使用者的脑部反应数据;
    对所述脑部反应数据进行数据处理,得到脑部反应参数值;其中,所述脑部反应数据至少包括眼部反应数据;所述脑部反应参数值至少包括眼部反应参数值;所述眼部反应参数值包括各眼部反应参数对应的参数值;
    将所述脑部反应参数值输入机器学习模型,由所述机器学习模型输出情感认知的识别结果。
  2. 如权利要求1所述的方法,其特征在于,
    所述由所述机器学习模型输出情感认知的识别结果包括:
    所述机器学习模型根据所述眼部反应参数值和眼部反应参数阈值,识别出情感认知类型以及对应的分值;
    所述机器学习模型根据所述情感认知类型对应的状态阈值以及所述分值,确定所述情感认知类型对应的状态信息;
    所述机器学习模型输出识别出的情感认知类型及所对应的状态信息;
    所述识别结果包括:所述情感认知类型及对应的状态信息。
  3. 如权利要求2所述的方法,其特征在于,所述方法还包括:
    使用所述使用者的眼部反应数据,对所述眼部反应参数阈值进行个性化修正。
  4. 如权利要求2所述的方法,其特征在于,所述方法还包括:
    接收所述使用者输入的校正数据;
    使用所述校正数据修正情感认知类型、状态信息以及状态阈值中的至少一种。
  5. 如权利要求1-4任一项所述的方法,其特征在于,
    所述情感认知类型包括情感类型和认知类型中的至少一种;
    其中:所述情感类型包括:心情子类型和疲劳度子类型;所述认知类型包括注意力子类型和压力子类型;
    所述机器学习模型是使用带标签的训练样本进行训练的;其中,所述训练样本包括来自健康个体或病患个体的脑部反应参数;所述标签包括心情状态信息标签、疲劳度状态信息标签、注意力状态信息标签和压力状态信息标签。
  6. 如权利要求5所述的方法,其特征在于,在训练结束后,所述方法还包括:
    使用带标签的测试样本测试所述机器学习模型的识别精度和识别速度;所述测试样本;其中,所述测试样本包括来自健康个体或病患个体的脑部反应参数;所述标签包括:心情状态信息标签、疲劳度状态信息标签、注意力状态信息标签和压力状态信息标签;
    若所述机器学习模型不满足预设条件,执行下述操作中的一种或多种,并重新进行训练:
    重新选定眼部反应参数;
    调整所述机器学习模型的权重值;
    调整状态阈值;
    调整标签的类型和内容中的至少一种;
    其中,所述预设条件包括:所述机器学习模型的识别精度不低于精度阈值且识别速度不低于速度阈值。
  7. 如权利要求5所述的方法,其特征在于,还包括:
    上传情感认知的识别结果以及相应的脑部反应参数值至云端或后台;上传的脑部反应参数值将在训练过程中作为训练样本或测试样本,所述情感认知的识别结果用于标记相应的训练样本或测试样本;
    或者,上传情感认知的识别结果以及相应的脑部反应数据至云端或后台;上传的脑部反应数据用于生成训练过程中的训练样本或测试样本,所述情感认知的识别结果用于标记相应的训练样本或测试样本;
    所述云端或后台优化所述机器学习模型后,优化后的机器学习模型将同步至所述用户侧。
  8. 如权利要求1所述的方法,其特征在于,所述眼部反应参数包括以下一种或多种:
    眼睛的对比度和亮度;
    眼球运动的速度、方向和频率;
    瞳孔反应的幅度和速度;
    瞳孔间距;
    眨眼的速度、幅度和频率;
    眼部的肌肉收缩情况,所述眼部包括眼周和眉毛。
  9. 如权利要求1所述的方法,其特征在于,
    所述眼部反应数据包括:眼部视频或眼部图像;
    所述脑部反应数据还包括前脑皮层信号和皮肤电信号中的至少一种;
    所述脑部反应参数值还包括:前脑皮层参数值和皮肤电参数值中的至少一种;其中,所述前脑皮层参数值包括各前脑皮层参数对应的参数值,所述皮肤电参数值包括各皮肤电参数对应的参数值。
  10. 一种识别系统,其特征在于,包括采集装置和中央控制系统;所述中央控制系统至少包括识别装置;其中:
    所述采集装置用于:采集使用者的脑部反应数据;
    所述识别装置用于:对所述脑部反应数据进行数据处理,得到脑部反应参数值;其中,所述脑部反应数据至少包括眼部反应数据;所述脑部反应参数值至少包括眼部反应参数值;所述眼部反应参数值包括各眼部反应参数对应的参数值;
    将所述脑部反应参数值输入机器学习模型,由所述机器学习模型输出情感认知的识别结果。
  11. 如权利要求10所述的系统,其特征在于,所述中央控制系统还包括云端或后台;
    所述识别装置还用于:
    上传情感认知的识别结果以及相应的脑部反应参数值至所述云端或后台;上传的脑部反应参数值将在训练过程中作为训练样本或测试样本,所述情感认知的识别结果用于标记相应的训练样本或测试样本;
    或者,上传情感认知的识别结果以及相应的脑部反应数据至云端或后台;上传的脑部反应数据用于生成训练过程中的训练样本或测试样本,所述情感认知的识别结果用于标记相应的训练样本或测试样本;
    所述云端或后台用于:使用带标签的训练样本和测试样本对机器学习模型进行训练;优化后的机器学习模型将同步至所述识别装置。
  12. 如权利要求10或11所述的系统,其特征在于,
    所述采集装置包括智能终端上的摄像装置,所述识别装置具体为所述智能终端;
    或者,
    所述采集装置包括:具有眼部摄像功能的可穿戴式装置;所述识别装置为智能终端。
  13. 如权利要求12所述的系统,所述可穿戴式装置包括:
    采集眼部反应数据的摄像装置;
    采集前脑皮层信号的前脑皮层信号传感器,以及,采集皮肤电信号的皮肤电信号传感器。
  14. 如权利要求13所述的系统,其特征在于,所述可穿戴式智能设备为智能眼镜;所述摄像装置为微型电子摄像头;其中:
    所述微型电子摄像头设置在所述智能眼镜的镜片与镜柄交界处;
    所述皮肤电信号传感器设置在镜柄内侧和耳朵接触的部位;
    所述前脑皮层信号传感器设置在所述镜柄的中部。
  15. 如权利要求14所述的系统,其特征在于,所述智能眼镜的镜腿的后内侧为柔性生物电极,所述智能眼镜的两个鼻托为柔性生物电极。
  16. 一种智能终端,其特征在于,包括:
    获取单元,用于获取使用者的脑部反应数据;
    识别单元,用于:
    对所述脑部反应数据进行数据处理,得到脑部反应参数值;其中,所述脑部反应数据至少包括眼部反应数据;所述脑部反应参数值至少包括眼部反应参数值;所述眼部反应参数值包括各眼部反应参数对应的参数值;
    将所述脑部反应参数值输入机器学习模型,由所述机器学习模型输出情感认知的识别结果。
  17. 一种存储介质,其特征在于,所述存储介质存储有多条指令,所述指令适于处理器进行加载,以执行如权利要求1-9任一项所述的识别方法中的步骤。
PCT/CN2018/123895 2018-12-26 2018-12-26 识别方法及相关装置 WO2020132941A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/123895 WO2020132941A1 (zh) 2018-12-26 2018-12-26 识别方法及相关装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/123895 WO2020132941A1 (zh) 2018-12-26 2018-12-26 识别方法及相关装置

Publications (1)

Publication Number Publication Date
WO2020132941A1 true WO2020132941A1 (zh) 2020-07-02

Family

ID=71128414

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123895 WO2020132941A1 (zh) 2018-12-26 2018-12-26 识别方法及相关装置

Country Status (1)

Country Link
WO (1) WO2020132941A1 (zh)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104871160A (zh) * 2012-09-28 2015-08-26 加利福尼亚大学董事会 用于感觉和认知剖析的系统和方法
CN105051647A (zh) * 2013-03-15 2015-11-11 英特尔公司 基于生物物理信号的搜集时间和空间模式的大脑计算机接口(bci)系统
CN106537290A (zh) * 2014-05-09 2017-03-22 谷歌公司 与真实和虚拟对象交互的基于生物力学的眼球信号的系统和方法
CN106886792A (zh) * 2017-01-22 2017-06-23 北京工业大学 一种基于分层机制构建多分类器融合模型的脑电情感识别方法
CN108056774A (zh) * 2017-12-29 2018-05-22 中国人民解放军战略支援部队信息工程大学 基于视频刺激材料的实验范式情绪分析实现方法及其装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861957A (zh) * 2021-02-01 2021-05-28 陕西中良智能科技有限公司 一种油井运行状态检测方法及装置
CN112861957B (zh) * 2021-02-01 2024-05-03 陕西中良智能科技有限公司 一种油井运行状态检测方法及装置


Legal Events

121 Ep: The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 18944525; Country of ref document: EP; Kind code of ref document: A1.
NENP Non-entry into the national phase. Ref country code: DE.
122 Ep: PCT application non-entry in European phase. Ref document number: 18944525; Country of ref document: EP; Kind code of ref document: A1.