CN111657887B - Near-infrared shallow subcutaneous tissue imaging device and cognitive load analysis method - Google Patents

Near-infrared shallow subcutaneous tissue imaging device and cognitive load analysis method

Info

Publication number
CN111657887B
CN111657887B (application CN202010459503.0A)
Authority
CN
China
Prior art keywords
cognitive load
infrared
user
blood vessel
body part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010459503.0A
Other languages
Chinese (zh)
Other versions
CN111657887A (en)
Inventor
张腾翔
高佳圆
陈益强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN202010459503.0A priority Critical patent/CN111657887B/en
Publication of CN111657887A publication Critical patent/CN111657887A/en
Application granted granted Critical
Publication of CN111657887B publication Critical patent/CN111657887B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062Arrangements for scanning
    • A61B5/0064Body surface scanning
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/02007Evaluating blood vessel condition, e.g. elasticity, compliance
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/145Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/1455Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A61B5/14551Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/40Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088Diagnosing of monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802Sensor mounted on worn items
    • A61B5/6803Head-worn items, e.g. helmets, masks, headphones or goggles
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/725Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Cardiology (AREA)
  • Neurology (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Psychology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Child & Adolescent Psychology (AREA)
  • Mathematical Physics (AREA)
  • Vascular Medicine (AREA)
  • Quality & Reliability (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Optics & Photonics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Neurosurgery (AREA)
  • Educational Technology (AREA)
  • Social Psychology (AREA)

Abstract

The invention provides a cognitive load analysis method comprising: a model construction step of acquiring body parameters of a user and the cognitive load state corresponding to the user, and training a machine learning model according to the body parameters and the cognitive load state to obtain a cognitive load model; and a model analysis step of acquiring the current cognitive load state of the user through the cognitive load model according to the current body parameters of the user. The invention also provides a near-infrared superficial subcutaneous tissue imaging device and a data processing system that performs cognitive load analysis for the user on the near-infrared images acquired by the near-infrared superficial subcutaneous tissue imaging device.

Description

Near-infrared shallow subcutaneous tissue imaging device and cognitive load analysis method
Technical Field
The invention relates to the field of infrared imaging and image analysis of superficial subcutaneous tissues, in particular to near infrared imaging for analyzing cognitive load and emotion.
Background
Existing near-infrared imaging of subcutaneous tissue is mostly used for superficial blood vessel imaging. In the patent "Vein dynamic characteristic analysis device and method based on near-infrared spectrum technology", the authors analyze vein blood vessels through transmitted light; in the patent "Flowing blood imaging device", flowing blood is imaged by emitting near-infrared light of different wave bands, thereby improving the imaging definition of cardiovascular images. However, schemes of this type do not further relate infrared imaging to cognitive load and emotion.
Meanwhile, some technologies related to cognitive load and emotion recognition do make use of near-infrared imaging. In the patent "Training method, device, equipment and system for emotion regulation", the authors use near-infrared imaging in their system, but mainly to acquire blood-oxygen data of the trainee; in the patent "Emotion recognition apparatus and method, head-mounted display device, and storage medium", the authors use near-infrared imaging to acquire the heat of the human body. Schemes of this type do not perform a systematic analysis of the near-infrared images themselves.
The existing infrared imaging technology for the superficial subcutaneous tissue has the following problems in cognitive load and emotion recognition:
1. lack of specific analysis of infrared imaging of superficial subcutaneous tissue;
2. the information contained in the infrared images is under-exploited; most schemes use only a single kind of information (such as blood flow velocity or temperature).
Disclosure of Invention
In order to solve the above problems, the present invention provides a cognitive load analysis method, including: a model construction step of acquiring body parameters of a user and the cognitive load state corresponding to the user, and training a machine learning model according to the body parameters and the cognitive load state to obtain a cognitive load model; and a model analysis step of acquiring the current cognitive load state of the user through the cognitive load model according to the current body parameters of the user.
In the cognitive load analysis method of the present invention, the body parameter is the blood vessel diameter of the subcutaneous tissue at a body part of the user; the body parameter comprises a first blood vessel diameter w1 at a first body part of the user and a second blood vessel diameter w2 at a second body part of the user.
The model construction step comprises: acquiring a first near-infrared image of the first body part, performing Gaussian filtering, image noise reduction and homomorphic filtering on the first near-infrared image, extracting tone channel data of the first near-infrared image through a hexagonal cone model, performing image segmentation to obtain a first blood vessel image, detecting the blood vessel boundary according to directional local contrast, and obtaining the first blood vessel diameter w1; acquiring a second near-infrared image of the second body part, performing Gaussian filtering, image noise reduction and homomorphic filtering on the second near-infrared image, extracting tone channel data of the second near-infrared image through a hexagonal cone model, performing image segmentation to obtain a second blood vessel image, detecting the blood vessel boundary according to directional local contrast, and obtaining the second blood vessel diameter w2; using the first blood vessel diameter w1(t0) at time t0 and the first blood vessel diameter w1(t0+Δt) at time t0+Δt, obtaining the first blood vessel diameter difference of the first body part, Δw1 = w1(t0+Δt) − w1(t0); using the second blood vessel diameter w2(t0) at time t0 and the second blood vessel diameter w2(t0+Δt) at time t0+Δt, obtaining the second blood vessel diameter difference of the second body part, Δw2 = w2(t0+Δt) − w2(t0); using the first blood vessel diameter w1(t0) and the second blood vessel diameter w2(t0) at time t0, obtaining the blood vessel diameter difference between the two body parts of the user, Δw = w1(t0) − w2(t0); performing feature extraction on Δw1, Δw2 and Δw to construct a training data set, and training a machine learning model with the data set to obtain the cognitive load model; where Δt is the measurement time interval.
In the cognitive load analysis method of the present invention, the first near-infrared image of the first body part and the second near-infrared image of the second body part are acquired with a near-infrared superficial subcutaneous tissue imaging device, wherein the first body part is the temple and the second body part is the nose.
In the cognitive load analysis method of the present invention, the output of the cognitive load model is the cognitive load state of the user, and the cognitive load state is: low, medium or high load.
The invention also provides a near-infrared shallow subcutaneous tissue imaging device, comprising: a spectacle frame; a first near-infrared sensor arranged at the position of the spectacle frame corresponding to the temple of the human body, for acquiring a near-infrared image of the user's temple; a second near-infrared sensor arranged at the position of the spectacle frame corresponding to the nose of the human body, for acquiring a near-infrared image of the user's nose; and a data transmission module for transmitting the near-infrared image acquired by the first near-infrared sensor and the near-infrared image acquired by the second near-infrared sensor to a processor of a data processing system; the data transmission module transmits data through a data line.
The present invention also provides a computer-readable storage medium storing computer-executable instructions for performing the cognitive load analysis method as described above.
The invention also proposes a data processing system comprising: the near-infrared superficial subcutaneous tissue imaging device as described above; the computer-readable storage medium as described above; and a processor that retrieves and executes the computer-executable instructions in the computer-readable storage medium to perform cognitive load analysis for the user on the near-infrared images acquired by the near-infrared superficial subcutaneous tissue imaging device.
Drawings
FIG. 1 is a schematic structural diagram of a near-infrared superficial subcutaneous tissue imaging device of the present invention.
FIG. 2 is a flow chart of a cognitive load analysis method of the present invention.
FIG. 3 is a schematic diagram of a data processing system of the present invention.
Detailed Description
In the course of their research, the inventors found that near-infrared imaging can extract information about superficial subcutaneous tissue (including but not limited to blood flow velocity, blood vessel width, blood pressure and blood oxygen); that the local tissue temperature around a blood vessel rises to a certain extent when blood flow accelerates and the vessel dilates; and that the superficial subcutaneous tissue of certain body parts (including but not limited to the forehead, nose tip, nose bridge and temples) presents different states under different cognitive load states and different emotions.
The invention therefore proposes analyzing superficial subcutaneous tissue by near-infrared imaging and, on this basis, analyzing cognitive load.
The invention aims to solve the problem of the invasiveness of the equipment used for emotion acquisition and cognitive load acquisition in the prior art, and provides a near-infrared shallow subcutaneous tissue imaging device and a cognitive load analysis method.
The key points of the invention comprise:
1. an infrared imaging module with a relatively small volume and with expandability, portability and mobility, used as the signal input source;
2. providing analysis of superficial subcutaneous tissue information (including but not limited to blood flow velocity and degree of vasodilation);
3. a brand-new infrared information analysis angle: analyzing the cognitive load of the user;
4. providing conversion of infrared imaging signals to cognitive load.
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, the following provides a further detailed description of an infrared superficial subcutaneous tissue imaging apparatus and a cognitive load analysis method according to the present invention with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 is a schematic structural diagram of the near-infrared superficial subcutaneous tissue imaging device of the present invention. As shown in fig. 1, the near-infrared shallow subcutaneous tissue imaging device of the present invention adopts a smart-glasses structure and comprises a spectacle frame 1, a first near-infrared sensor 2-1, a second near-infrared sensor 2-2 and a data transmission module 3. The first near-infrared sensor 2-1 is arranged at the position of the spectacle frame 1 corresponding to the temple of the human head, so as to acquire a near-infrared image of the subcutaneous tissue at the user's temple; the second near-infrared sensor 2-2 is arranged at the position of the spectacle frame 1 corresponding to the human nose, so as to acquire a near-infrared image of the subcutaneous tissue at the user's nose. By placing near-infrared imaging sensors at different positions on the glasses, near-infrared image information of facial regions of the wearer such as the nose and the temples can be collected. The data transmission module 3 is configured to transmit the near-infrared images collected by the first near-infrared sensor 2-1 and the second near-infrared sensor 2-2 to a processor of the data processing system. In this embodiment, the data transmission module 3 performs wired transmission via a data line; wireless transmission based on a wireless protocol, for example Bluetooth or Wi-Fi, may also be used, and the invention is not limited in this respect.
By processing the near-infrared images collected by the near-infrared shallow subcutaneous tissue imaging device, the invention images the subcutaneous blood vessels. Blood characteristics such as blood vessel width, blood pressure and blood oxygen are obtained by analyzing the blood vessel images. Learning features are then extracted from the blood vessel and blood parameters of the different body parts and input into a machine learning model (such as a random forest) or a neural network model, whose output is the wearer's current cognitive load and emotional state. The model is trained on data collected in advance.
FIG. 2 is a flow chart of a cognitive load analysis method of the present invention. As shown in fig. 2, an embodiment of the cognitive load recognition method based on the blood vessel width is as follows:
step S1, respectively acquiring, with the near-infrared shallow subcutaneous tissue imaging device, near-infrared images of the superficial subcutaneous tissue of the temple region and of the nose region at the current time t0;
step S2, obtaining the blood vessel diameters w1 and w2 of the superficial subcutaneous tissue of the temple region and of the nose region from the near-infrared images: performing Gaussian filtering and noise reduction on the near-infrared image of the temple region; performing homomorphic filtering on the resulting image to filter out illumination interference; converting the processed image into HSV (Hue, Saturation, Value, also called the hexagonal cone model) space and extracting the H-channel parameters; performing image segmentation based on a KNN clustering algorithm to obtain the blood vessel image of the temple region; and processing the near-infrared image of the nose region in the same way;
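As an illustration of how step S2 could be realized, the following sketch assumes OpenCV, NumPy and scikit-learn and a 3-channel frame as delivered by the camera driver; the kernel size, homomorphic-filter parameters and cluster count are assumptions, and K-means is used here as a stand-in for the "KNN clustering" named above.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def homomorphic_filter(channel, sigma=30.0, low_gain=0.5, high_gain=1.5):
    """Suppress slowly varying illumination: log -> Gaussian high-pass in the frequency domain -> exp."""
    log_img = np.log(channel.astype(np.float32) + 1.0)
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = channel.shape
    y, x = np.ogrid[:rows, :cols]
    d2 = (y - rows / 2.0) ** 2 + (x - cols / 2.0) ** 2
    high_pass = 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian high-pass, cutoff assumed
    out = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * (low_gain + high_gain * high_pass))))
    return cv2.normalize(np.exp(out), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def segment_vessels(nir_bgr):
    """Step S2 sketch: denoise, illumination-correct, take the HSV hue channel, cluster into vessel/background."""
    smoothed = cv2.GaussianBlur(nir_bgr, (5, 5), 0)                          # Gaussian noise reduction
    corrected = cv2.merge([homomorphic_filter(c) for c in cv2.split(smoothed)])
    hue = cv2.cvtColor(corrected, cv2.COLOR_BGR2HSV)[:, :, 0]                # H channel of the hexagonal-cone model
    # Two-cluster segmentation of the hue values; K-means stands in for the "KNN clustering" named in the text.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        hue.reshape(-1, 1).astype(np.float32)).reshape(hue.shape)
    vessel_label = np.argmin(np.bincount(labels.ravel()))                    # smaller cluster assumed to be vessel pixels
    return (labels == vessel_label).astype(np.uint8)                         # binary vessel mask
```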
step S3, detecting the blood vessel boundary based on directional local contrast; from the spatial coordinates of the left and right boundary pixels, the blood vessel diameter of the temple region, w1(t0), is calculated directly, and the blood vessel diameter of the nose region, w2(t0), is obtained in the same way; in the same manner, the blood vessel diameter of the temple region w1(t0+Δt) and the blood vessel diameter of the nose region w2(t0+Δt) at time t0+Δt are obtained, where Δt is the time interval between adjacent image frames;
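A minimal sketch of the diameter estimate in step S3, assuming the binary vessel mask from the previous sketch; the directional-local-contrast boundary detection described above is simplified here to a per-scanline left/right boundary search, and the result is in pixels (conversion to physical units is assumed to be handled elsewhere).

```python
import numpy as np

def vessel_diameter_pixels(vessel_mask):
    """Estimate the blood vessel diameter (in pixels) from a binary vessel mask."""
    widths = []
    for row in vessel_mask:
        cols = np.flatnonzero(row)                   # vessel pixels in this horizontal scanline
        if cols.size >= 2:
            widths.append(cols[-1] - cols[0] + 1)    # right boundary minus left boundary
    return float(np.median(widths)) if widths else 0.0  # robust summary over all scanlines

# Usage sketch (illustrative names): diameters from two frames dt apart give a step-S4 feature, e.g.
# w1_t0 = vessel_diameter_pixels(temple_mask_t0)
# w1_t1 = vessel_diameter_pixels(temple_mask_t0_plus_dt)
# dw1   = w1_t1 - w1_t0
```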
step S4, extracting machine learning features: performing feature extraction on w1(t0), w1(t0+Δt), w2(t0) and w2(t0+Δt) to obtain the blood vessel diameter difference of the temple region Δw1 = w1(t0+Δt) − w1(t0), the blood vessel diameter difference of the nose region Δw2 = w2(t0+Δt) − w2(t0), and the diameter difference between the two regions at time t0, Δw = w1(t0) − w2(t0);
step S5, collecting labeled subject data Δw1, Δw2 and Δw through user experiments and training a support-vector-machine-based machine learning model; the model input is the machine learning features of step S4, and the model output is the current cognitive load (e.g. low load, medium load, high load);
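A possible realization of the training in step S5 is sketched below with scikit-learn; the feature layout, label coding, RBF kernel and scaling step are assumptions rather than details fixed by the text.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_cognitive_load_model(X_train, y_train):
    """Fit the step-S5 support vector machine on the step-S4 features.

    X_train: (n_samples, 3) array, columns [dw1, dw2, dw] collected in the user experiments.
    y_train: (n_samples,) integer labels, 0 / 1 / 2 = low / medium / high cognitive load.
    """
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))  # kernel and C are assumed
    model.fit(np.asarray(X_train, dtype=float), np.asarray(y_train))
    return model
```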
step S6, identifying the current cognitive load of the user in real time with the model. For emotion recognition, the output of the machine learning model in step S5 is changed to the emotion category (e.g. calm, happy, sad), and other related blood parameters such as blood pressure and blood oxygen may be added to improve recognition accuracy.
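Continuing the previous sketch, real-time recognition in step S6 could then be reduced to building the step-S4 features from the latest pair of diameter measurements and querying the trained model (function and variable names are illustrative):

```python
import numpy as np

LOAD_LABELS = {0: "low load", 1: "medium load", 2: "high load"}

def classify_current_load(model, w1_t0, w1_t0_dt, w2_t0, w2_t0_dt):
    """Build the step-S4 features for the latest frame pair and predict the load class.
    `model` is the classifier returned by train_cognitive_load_model() above."""
    dw1 = w1_t0_dt - w1_t0        # temple-region diameter change over dt
    dw2 = w2_t0_dt - w2_t0        # nose-region diameter change over dt
    dw = w1_t0 - w2_t0            # temple vs nose difference at time t0
    return LOAD_LABELS[int(model.predict(np.array([[dw1, dw2, dw]]))[0])]
```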
The main technical innovation of the invention is that, based on a near-infrared superficial subcutaneous tissue imaging device integrated into glasses, blood vessel and blood parameters of several head regions are compared to obtain the current cognitive load or emotion. This is possible because the distribution of blood flow over the head changes when the user is under different cognitive loads or emotions; for example, under high cognitive load more blood flows from the nose toward the forehead.
FIG. 3 is a schematic diagram of a data processing system of the present invention. As shown in fig. 3, an embodiment of the present invention further provides a data processing system, which includes the aforementioned near-infrared superficial subcutaneous tissue imaging device, a computer-readable storage medium, and a processor. The computer-readable storage medium stores computer-executable instructions that, when executed by the processor of the data processing system, perform cognitive load analysis on the user by processing the near-infrared images acquired by the near-infrared superficial subcutaneous tissue imaging device. Those skilled in the art will understand that all or part of the steps of the above method may be implemented by instructing relevant hardware (e.g., a processor, FPGA or ASIC) through a program, and that the program may be stored in a readable storage medium such as a read-only memory, a magnetic disk or an optical disk. All or some of the steps of the above embodiments may also be implemented using one or more integrated circuits. Accordingly, the modules in the above embodiments may be implemented in hardware, for example by an integrated circuit, or in software, for example by a processor executing programs/instructions stored in a memory. Embodiments of the invention are not limited to any specific combination of hardware and software.
Compared with the prior art, the invention provides a novel non-invasive technique for analyzing psychological signals such as emotion and cognitive load. Building the recognition system from extensible modules enhances its mobility and practicability, and the system realizes several key functions such as blood vessel imaging, blood flow analysis, and emotion and concentration recognition.
The above embodiments are only for illustrating the invention and are not to be construed as limiting the invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention, therefore, all equivalent technical solutions also fall into the scope of the invention, and the scope of the invention is defined by the claims.

Claims (7)

1. A cognitive load analysis method, comprising:
a model construction step of acquiring body parameters of a user and the cognitive load state corresponding to the user, and training a machine learning model according to the body parameters and the cognitive load state to obtain a cognitive load model; wherein the body parameter is the blood vessel diameter of the subcutaneous tissue at a body part of the user, including a first blood vessel diameter w1 at a first body part of the user and a second blood vessel diameter w2 at a second body part of the user; using the first blood vessel diameter w1(t0) at time t0 and the first blood vessel diameter w1(t0+Δt) at time t0+Δt, a first blood vessel diameter difference of the first body part, Δw1 = w1(t0+Δt) − w1(t0), is obtained; using the second blood vessel diameter w2(t0) at time t0 and the second blood vessel diameter w2(t0+Δt) at time t0+Δt, a second blood vessel diameter difference of the second body part, Δw2 = w2(t0+Δt) − w2(t0), is obtained; using the first blood vessel diameter w1(t0) and the second blood vessel diameter w2(t0) at time t0, the blood vessel diameter difference between the two body parts of the user, Δw = w1(t0) − w2(t0), is obtained; feature extraction is performed on Δw1, Δw2 and Δw to construct a training data set, and the machine learning model is trained with the data set and the cognitive load state to obtain the cognitive load model, where Δt is the measurement time interval;

and a model analysis step of acquiring the current cognitive load state of the user through the cognitive load model according to the current body parameters of the user.
2. The cognitive load analysis method of claim 1, wherein the model building step further comprises:
obtaining a first near-infrared image of the first body part, performing Gaussian filtering, image noise reduction and homomorphic filtering on the first near-infrared image, extracting tone channel data of the first near-infrared image through a hexagonal cone model, performing image segmentation to obtain a first blood vessel image, detecting a blood vessel boundary according to directional local contrast, and obtaining a first blood vessel diameter w1
Acquiring a second near-infrared image of the second body part, performing Gaussian filtering, image noise reduction and homomorphic filtering on the second near-infrared image, extracting tone channel data of the second near-infrared image through a hexagonal cone model, performing image segmentation to obtain a second blood vessel image, detecting a blood vessel boundary according to directional local contrast, and obtaining a second blood vessel diameter w2
3. The cognitive load analysis method of claim 1, wherein the first near-infrared image of the first body part and the second near-infrared image of the second body part are measured with a near-infrared shallow subcutaneous tissue imaging device, wherein the first body part is the temple and the second body part is the nose.
4. The cognitive load analysis method of claim 1, wherein the output of the cognitive load model is the cognitive load state of the user, the cognitive load state being: low, medium or high load.
5. A computer-readable storage medium storing computer-executable instructions for performing the cognitive load analysis method of any one of claims 1-4.
6. A data processing system comprising:
shallow layer subcutaneous tissue imaging device of near-infrared, this shallow layer subcutaneous tissue imaging device of near-infrared specifically includes:
a frame;
the first near-infrared sensor is arranged at a position, corresponding to the temple of the human body, of the spectacle frame and is used for acquiring a near-infrared image of the temple of the user;
the second near-infrared sensor is arranged at the position, corresponding to the nose of the human body, of the spectacle frame and used for acquiring a near-infrared image of the nose of the user;
the data transmission module is used for transmitting the near-infrared image acquired by the first near-infrared sensor and the near-infrared image acquired by the second near-infrared sensor to a processor of a data processing system;
the data processing system further comprising a processor and the computer-readable storage medium of claim 5; the processor retrieves and executes computer-executable instructions in the computer-readable storage medium to perform a cognitive load analysis for a user on a near-infrared image acquired by the near-infrared superficial subcutaneous tissue imaging device.
7. The data processing system of claim 6, wherein the data transmission module performs data transmission via a data line.
CN202010459503.0A 2020-05-27 2020-05-27 Near-infrared shallow subcutaneous tissue imaging device and cognitive load analysis method Active CN111657887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010459503.0A CN111657887B (en) 2020-05-27 2020-05-27 Near-infrared shallow subcutaneous tissue imaging device and cognitive load analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010459503.0A CN111657887B (en) 2020-05-27 2020-05-27 Near-infrared shallow subcutaneous tissue imaging device and cognitive load analysis method

Publications (2)

Publication Number Publication Date
CN111657887A CN111657887A (en) 2020-09-15
CN111657887B true CN111657887B (en) 2021-09-03

Family

ID=72384575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010459503.0A Active CN111657887B (en) 2020-05-27 2020-05-27 Near-infrared shallow subcutaneous tissue imaging device and cognitive load analysis method

Country Status (1)

Country Link
CN (1) CN111657887B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105942979A (en) * 2016-05-23 2016-09-21 清华大学玉泉医院 Near-infrared brain-imaging instrument based on cognition task test
CN107205701A (en) * 2014-11-14 2017-09-26 张文瀚 Device and method to detect blood oxygen concentration and/or cephalophyma
CN110200641A (en) * 2019-06-04 2019-09-06 清华大学 A kind of method and device based on touch screen measurement cognitive load and psychological pressure

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102579058A (en) * 2011-01-17 2012-07-18 吴明达 Method for analyzing love state by utilizing cerebral blood flow
US20160007921A1 (en) * 2014-07-10 2016-01-14 Vivonics, Inc. Head-mounted neurological assessment system
JP7080657B2 (en) * 2018-02-07 2022-06-06 株式会社デンソー Emotion identification device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107205701A (en) * 2014-11-14 2017-09-26 张文瀚 Device and method to detect blood oxygen concentration and/or cephalophyma
CN105942979A (en) * 2016-05-23 2016-09-21 清华大学玉泉医院 Near-infrared brain-imaging instrument based on cognition task test
CN110200641A (en) * 2019-06-04 2019-09-06 清华大学 A kind of method and device based on touch screen measurement cognitive load and psychological pressure

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Age and Vascular Burden Determinants of Cortical Hemodynamics Underlying Verbal Fluency;Sebastian Heinzel等;《RESEARCH ARTICLE》;20150922;第10卷(第9期);第1-14页 *
Linear dependency of full scattering profile isobaric point on tissue diameter;Hamootal Duadi等;《Journal of Biomedical Optics》;20140212;第19卷(第2期);第1-5页 *

Also Published As

Publication number Publication date
CN111657887A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
Jung et al. Utilizing deep learning towards multi-modal bio-sensing and vision-based affective computing
CN107358180B (en) Pain assessment method for facial expression
US10154818B2 (en) Biometric authentication method and apparatus
Pollreisz et al. A simple algorithm for emotion recognition, using physiological signals of a smart watch
CN107122709B (en) Living body detection method and device
CN109993068B (en) Non-contact human emotion recognition method based on heart rate and facial features
KR101963694B1 (en) Wearable device for gesture recognition and control and gesture recognition control method using the same
KR101738278B1 (en) Emotion recognition method based on image
JP2017093760A (en) Device and method for measuring periodic variation interlocking with heart beat
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
CN108742660A (en) A kind of Emotion identification method based on wearable device
KR101952804B1 (en) Emotion recognition interface apparatus
Przybyło A deep learning approach for remote heart rate estimation
CN113764099A (en) Psychological state analysis method, device, equipment and medium based on artificial intelligence
CN111657887B (en) Near-infrared shallow subcutaneous tissue imaging device and cognitive load analysis method
CN110598607B (en) Non-contact and contact cooperative real-time emotion intelligent monitoring system
Karmuse et al. A robust rppg approach for continuous heart rate measurement based on face
KR102608633B1 (en) Electronic device and control method thereof
CN113705339B (en) Cross-user human behavior recognition method based on antagonism domain adaptation strategy
JP6201520B2 (en) Gaze analysis system and method using physiological indices
CN106361327B (en) Waking state detection method and system in sleep state analysis
CN106344008B (en) Waking state detection method and system in sleep state analysis
CN115553779A (en) Emotion recognition method and device, electronic equipment and storage medium
CN115581435A (en) Sleep monitoring method and device based on multiple sensors
KR102333120B1 (en) Self Scalp Diagnostic System and Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant