CN117462146A - Method and device for detecting abnormal discharge of human brain, storage medium and electronic equipment
- Publication number: CN117462146A (application CN202311630555.XA)
- Authority: CN (China)
- Prior art keywords: data, biomedical, face, abnormal discharge, monitoring data
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- A61B5/369—Electroencephalography [EEG]
- A61B5/0059—Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/372—Analysis of electroencephalograms
- A61B5/7203—Signal processing specially adapted for physiological signals, for noise prevention, reduction or removal
- A61B5/7225—Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
- A61B5/725—Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
- A61B5/7253—Details of waveform analysis characterised by using transforms
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
Abstract
The embodiments of the present disclosure relate to a method and device for detecting abnormal discharge of the human brain, a storage medium and an electronic device, and relate to the technical fields of artificial intelligence and multimodal technology. The method comprises the following steps: acquiring biomedical feature data of a tested object; acquiring video monitoring data of the tested object; detecting action information of the tested object according to the video monitoring data; extracting, from the video monitoring data, an image sequence of interest used to characterize the actions of the tested object; and processing the biomedical feature data, the action information and the image sequence of interest with a pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object. Automatic detection of abnormal discharge of the human brain can thereby be realized, and the detection accuracy improved.
Description
Technical Field
Embodiments of the present disclosure relate to the field of artificial intelligence and multimodal technologies, and more particularly, to a human brain abnormal discharge detection method, a human brain abnormal discharge detection apparatus, a computer-readable storage medium, and an electronic device.
Background
This section is intended to provide a background or context for the embodiments of the disclosure recited in the claims, which description herein is not admitted to be prior art by inclusion in this section.
Abnormal discharge of the human brain is caused by abnormal activity of brain cells and can lead to nervous system diseases such as epilepsy. Detecting abnormal brain discharge by means such as electroencephalography can help doctors identify a patient's condition. For example, the detection of epileptiform discharges in an electroencephalogram is one of the important criteria for the diagnosis of epilepsy.
Detecting abnormal discharge of the human brain involves examination data such as electroencephalograms, and reviewing and analyzing these data is labor-intensive and consumes considerable manpower and time. The industry therefore increasingly demands automatic detection of abnormal discharges of the human brain.
Disclosure of Invention
However, the accuracy of current detection of abnormal discharge of the human brain still needs to be improved.
In the related art, artificial intelligence technology is used to detect abnormal discharge of the human brain, for example by training an epileptiform discharge recognition network based on deep learning to detect epileptiform discharges of the human brain. The related literature reports an accuracy of only about 70%, and professional doctors and technicians are still required to post-process the results read by the artificial intelligence.
Therefore, an improved method for detecting abnormal discharge of human brain is highly needed, which can realize automatic detection of abnormal discharge of human brain and improve the detection accuracy.
In this context, embodiments of the present disclosure desirably provide a human brain abnormal discharge detection method, a human brain abnormal discharge detection apparatus, a computer-readable storage medium, and an electronic device.
According to a first aspect of the present disclosure, there is provided a method for detecting abnormal discharge of a human brain, comprising: acquiring biomedical feature data of a tested object; acquiring video monitoring data of the tested object; detecting action information of the tested object according to the video monitoring data; extracting, from the video monitoring data, an image sequence of interest used to characterize the actions of the tested object; and processing the biomedical feature data, the action information and the image sequence of interest with a pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object.
In one embodiment, the acquiring biomedical feature data of the subject includes: acquiring biomedical monitoring data of the tested object acquired by a biomedical monitoring device; preprocessing the biomedical monitoring data; and obtaining biomedical characteristic data according to the preprocessed biomedical monitoring data.
In one embodiment, the biomedical monitoring data comprises: multichannel brain electrical monitoring data acquired from a plurality of sites of the scalp of the subject; the obtaining the biomedical characteristic data according to the preprocessed biomedical monitoring data comprises the following steps: calculating the potential difference between each channel and the reference electrode according to the preprocessed multichannel electroencephalogram monitoring data to obtain electroencephalogram initial characteristic data; and extracting biomedical feature data according to the initial feature data of the electroencephalogram signals.
In one embodiment, the biomedical signature data comprises an electroencephalogram waveform signature; the extracting the biomedical feature data according to the electroencephalogram signal initial feature data comprises the following steps: and processing the initial characteristic data of the electroencephalogram signals by utilizing a pre-trained waveform characteristic extraction model so as to extract the waveform characteristics of the electroencephalogram signals.
In one embodiment, the biomedical signature data comprises an electroencephalogram signal time-frequency signature; the extracting the biomedical feature data according to the electroencephalogram signal initial feature data comprises the following steps: performing time-frequency conversion on the electroencephalogram signal initial characteristic data to obtain electroencephalogram signal time-frequency data corresponding to the electroencephalogram signal initial characteristic data; and extracting the time-frequency characteristics of the electroencephalogram signals according to the time-frequency data of the electroencephalogram signals.
In one embodiment, the preprocessing of the biomedical monitoring data comprises at least one of: resampling, filtering, removing noise data and carrying out numerical value standardization processing.
In one embodiment, the video surveillance data comprises face video surveillance data; the detecting the motion information of the tested object according to the video monitoring data comprises the following steps: detecting face key point data from the face video monitoring data; and obtaining the face action information of the tested object according to the face key point data.
In one embodiment, the sequence of images of interest comprises a sequence of face images; the extracting an image sequence of interest for characterizing the motion of the tested object from the video monitoring data comprises the following steps: and cutting out a face area image from a plurality of frames of the face video monitoring data according to the face key point data to obtain a face image sequence.
In one embodiment, the obtaining the face motion information of the tested object according to the face key point data includes: and determining the face key points with motion and the displacement information thereof according to the face key point data to obtain the face action information of the tested object.
In one embodiment, the video monitoring data comprises body video monitoring data; the detecting the motion information of the tested object according to the video monitoring data comprises the following steps: detecting body key point data from the body video monitoring data; and obtaining the body action information of the tested object according to the body key point data.
In one embodiment, the sequence of images of interest comprises a sequence of body images; the extracting an image sequence of interest for characterizing the motion of the tested object from the video monitoring data comprises the following steps: and cutting out a body region image from multiple frames of the body video monitoring data according to the body key point data to obtain a body image sequence.
In one embodiment, the obtaining the body motion information of the tested object according to the body key point data includes: and determining body key points generating movement and displacement information thereof according to the body key point data to obtain body action information of the tested object.
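As a purely illustrative sketch of the face/body region cropping described in the embodiments above (not the disclosed implementation), the following Python function crops a keypoint-bounded region from each video frame; the function name, the margin parameter and the array layout are assumptions introduced here for illustration.

```python
import numpy as np

def crop_region_sequence(frames, keypoints, margin=20):
    """Crop a region image from each frame using detected keypoints.

    frames: sequence of frames, each an array of shape (H, W, 3).
    keypoints: per-frame array of (x, y) keypoint coordinates.
    Returns the cropped image sequence (the image sequence of interest).
    """
    crops = []
    for frame, pts in zip(frames, keypoints):
        # Bounding box of the detected keypoints, expanded by a small margin.
        x0 = max(int(pts[:, 0].min()) - margin, 0)
        y0 = max(int(pts[:, 1].min()) - margin, 0)
        x1 = min(int(pts[:, 0].max()) + margin, frame.shape[1])
        y1 = min(int(pts[:, 1].max()) + margin, frame.shape[0])
        crops.append(frame[y0:y1, x0:x1])
    return crops
```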
In one embodiment, the abnormal discharge detection model comprises a feature processing layer, an attention layer and a classification layer; the processing the biomedical feature data, the action information and the interested image sequence by using a pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object comprises the following steps: inputting the biomedical feature data, the motion information, the image sequence of interest into the abnormal discharge detection model; extracting action feature data from the action information by utilizing the feature processing layer, extracting image feature data from the interested image sequence, and fusing the biomedical feature data, the action feature data and the image feature data to obtain fused features; characterizing the fusion features by using the attention layer to obtain embedded features; and mapping the embedded features to an output space by using the classification layer to obtain an abnormal discharge detection result of the tested object.
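Purely as an illustrative sketch of such a feature-processing/attention/classification pipeline (and not the trained abnormal discharge detection model of this disclosure), the structure could be organized as follows; the framework (PyTorch), the layer choices and all dimensions are assumptions introduced for illustration.

```python
import torch
import torch.nn as nn

class AbnormalDischargeDetector(nn.Module):
    """Illustrative fusion model: feature processing, attention, classification."""

    def __init__(self, bio_dim=128, action_dim=64, image_dim=256, d_model=256, n_classes=2):
        super().__init__()
        # Feature processing layer: extract action and image features, then fuse
        # them with the biomedical features.
        self.action_net = nn.Sequential(nn.Linear(action_dim, d_model), nn.ReLU())
        self.image_net = nn.Sequential(nn.Linear(image_dim, d_model), nn.ReLU())
        self.bio_net = nn.Linear(bio_dim, d_model)
        self.fuse = nn.Linear(3 * d_model, d_model)
        # Attention layer: characterize the fused features as embedded features.
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Classification layer: map embedded features to the output space.
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, bio_feat, action_info, image_feat):
        # Each input: (batch, seq_len, dim), time-aligned sequences.
        fused = torch.cat([self.bio_net(bio_feat),
                           self.action_net(action_info),
                           self.image_net(image_feat)], dim=-1)
        fused = self.fuse(fused)                        # fusion features
        embedded, _ = self.attn(fused, fused, fused)    # embedded features
        logits = self.classifier(embedded.mean(dim=1))  # abnormal discharge detection result
        return logits
```

Here the three feature streams are projected to a common dimension before concatenation, which is one simple way to realize the fusion described above.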
In one embodiment, before the processing of the biomedical feature data, the motion information, the sequence of images of interest with a pre-trained abnormal discharge detection model, the method further comprises: at least one of the motion information and the sequence of images of interest is time aligned with the biomedical feature data.
In one embodiment, said time-aligning at least one of said motion information and said sequence of images of interest with said biomedical feature data comprises: detecting biomedical feature data of one or more suspected abnormal discharges and corresponding one or more first time points from the biomedical feature data; detecting one or more pieces of action data of suspected abnormal discharge from the action information and one or more corresponding second time points; matching the biomedical characteristic data of the suspected abnormal discharge with the action data of the suspected abnormal discharge, and determining the corresponding relation between the first time point and the second time point according to a matching result; and determining a time calibration parameter based on the corresponding relation between the first time point and the second time point, and performing time alignment on the action information and the biomedical feature data by utilizing the time calibration parameter.
In one embodiment, the matching the biomedical characteristic data of the suspected abnormal discharge with the action data of the suspected abnormal discharge includes: determining a first relative value between the biomedical characteristic data of the suspected abnormal discharge and other biomedical characteristic data; determining a second relative value between the action data of the suspected abnormal discharge and other action data in the action information; and comparing the first relative value with the second relative value to obtain a matching result between the biomedical characteristic data of the suspected abnormal discharge and the action data of the suspected abnormal discharge.
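As one crude, hypothetical realization of the matching and time-calibration steps described above (the function names and the nearest-relative-value matching rule are assumptions, not the disclosed algorithm), a sketch could look like this:

```python
import numpy as np

def estimate_time_offset(bio_events, action_events):
    """Match suspected-discharge events from two modalities and estimate a
    time calibration offset.

    bio_events / action_events: lists of (time_point, relative_value) pairs,
    where relative_value is the event strength relative to the surrounding data.
    Returns the mean offset (seconds) between matched first and second time points.
    """
    offsets = []
    for t_bio, r_bio in bio_events:
        # Match each biomedical event to the action event whose relative value
        # is closest (the comparison of first and second relative values).
        t_act, _ = min(action_events, key=lambda e: abs(e[1] - r_bio))
        offsets.append(t_bio - t_act)
    return float(np.mean(offsets)) if offsets else 0.0
```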
According to a second aspect of the present disclosure, there is provided a human brain abnormal discharge detection apparatus comprising: a first acquisition module configured to acquire biomedical feature data of a subject; the second acquisition module is configured to acquire video monitoring data of the tested object; the motion information detection module is configured to detect motion information of the tested object according to the video monitoring data; an image sequence extraction module configured to extract an image sequence of interest from the video surveillance data for characterizing an action of the subject; and the model processing module is configured to process the biomedical feature data, the action information and the interested image sequence by utilizing a pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object.
In one embodiment, the acquiring biomedical feature data of the subject includes: acquiring biomedical monitoring data of the tested object acquired by a biomedical monitoring device; preprocessing the biomedical monitoring data; and obtaining biomedical characteristic data according to the preprocessed biomedical monitoring data.
In one embodiment, the biomedical monitoring data comprises: multichannel brain electrical monitoring data acquired from a plurality of sites of the scalp of the subject; the obtaining the biomedical characteristic data according to the preprocessed biomedical monitoring data comprises the following steps: calculating the potential difference between each channel and the reference electrode according to the preprocessed multichannel electroencephalogram monitoring data to obtain electroencephalogram initial characteristic data; and extracting biomedical feature data according to the initial feature data of the electroencephalogram signals.
In one embodiment, the biomedical signature data comprises an electroencephalogram waveform signature; the extracting the biomedical feature data according to the electroencephalogram signal initial feature data comprises the following steps: and processing the initial characteristic data of the electroencephalogram signals by utilizing a pre-trained waveform characteristic extraction model so as to extract the waveform characteristics of the electroencephalogram signals.
In one embodiment, the biomedical signature data comprises an electroencephalogram signal time-frequency signature; the extracting the biomedical feature data according to the electroencephalogram signal initial feature data comprises the following steps: performing time-frequency conversion on the electroencephalogram signal initial characteristic data to obtain electroencephalogram signal time-frequency data corresponding to the electroencephalogram signal initial characteristic data; and extracting the time-frequency characteristics of the electroencephalogram signals according to the time-frequency data of the electroencephalogram signals.
In one embodiment, the preprocessing of the biomedical monitoring data comprises at least one of: resampling, filtering, removing noise data and carrying out numerical value standardization processing.
In one embodiment, the video surveillance data comprises face video surveillance data; the detecting the motion information of the tested object according to the video monitoring data comprises the following steps: detecting face key point data from the face video monitoring data; and obtaining the face action information of the tested object according to the face key point data.
In one embodiment, the sequence of images of interest comprises a sequence of face images; the extracting an image sequence of interest for characterizing the motion of the tested object from the video monitoring data comprises the following steps: and cutting out a face area image from a plurality of frames of the face video monitoring data according to the face key point data to obtain a face image sequence.
In one embodiment, the obtaining the face motion information of the tested object according to the face key point data includes: and determining the face key points with motion and the displacement information thereof according to the face key point data to obtain the face action information of the tested object.
In one embodiment, the video monitoring data comprises body video monitoring data; the detecting the motion information of the tested object according to the video monitoring data comprises the following steps: detecting body key point data from the body video monitoring data; and obtaining the body action information of the tested object according to the body key point data.
In one embodiment, the sequence of images of interest comprises a sequence of body images; the extracting an image sequence of interest for characterizing the motion of the tested object from the video monitoring data comprises the following steps: and cutting out a body region image from multiple frames of the body video monitoring data according to the body key point data to obtain a body image sequence.
In one embodiment, the obtaining the body motion information of the tested object according to the body key point data includes: and determining body key points generating movement and displacement information thereof according to the body key point data to obtain body action information of the tested object.
In one embodiment, the abnormal discharge detection model comprises a feature processing layer, an attention layer and a classification layer; the processing the biomedical feature data, the action information and the interested image sequence by using a pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object comprises the following steps: inputting the biomedical feature data, the motion information, the image sequence of interest into the abnormal discharge detection model; extracting action feature data from the action information by utilizing the feature processing layer, extracting image feature data from the interested image sequence, and fusing the biomedical feature data, the action feature data and the image feature data to obtain fused features; characterizing the fusion features by using the attention layer to obtain embedded features; and mapping the embedded features to an output space by using the classification layer to obtain an abnormal discharge detection result of the tested object.
In one embodiment, the model processing module is further configured to: at least one of the motion information and the sequence of images of interest is time aligned with the biomedical feature data prior to the processing of the biomedical feature data, the motion information, the sequence of images of interest with a pre-trained abnormal discharge detection model.
In one embodiment, said time-aligning at least one of said motion information and said sequence of images of interest with said biomedical feature data comprises: detecting biomedical feature data of one or more suspected abnormal discharges and corresponding one or more first time points from the biomedical feature data; detecting one or more pieces of action data of suspected abnormal discharge from the action information and one or more corresponding second time points; matching the biomedical characteristic data of the suspected abnormal discharge with the action data of the suspected abnormal discharge, and determining the corresponding relation between the first time point and the second time point according to a matching result; and determining a time calibration parameter based on the corresponding relation between the first time point and the second time point, and performing time alignment on the action information and the biomedical feature data by utilizing the time calibration parameter.
In one embodiment, the matching the biomedical characteristic data of the suspected abnormal discharge with the action data of the suspected abnormal discharge includes: determining a first relative value between the biomedical characteristic data of the suspected abnormal discharge and other biomedical characteristic data; determining a second relative value between the action data of the suspected abnormal discharge and other action data in the action information; and comparing the first relative value with the second relative value to obtain a matching result between the biomedical characteristic data of the suspected abnormal discharge and the action data of the suspected abnormal discharge.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the human brain abnormal discharge detection method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of detecting abnormal discharges of the human brain of the first aspect described above and possible implementations thereof via execution of the executable instructions.
In the solution of the present disclosure, on the one hand, biomedical feature data and video monitoring data are acquired, and the video monitoring data are processed to obtain the action information and the image sequence of interest of the tested object. By combining the biomedical feature data, the action information and the image sequence of interest for human brain abnormal discharge detection, the accuracy of the detection result can be improved, and different types of data can compensate for each other's missing information; for example, the action information or the image sequence of interest can compensate for information that cannot be reflected in the biomedical feature data. This ensures the stability of the detection result, reduces misjudgments, and requires no manual interpretation or other human processing in the whole process, thereby realizing automatic detection of abnormal discharge of the human brain. On the other hand, by processing the video monitoring data, two different types of data, namely the action information and the image sequence of interest, are obtained, so that the video monitoring data are fully mined and the richness and completeness of the data are improved.
Drawings
Fig. 1A shows a schematic diagram of a system architecture in the present exemplary embodiment.
Fig. 1B shows a schematic diagram of another system architecture in the present exemplary embodiment.
Fig. 2 shows a flowchart of a method for detecting abnormal discharge of a human brain in the present exemplary embodiment.
Fig. 3 shows a flowchart of acquiring biomedical feature data in the present exemplary embodiment.
Fig. 4 shows a schematic diagram of a multichannel brain electrical signal in the present exemplary embodiment.
Fig. 5 shows a schematic diagram of extracting characteristics from multi-channel electroencephalogram monitoring data in the present exemplary embodiment.
Fig. 6 shows schematic diagrams of face keypoints and body keypoints in the present exemplary embodiment.
Fig. 7 shows a schematic diagram of processing video monitoring data in the present exemplary embodiment.
Fig. 8 shows a flowchart of obtaining an abnormal discharge detection result using an abnormal discharge detection model in the present exemplary embodiment.
Fig. 9 shows a schematic diagram of acquiring a training data set in the present exemplary embodiment.
Fig. 10 shows a schematic diagram of training an abnormal discharge detection model in the present exemplary embodiment.
Fig. 11 shows a schematic diagram of detection of abnormal discharge of the human brain in the present exemplary embodiment.
Fig. 12 is a schematic diagram showing a configuration of a human brain abnormal discharge detection device in the present exemplary embodiment.
Fig. 13 shows a schematic structural diagram of an electronic device in the present exemplary embodiment.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present disclosure will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present disclosure and are not intended to limit the scope of the present disclosure in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Embodiments of the present disclosure may be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the following forms: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
The principles and spirit of the present disclosure are described in detail below with reference to several representative embodiments thereof.
Summary of The Invention
The inventors have found that the accuracy of current detection of abnormal discharge of the human brain still needs to be improved. Specifically, in the related art, artificial intelligence technology is used to detect abnormal discharge of the human brain, for example by training an epileptiform discharge recognition network based on deep learning to detect epileptiform discharges of the human brain. The related literature reports an accuracy of only about 70%, and professional doctors and technicians are still required to post-process the results read by the artificial intelligence.
In view of the above, the present disclosure provides a human brain abnormal discharge detection method, a human brain abnormal discharge detection apparatus, a computer-readable storage medium, and an electronic device. On the one hand, biomedical feature data and video monitoring data are acquired, and the video monitoring data are processed to obtain the action information and the image sequence of interest of the tested object. By combining the biomedical feature data, the action information and the image sequence of interest for human brain abnormal discharge detection, the accuracy of the detection result can be improved, and different types of data can compensate for each other's missing information; for example, the action information or the image sequence of interest can compensate for information that cannot be reflected in the biomedical feature data. This ensures the stability of the detection result, reduces misjudgments, and requires no manual interpretation or other human processing in the whole process, thereby realizing automatic detection of abnormal discharge of the human brain. On the other hand, by processing the video monitoring data, two different types of data, namely the action information and the image sequence of interest, are obtained, so that the video monitoring data are fully mined and the richness and completeness of the data are improved.
Having described the basic principles of the present disclosure, various non-limiting embodiments of the present disclosure are specifically described below.
Application scene overview
It should be noted that the following application scenarios are only shown for facilitating understanding of the spirit and principles of the present disclosure, and embodiments of the present disclosure are not limited in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
Embodiments of the present disclosure may be applied to relevant scenarios of detection and assisted diagnosis of suspected patients. For example, a user is suspected of suffering from an epileptic condition, and during hospital treatment, a doctor needs to know the electroencephalogram state of the user, particularly whether the brain has abnormal discharge. Detection can be performed by the human brain abnormal discharge detection method of the present disclosure. The following describes an application scenario in detail in connection with a system architecture.
Fig. 1A shows a schematic diagram of a system architecture for detecting abnormal discharges in the brain of a person. The system architecture comprises a subject 101, a biomedical monitoring device 102, a video acquisition device 103, and a processing device 104. When it is necessary to perform abnormal discharge detection of the human brain on the subject 101, the biomedical monitoring device 102 and the video acquisition device 103 may be provided for the subject 101 to acquire corresponding data, and data processing may be performed by the processing device 104.
The biomedical monitoring device 102 may include detection electrodes 1021, 1022, 1023 and a master device 1024. The detection electrodes 1021, 1022, 1023 may contact different parts of the subject 101 and collect electrical signals, and the master device 1024 aggregates and processes the electrical signals to obtain biomedical monitoring data or biomedical feature data. By way of example, the biomedical monitoring device 102 may be an electroencephalograph, and the detection electrodes 1021, 1022, 1023 may be fixed to respective locations on the head of the subject 101 to acquire multi-lead electroencephalogram signals. Of course, the biomedical monitoring device 102 shown in fig. 1A is merely exemplary; it may also include a helmet, a hat, bedding, packaged portable detection electrodes, and various other components, which are not limited in this disclosure.
The video capturing device 103 is configured to capture video monitoring data of the object 101, for example, a monitoring camera, a mobile phone with a shooting function, and the like, and may be disposed at a position where the camera faces the object 101, and continuously monitor and shoot to obtain the video monitoring data. In one embodiment, the video capture device 103 may include a two-way camera to capture the face and body of the subject, respectively, to capture face video surveillance data and body video surveillance data, respectively.
The biomedical monitoring device 102 and the video acquisition device 103 may be communicatively connected to the processing device 104, such as by a wired or wireless communication link, such that the biomedical monitoring device 102, the video acquisition device 103 transmits the acquired data to the processing device 104. The abnormal discharge detection method of the human brain in the present exemplary embodiment may be executed by the processing device 104, and the acquired multi-modal data may be processed to obtain an abnormal discharge detection result of the subject. In one embodiment, the processing device 104 may include a display, may display the abnormal discharge detection results, and may also display one or more of biomedical monitoring data, video monitoring data.
Any two or more of the biomedical monitoring device 102, the video capture device 103, and the processing device 104 may be integrated in the same device. For example, the biomedical monitoring device 102 and the processing device 104 may be integrated in the same device; more specifically, the master device 1024 of the biomedical monitoring device 102 may serve as the processing device 104. Alternatively, the video capturing device 103 may be integrated in the biomedical monitoring device 102 or the processing device 104. For example, the biomedical monitoring device 102 may include a headset in which the detection electrodes 1021, 1022, 1023 are disposed, so that the head contacts the detection electrodes when the subject 101 wears the headset. A fixing bracket may also be provided on the headset, with the video capturing device 103 fixed at the other end of the bracket (for example, a mobile phone may be held by a fixture); the video capturing device 103 faces the face and body of the subject 101, so that video monitoring data can be obtained by shooting.
Fig. 1B shows a schematic diagram of another system architecture for detection of abnormal discharges in the human brain. The system architecture comprises a tested object 101, a biomedical monitoring device 102, a video acquisition device 103, a data transceiver device 105 and a server 106. The biomedical monitoring device 102, the video acquisition device 103 may be communicatively connected to a data transceiver device 105, and the data transceiver device 105 is communicatively connected to a server 106. Thus, the data transceiver 105 acquires biomedical monitoring data or biomedical feature data, face video data, and body key point data, and then transmits the data to the server 106. The server 106 may include any form of data processing server, such as cloud servers, distributed servers, and the like. After acquiring the multimodal data sent by the data transceiver 105, the server 106 executes the abnormal discharge detection method of the brain of the person in the present exemplary embodiment to obtain an abnormal discharge detection result of the tested object. In one embodiment, the server 106 may return the abnormal discharge detection result to the data transceiver 105 for display on the data transceiver 105 or the biomedical monitoring device 102.
The system architecture shown in fig. 1B is suitable for portable scenarios: for example, the subject 101 may use the portable biomedical monitoring device 102 and video capturing device 103 in any place such as the home or office, with the data transceiver device 105 sending the data to the server 106 to realize detection of abnormal brain discharge. The user can therefore obtain the detection result without going out, and when a problem such as suspected epilepsy arises, the obtained detection result can be sent to a doctor to help the doctor judge the condition.
In one embodiment, the system architecture shown in FIG. 1A or FIG. 1B may also include a motion capture device, which may include one or more sensors, bound to the body keypoints of the subject 101 to sense motion at each location and to collect body keypoint data for the subject 101. For example, the sensor may be tied to the arm joint, the finger, and the palm of the subject 101 (e.g., the sensor is disposed at the finger and the palm of the motion capture glove, and the sensor is disposed at the finger and the palm of the subject 101 when the subject 101 wears the motion capture glove), and the motion capture device may acquire real-time position data of the arm joint, the finger, and the palm of the subject 101, thereby obtaining body keypoint data. Of course, the number of sensors and the bound body parts are not limited in the present disclosure, and the sensors of the motion capture device may be bound to other critical parts of the tested object 101 according to specific requirements, besides the arm joints, the fingers, and the palm, so as to collect corresponding body key point data.
Exemplary method
Exemplary embodiments of the present disclosure provide a method of detecting abnormal discharge of a human brain. Referring to fig. 2, the method may include steps S210 to S250. Each step in fig. 2 is described in detail below.
Referring to fig. 2, biomedical feature data of a subject is acquired in step S210.
The subject is a person on whom abnormal brain discharge detection needs to be performed, such as a possible epileptic patient. Biomedical signals are signals generated by physiological processes of the human body and can reflect the physiological state or physical signs of the body. Biomedical feature data are the feature data extracted from biomedical signals. Biomedical signals include, but are not limited to, any one or more of the following: electrophysiological signals such as electrocardiographic, electroencephalographic, electromyographic, electrooculographic and gastric electrical signals, and non-electrophysiological signals such as body temperature, blood pressure, pulse and respiration. Biomedical signals and their acquisition are described by way of example below.
The electrocardiographic signal may be an electrocardiogram acquired by a multi-lead electrocardiograph (ECG). In each cardiac cycle the heart is excited successively by the pacemaker, the atria and the ventricles, and the electrocardiograph picks up, from the body surface, the various patterns of potential change that accompany this bioelectric activity. The electrocardiogram is an objective indicator of the occurrence, spread, and recovery of cardiac excitation.
The electroencephalogram (EEG) is a graph obtained by amplifying and recording, with precision instruments from the scalp, the spontaneous biopotentials of the cerebral cortex; it reflects the spontaneous, rhythmic electrical activity of groups of brain cells recorded by the electrodes. This electrical activity is plotted as the recorded potential versus time, with potential on the vertical axis and time on the horizontal axis. The frequency (period), amplitude and phase of the brain waves constitute the fundamental features of the electroencephalogram.
The electromyographic signal (EMG) is the superposition, in time and space, of the action potentials of motor units across many muscle fibers, and can be obtained by attaching electromyographic sensors to the skin.
The electrooculogram (EOG) is a bioelectric signal arising from the potential difference between the cornea and the retina of the eye; it is very convenient to collect and can be acquired with only a small number of electrodes.
Gastric electrical signals (EGG) are electrical signals generated by contraction of stomach muscles, and can be collected on the surface of the abdominal skin of a human body using electrodes.
The present disclosure is not limited to a particular manner of extracting feature data from biomedical signals, as may include, but is not limited to: preprocessing, data statistics, extracting characteristic data through a neural network, and the like.
In one embodiment, referring to fig. 3, the acquiring biomedical feature data of the subject may include the following steps S310 to S330:
step S310, acquiring biomedical monitoring data of a subject acquired by a biomedical monitoring device.
The biomedical monitoring data may be raw biomedical signals. For example, the biomedical monitoring device may be an electrocardiograph or an electroencephalograph, and the biomedical monitoring data may be the electrocardiographic signal or electrocardiogram acquired by the electrocardiograph, the electroencephalogram signal or electroencephalogram acquired by the electroencephalograph, and so on. An electrocardiogram or electroencephalogram is essentially plotted from the electrocardiographic or electroencephalographic signal at successive moments, so the signal and the corresponding graph can be regarded as equivalent. Fig. 4 shows a schematic diagram of a multichannel electroencephalogram: plotting the raw electroencephalogram monitoring data as curves yields the waveforms shown in fig. 4, i.e., the electroencephalogram. The electroencephalograph may have 23 electrodes, each electrode collecting the signal of one channel. Of course, the number of electrodes of the electroencephalograph is not limited in the present disclosure and may be increased or decreased as the case may be.
Step S320, preprocessing biomedical monitoring data.
By way of example, the preprocessing may include, but is not limited to, one or more of the following (an illustrative code sketch of these steps is given after the list):
(1) Resampling. Resampling changes the sampling rate of a signal and can convert a non-uniformly sampled signal into a uniformly sampled one. For example, the electroencephalogram monitoring data may be resampled at 500 Hz.
(2) Filtering. Illustratively, the electroencephalogram, electrocardiographic and electromyographic monitoring data comprise 29 channels in total, of which the electroencephalogram monitoring data account for 23 channels. A 50 Hz notch filter may be applied to all 29 channels to reduce interference from the AC mains, and a band-pass filter (e.g., the 0.1-70 Hz band) may be applied to the 23 electroencephalogram channels to reduce signal interference from non-electroencephalogram frequency bands.
(3) Noise data removal. Noise data may be caused by poor contact (such as poor contact between the biomedical monitoring device and the corresponding part of the tested object, or poor contact of device cables); removing noise data improves data quality and the accuracy of the detection result. Illustratively, data from noisy channels may be discarded.
(4) Numerical standardization. Numerical standardization maps data of different modalities and types into the same suitable numerical range to facilitate unified processing. For example, different types of biomedical monitoring data such as electroencephalogram, electrocardiographic and electromyographic monitoring data may first be made dimensionless and then normalized to a suitable numerical range, for example by multiplying by corresponding coefficients to complete the numerical mapping.
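The following Python sketch illustrates how such a preprocessing pipeline might look. It is only a minimal example: the 500 Hz resampling rate, 50 Hz notch and 0.1-70 Hz band-pass come from the values mentioned above, while the function name, the bad-channel criterion and the z-score normalization are assumptions introduced here for illustration.

```python
import numpy as np
from scipy import signal

def preprocess_eeg(data, fs_in, fs_out=500.0, notch_hz=50.0, band=(0.1, 70.0)):
    """Resample, notch-filter, band-pass and normalize multichannel EEG.

    data: array of shape (n_channels, n_samples) sampled at fs_in.
    Returns an array of shape (n_good_channels, n_samples_out) at fs_out.
    """
    # 1. Resample each channel to a uniform rate (e.g., 500 Hz).
    n_out = int(round(data.shape[1] * fs_out / fs_in))
    data = signal.resample(data, n_out, axis=1)

    # 2. 50 Hz notch filter to suppress mains interference.
    b_notch, a_notch = signal.iirnotch(w0=notch_hz, Q=30.0, fs=fs_out)
    data = signal.filtfilt(b_notch, a_notch, data, axis=1)

    # 3. 0.1-70 Hz band-pass filter for the EEG channels.
    sos = signal.butter(4, band, btype="bandpass", fs=fs_out, output="sos")
    data = signal.sosfiltfilt(sos, data, axis=1)

    # 4. Drop channels whose variance suggests a bad contact (noise removal),
    #    then z-score the remaining channels (numerical standardization).
    good = data.std(axis=1) < 10 * np.median(data.std(axis=1))
    data = data[good]
    data = (data - data.mean(axis=1, keepdims=True)) / (data.std(axis=1, keepdims=True) + 1e-8)
    return data
```

In practice the noise-removal criterion and the normalization scheme would be chosen according to the specific recording setup.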
And step S330, biomedical characteristic data are obtained according to the preprocessed biomedical monitoring data.
The pre-processed biomedical monitoring data may be used as biomedical feature data, or may be subjected to further feature extraction processing, for example, the pre-processed biomedical monitoring data may be processed by a pre-trained feature extraction model, so as to obtain biomedical feature data.
In one embodiment, the biomedical monitoring data may include: multichannel brain electrical monitoring data acquired from multiple sites of the scalp of a subject. The electroencephalograph may have a plurality of electrodes (such as the detection electrodes 1021, 1022, 1023 described above, commonly referred to as active electrodes), and each electrode may acquire electroencephalogram monitoring data of one channel. One or more electrodes may be provided on each portion of the scalp of the subject, such that a plurality of electrodes are commonly used, thereby acquiring multichannel brain electrical monitoring data. Correspondingly, the obtaining biomedical characteristic data according to the preprocessed biomedical monitoring data may include the following steps:
calculating the potential difference between each channel and the reference electrode according to the preprocessed multichannel electroencephalogram monitoring data to obtain electroencephalogram initial characteristic data;
Biomedical characteristic data are extracted according to the initial characteristic data of the brain electrical signals.
Introducing a reference electrode and expressing the electroencephalogram signal as a potential difference can reduce the influence of noise. The reference electrode may be chosen in ways including, but not limited to, the following. With unipolar leads, active electrodes and an indifferent electrode are placed on the tested object, the indifferent electrode being located, for example, at the earlobe; this is equivalent to using the indifferent electrode as the reference electrode, the acquired multichannel electroencephalogram monitoring data contain the potential difference of each channel relative to the indifferent electrode, and the electroencephalogram initial feature data are obtained after preprocessing. Alternatively, the acquired multichannel electroencephalogram monitoring data contain the potential difference of each channel relative to other active electrodes, and the electroencephalogram initial feature data are likewise obtained after preprocessing these potential differences. Alternatively, active electrodes and a ground electrode are placed on the tested object, and the acquired multichannel electroencephalogram monitoring data contain the potential difference of each channel relative to the ground terminal; one or more of the electrodes may then be taken as the reference electrode, and the potential difference between each channel and the reference electrode is calculated from the preprocessed multichannel electroencephalogram monitoring data to obtain the electroencephalogram initial feature data. For example, if c electrodes of the electroencephalograph are arranged on the scalp of the tested object, any one of the c electrodes may be selected as the reference electrode, or a plurality of electrodes may serve as the reference: the average of the preprocessed data of the c channels is taken as a reference value, and the difference between each channel's preprocessed data and the reference value gives the electroencephalogram initial feature data. In one embodiment, different electrodes may be selected in turn as the reference electrode, the potential difference of each channel is calculated under each reference electrode, and the average of these is then taken as the electroencephalogram initial feature data.
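As an illustration of the average-reference option described above, the following hedged Python sketch computes, for each channel, the potential difference against the mean of selected electrodes; the array shapes and the function name are assumptions introduced for this example.

```python
import numpy as np

def rereference(eeg, reference_channels=None):
    """Compute per-channel potential differences against a reference.

    eeg: preprocessed multichannel EEG of shape (n_channels, n_samples).
    reference_channels: indices of electrodes used as the reference;
    if None, the average of all channels is used as the reference value.
    Returns the EEG initial feature data with the same shape.
    """
    if reference_channels is None:
        reference = eeg.mean(axis=0, keepdims=True)            # average reference
    else:
        reference = eeg[reference_channels].mean(axis=0, keepdims=True)
    return eeg - reference                                      # potential difference per channel
```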
After the electroencephalogram initial characteristic data are obtained, they may be used directly as the biomedical characteristic data obtained in step S210, or they may be further processed, for example by further extracting the effective information they contain, to obtain the biomedical characteristic data.
In one embodiment, the biomedical signature data may include brain electrical signal waveform characteristics. The extracting biomedical feature data according to the initial feature data of the brain electrical signal may include the following steps:
and processing the initial characteristic data of the electroencephalogram signals by utilizing a pre-trained waveform characteristic extraction model so as to extract waveform characteristics of the electroencephalogram signals.
The waveform feature extraction model may be a pre-trained Transformer or a model of another structure, which is capable of extracting time-sequence features and features of other aspects from the electroencephalogram initial characteristic data. For example, the electroencephalogram initial characteristic data may be input into the waveform feature extraction model, and the waveform feature extraction model performs processing such as embedding on the data to obtain the electroencephalogram waveform features.
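A minimal sketch of such a waveform feature extraction model is given below (PyTorch); the channel count, model width and pooling choice are illustrative assumptions, not the structure actually trained in the disclosure.

import torch
import torch.nn as nn

class WaveformFeatureExtractor(nn.Module):
    def __init__(self, n_channels=19, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)                 # embedding of each time step
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, x):                  # x: (batch, time, channels) initial characteristic data
        h = self.encoder(self.embed(x))    # (batch, time, d_model) time-sequence features
        return h.mean(dim=1)               # pooled electroencephalogram waveform feature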
In one embodiment, the biomedical feature data may include an electroencephalogram image feature. The image features may be extracted from the electroencephalogram signal graph that has undergone preprocessing and/or potential difference calculation based on the reference electrode; for example, the electroencephalogram signal graph may be input into a model such as a pre-trained convolutional neural network to obtain the electroencephalogram image features.
In one embodiment, the biomedical signature data may include an electroencephalogram signal time-frequency signature. The extracting biomedical feature data according to the initial feature data of the brain electrical signal may include the following steps:
performing time-frequency conversion on the electroencephalogram signal initial characteristic data to obtain electroencephalogram signal time-frequency data corresponding to the electroencephalogram signal initial characteristic data;
and extracting the time-frequency characteristics of the brain electrical signals according to the time-frequency data of the brain electrical signals.
The electroencephalogram initial characteristic data are usually data in the time domain, which express the change of the electroencephalogram signal over time, and the information they carry is relatively limited. By time-frequency conversion, the electroencephalogram initial characteristic data can be converted into a joint time-frequency domain, so that electroencephalogram time-frequency data corresponding to the electroencephalogram initial characteristic data are obtained, providing the joint distribution information of the signal in the time domain and the frequency domain. The time-frequency conversion includes, but is not limited to, the short-time Fourier transform, the wavelet transform, and the like; the present disclosure does not limit the specific manner employed. Furthermore, the electroencephalogram time-frequency features can be extracted from the electroencephalogram time-frequency data; for example, image features may be extracted from a time-frequency chart, or statistics and feature extraction may be performed on the electroencephalogram time-frequency data to obtain the electroencephalogram time-frequency features.
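For example, the short-time Fourier transform mentioned above can be sketched as follows (Python/SciPy); the sampling rate and window parameters are illustrative assumptions.

import numpy as np
from scipy.signal import stft

def to_time_frequency(signal, fs=256.0):
    """signal: one channel of electroencephalogram initial characteristic data sampled at fs Hz."""
    f, t, Zxx = stft(signal, fs=fs, nperseg=128, noverlap=64)
    return f, t, np.abs(Zxx)   # magnitude spectrogram used as the time-frequency data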
Fig. 5 shows a schematic diagram of extracting feature data from multichannel electroencephalogram monitoring data. Illustratively, the biomedical monitoring data comprise multichannel electroencephalogram monitoring data. The multichannel electroencephalogram monitoring data may be preprocessed, and the potential difference between each channel and the reference electrode is then calculated from the preprocessed multichannel electroencephalogram monitoring data to obtain the electroencephalogram initial characteristic data. Next, on the one hand, the electroencephalogram initial characteristic data are input into a pre-trained waveform feature extraction model, such as a Transformer, to obtain the electroencephalogram waveform features. On the other hand, a short-time Fourier transform is performed on the electroencephalogram initial characteristic data to obtain the electroencephalogram time-frequency data, and the electroencephalogram time-frequency data are then input into a pre-trained time-frequency feature extraction model, such as EfficientNetv2, to obtain the electroencephalogram time-frequency features.
Besides the electroencephalogram signal waveform characteristics and the electroencephalogram signal time-frequency characteristics, other electroencephalogram characteristic data, such as electroencephalogram statistical characteristic data, can be extracted.
In addition, biomedical characteristic data can be extracted aiming at biomedical monitoring data such as electrocardio, myoelectricity, electrooculogram, gastric electricity and the like by adopting a mode of extracting the electroencephalogram characteristic data.
By way of example, the biomedical monitoring data may include multichannel electrocardiographic monitoring data. The potential difference between each channel and the reference electrode may be calculated from the preprocessed multichannel electrocardiographic monitoring data to obtain electrocardiosignal initial characteristic data, and biomedical feature data are extracted from the electrocardiosignal initial characteristic data. In one case, the biomedical feature data include electrocardiosignal waveform features: the electrocardiosignal initial characteristic data may be processed with a pre-trained waveform feature extraction model to extract the electrocardiosignal waveform features. In another case, the biomedical feature data include electrocardiosignal time-frequency features: time-frequency conversion may be performed on the electrocardiosignal initial characteristic data to obtain electrocardiosignal time-frequency data corresponding to the electrocardiosignal initial characteristic data, and the electrocardiosignal time-frequency features are extracted from the electrocardiosignal time-frequency data.
By way of example, the biomedical monitoring data may include multichannel electromyographic monitoring data. The potential difference between each channel and the reference electrode may be calculated from the preprocessed multichannel electromyographic monitoring data to obtain electromyographic signal initial characteristic data, and biomedical feature data are extracted from the electromyographic signal initial characteristic data. In one case, the biomedical feature data include electromyographic signal waveform features: the electromyographic signal initial characteristic data may be processed with a pre-trained waveform feature extraction model to extract the electromyographic signal waveform features. In another case, the biomedical feature data include electromyographic signal time-frequency features: time-frequency conversion may be performed on the electromyographic signal initial characteristic data to obtain electromyographic signal time-frequency data corresponding to the electromyographic signal initial characteristic data, and the electromyographic signal time-frequency features are extracted from the electromyographic signal time-frequency data.
In one embodiment, after the original biomedical monitoring data is obtained, the biomedical monitoring data may be segmented for a certain period of time (for example, 4 seconds), for example, biomedical monitoring data segments in units of 4 seconds are obtained, biomedical monitoring data of each segment is preprocessed respectively, and characteristics of the preprocessed biomedical monitoring data are extracted to obtain biomedical characteristic data of each segment. And then detecting whether the tested object generates abnormal discharge in each segment by taking the segment as a unit.
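The segmentation described above can be sketched as follows (Python/NumPy); the 4-second segment length matches the example, while the array layout and function name are assumptions.

import numpy as np

def segment(data, fs, seg_seconds=4.0):
    """data: continuous monitoring data of shape (channels, samples) at sampling rate fs.
    Returns a list of fixed-length segments; a trailing remainder shorter than one segment is dropped."""
    seg_len = int(seg_seconds * fs)
    n_segments = data.shape[1] // seg_len
    return [data[:, i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]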
With continued reference to fig. 2, in step S220, video monitoring data of the subject is acquired.
The video monitoring data may be captured by the video acquisition device and may be video of the subject taken over a period of time. In one embodiment, the video monitoring data may include face video monitoring data and body video monitoring data. These may be acquired by two cameras of the video acquisition device, one for the face and one for the body; or the face and body of the subject may be captured at the same time by a single camera of the video acquisition device to obtain video monitoring data, from which the face video monitoring data and the body video monitoring data are then separated by means such as picture cropping.
In the present exemplary embodiment, after acquiring the video monitoring data of the object to be tested, two processes may be performed, namely, detecting motion information of the object to be tested and extracting an image sequence of interest, which are performed in steps S230 and S240, respectively.
With continued reference to fig. 2, in step S230, motion information of the object under test is detected from the video monitoring data.
The motion information is used to represent which part of the subject is moving, or what motion the subject performs. Dynamic changes of body parts of the subject can be detected from the video monitoring data, and the motion information of the subject can thus be identified. Alternatively, the position information or position change information of key points of the subject may be used as the motion information of the subject. For example, the motion information of the subject may include: the key points of the face and body of the subject at which motion occurs, and the displacement information of those key points.
With continued reference to fig. 2, in step S240, a sequence of images of interest for characterizing the motion of the subject is extracted from the video surveillance data.
The image sequence of interest may be an image sequence of a region of interest (Region Of Interest, ROI) in the video monitoring data, or an image sequence of frames of interest. For example, the face and body of the subject may be identified from the video monitoring data, and multiple frames of face region images and multiple frames of body region images may be cropped to form the image sequence of interest. Alternatively, frames of interest in which a dynamic change of the subject occurs may be identified from the video monitoring data, the body part region of the subject in which the dynamic change occurs, that is, the region of interest, is then identified in the frames of interest, and images of the region of interest are cropped from the frames of interest to form the image sequence of interest.
In one embodiment, the video monitoring data includes face video monitoring data, for example, video data collected by a camera specially shooting a face in the video collecting device, or video data of a face area cut out from the video monitoring data. Accordingly, the detecting the motion information of the detected object according to the video monitoring data may include the following steps:
detecting face key point data from the face video monitoring data;
and obtaining the face action information of the tested object according to the face key point data.
Fig. 6 shows a schematic diagram of a face key point and a body key point. The 33 key points of the face and the body can be numbered and respectively marked as key points 0 to 32. Wherein, the key points 0-10 are face key points, and the key points 11-32 are body key points. Of course, the number of key points is not limited in the present disclosure, and may be increased or decreased according to specific situations.
The face key point data may include the locations of face key points at different times. The face keypoint data may be detected by a pre-trained face detection model or by employing a face detection algorithm. For example, one or more frames of images may be captured from the face video monitoring data, and the images may be analyzed by a target detection and keypoint detection algorithm to detect the positions of the face keypoints, e.g., by detecting the positions of the face parts such as eyes, nose, mouth, ears, etc., to determine the positions of the face keypoints, thereby obtaining the face keypoint data.
The face key point data represent the static positions of the face key points at the moments of one or more frames. Position change information of the face key points can be analyzed from these positions and used as the face action information of the subject; alternatively, face actions such as blinking, head shaking, frowning and mouth opening can be further recognized, and the recognition result is used as the face action information of the subject.
In one embodiment, the obtaining the face motion information of the detected object according to the face key point data may include the following steps:
and determining the key points of the face with motion and the displacement information thereof according to the key point data of the face to obtain the face action information of the tested object.
The face key point data can represent the positions of face key points at different moments, can determine the face key points moving within a period of time according to the positions of the face key points, and records the marks (such as key point numbers in fig. 6) of the face key points and the displacement information thereof to form the face action information of the tested object.
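A minimal sketch of this step is given below (Python/NumPy): keypoints whose displacement between two frames exceeds a threshold are recorded together with their displacement. The pixel threshold and function name are illustrative assumptions.

import numpy as np

def moving_keypoints(prev, curr, threshold=2.0):
    """prev, curr: (num_keypoints, 2) arrays of (x, y) positions in two frames."""
    displacement = curr - prev
    distance = np.linalg.norm(displacement, axis=1)
    moving = np.where(distance > threshold)[0]                 # numbers of the keypoints that moved
    return [(int(k), displacement[k].tolist()) for k in moving]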
In one embodiment, the positional relationship between different face key points can be determined from the face key point data to generate face action data to be recognized; the face action data to be recognized are matched with the face action data of a plurality of standard face actions, and the face action information corresponding to the face key point data is determined according to the matching result. The face action data to be recognized may be positional relationship sequence data of the face key points. For example, the positional relationship between face key points may be characterized as a vector whose different dimensions represent distances, orientations and the like between different key points; from the face key point data of each frame, the face key point positional relationship data of that frame can be generated correspondingly in vector form. The face key point positional relationship data of a single frame are matched with the face action data of preset standard face actions, where the face action data of a standard face action may likewise be a vector of face key point positional data; by calculating the similarity between the vectors, the standard face action whose face action data match the face action data to be recognized is found, and if the similarity between the two reaches a similarity threshold, it is determined that the face action data to be recognized correspond to that standard face action, thereby obtaining the face action information corresponding to the face key point data. Alternatively, the face key point positional relationship data of different frames are formed into a sequence as the face action data to be recognized, the face action data of the standard face actions may likewise be sequences, and the face action information corresponding to the face key point data can be recognized through matching between the sequences.
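The similarity matching described above can be sketched as follows (Python/NumPy), using cosine similarity between the vector to be recognized and each standard action vector; the similarity threshold and the dictionary of standard actions are illustrative assumptions.

import numpy as np

def match_action(query, standard_actions, threshold=0.9):
    """standard_actions: mapping from an action name to its reference vector."""
    best_name, best_sim = None, 0.0
    for name, ref in standard_actions.items():
        sim = float(np.dot(query, ref) /
                    (np.linalg.norm(query) * np.linalg.norm(ref) + 1e-8))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None        # None means no standard action matched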
In one embodiment, the sequence of images of interest comprises a sequence of facial images. The extracting the image sequence of interest for characterizing the motion of the tested object from the video monitoring data may include the following steps:
and cutting out a face region image from a plurality of frames of face video monitoring data according to the face key point data to obtain a face image sequence.
The face area image is an image containing the entire face. The face video monitoring data may include picture content other than a face, such as an environmental picture around the subject. After the face key point data is obtained, the position of the face area can be determined, and the face area image is cut out from a plurality of frames of face video monitoring data. The face region images of different frames form a sequence of face images.
By cutting out the face region image, a face image sequence is obtained, information irrelevant to the face in the face video monitoring data can be removed, the cutting processing process is simple, the calculated amount is reduced, and the accuracy of the subsequent detection result is improved.
In one embodiment, after the face video monitoring data is obtained, the face video monitoring data may be segmented for a certain time length (for example, 4 seconds, which may be the same as the time length of the segmentation of the biomedical monitoring data), for example, a face video monitoring data segment in units of 4 seconds is obtained. And determining one or more key frames, such as a first frame, a middle point frame, a last frame and the like, in the face video monitoring data of each segment, and detecting face key points in the key frames to obtain displacement information of the face key points so as to form face action information of a detected object. And determining the position of a face region according to the detected face key points, and intercepting face region images from face video monitoring data to form a face image sequence. And then taking the fragments as units, inputting the face action information, the face image sequence and other information into an abnormal discharge detection model so as to detect whether the detected object generates abnormal discharge in each fragment.
In one embodiment, the image sequence of interest comprises a face part image sequence. The extracting the image sequence of interest for characterizing the motion of the subject from the video monitoring data may include the following steps:
and cutting out a face part region image from multiple frames of the face video monitoring data according to the face key point data to obtain a face part image sequence.
Here, the face part region image is an image containing one or more face parts. The face video monitoring data may include the whole face and may also include picture content other than the face, such as the environment around the subject. After the face key point data are obtained, the positions of the face parts, such as the positions of the eyes, nose, mouth and ears, can be determined; a region containing all the face parts (such as the whole face region, excluding the hair and the like) can be determined, or the region of each key face part can be determined (such as the eye region, nose region, mouth region and ear region), and the face part region image is cropped accordingly. The face part region images of different frames form the face part image sequence.
By cropping the face part region images to obtain the face part image sequence, information in the face video monitoring data that is irrelevant to the face or to the face action can be removed, which reduces the amount of calculation and improves the accuracy of the subsequent detection result.
In an embodiment, the cropping of the face part region image from multiple frames of the face video monitoring data according to the face key point data may include the following steps:
and determining the face part with movement according to the face key point data, and cutting out the area image of the face part with movement from a plurality of frames of face video monitoring data.
Because the face key point data represent the static positions of the face key points at the moments of one or more frames, the face parts in which motion occurs (that is, whose positions change dynamically) can be determined by combining the face key point data of different frames, so that region images of the moving face parts are cropped from the face video monitoring data to obtain the face part region images. For example, it may be determined from the face key point data that the key points of the eyes and the nose change dynamically (generally, their positions change) while the key points of other parts such as the mouth and ears do not, which indicates that the eyes and the nose are moving. According to the position information of the key points of the eyes and the nose in the face key point data, a bounding box of the eyes and the nose can be generated from the face video monitoring data (either a single bounding box containing both the eyes and the nose, or separate bounding boxes for the eyes and for the nose), and the image inside the bounding box is cropped out; the bounding box may also be appropriately enlarged (for example, its width and height are multiplied by a scale factor greater than 1, such as 1.1 or 1.2, which may be determined according to experience or specific requirements), and the image inside the enlarged bounding box is cropped, thereby obtaining the face part region image.
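The bounding-box construction and enlargement described above can be sketched as follows (Python/NumPy); the scale factor of 1.2 follows the example, while the function name and the clipping to the frame border are illustrative assumptions.

import numpy as np

def crop_part(frame, part_keypoints, scale=1.2):
    """frame: H x W x 3 image; part_keypoints: (n, 2) array of (x, y) positions of one moving part."""
    x_min, y_min = part_keypoints.min(axis=0)
    x_max, y_max = part_keypoints.max(axis=0)
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    w, h = (x_max - x_min) * scale, (y_max - y_min) * scale    # enlarged bounding box
    x0, y0 = max(int(cx - w / 2), 0), max(int(cy - h / 2), 0)
    x1 = min(int(cx + w / 2), frame.shape[1])
    y1 = min(int(cy + h / 2), frame.shape[0])
    return frame[y0:y1, x0:x1]                                 # region image of the moving part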
It should be understood that the clipping processing may be performed on each frame in the face video monitoring data, for example, the face portion where motion occurs in each frame may be determined according to the face key point data of each frame, so as to clip a corresponding face portion area image. Alternatively, the above-described clipping processing may be performed only for a part of frames in the face video monitoring data without processing each frame, thereby reducing the number and processing amount of face-part area images. For example, after acquiring the face video monitoring data, a key frame image may be extracted, for example, one frame is extracted every certain frame number, or a difference value between adjacent frames is detected, and when the difference value reaches a predetermined value, a next frame in the adjacent frames is extracted as the key frame image. And detecting the key points of the human face aiming at the key frame image to obtain key point data of the human face, determining the human face part moving in the key frame image according to the key point data of the human face, and cutting out a region image of the human face part. Or, for face video monitoring data in a period of time (usually, a unit detection duration, for example, the duration can be determined according to the number of images which can be processed by an abnormal discharge detection model), determining a face part in which motion occurs according to face key point data of each frame, and then cutting out an area image of the face part in which motion occurs for each frame or key frame to obtain a face part area image.
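The key-frame extraction by inter-frame difference described above can be sketched as follows (Python/OpenCV); the grayscale conversion and the difference threshold are illustrative assumptions.

import cv2
import numpy as np

def extract_key_frames(video_path, diff_threshold=15.0):
    """Keeps the later frame of any adjacent pair whose mean absolute difference reaches the threshold."""
    cap = cv2.VideoCapture(video_path)
    key_frames, prev_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None and np.mean(cv2.absdiff(gray, prev_gray)) >= diff_threshold:
            key_frames.append(frame)
        prev_gray = gray
    cap.release()
    return key_frames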
The human face action is mainly reflected on the human face part with the motion, and the regional image of the human face part with the motion is extracted from the human face video monitoring data, so that the emphasis of the subsequent processing can be placed on the human face part with the motion. Particularly, under the condition of abnormal discharge of the human brain, the human face action is usually a relatively fine expression, and the detail information in the region image of the human face part with motion is detected more easily by detecting the region image of the human face part with motion, so that the accuracy of a subsequent detection result is further improved, and the calculated amount is reduced.
In one embodiment, the video monitoring data includes body video monitoring data, for example, video data acquired by a camera specially shooting a body in the video acquisition device, or video data of a body area (excluding a human face) cut out from the video monitoring data. Accordingly, the detecting the motion information of the detected object according to the video monitoring data may include the following steps:
detecting body key point data from body video monitoring data;
and obtaining the body action information of the tested object according to the body key point data.
Wherein body keypoints may be described with reference to fig. 6. The body keypoint data may include the location of body keypoints at different times. Body keypoint data may be detected by pre-trained body detection models or by employing limb detection algorithms. For example, one or more frames of images may be taken from the body video monitoring data, the images may be analyzed by a target detection and keypoint detection algorithm to detect the location of body keypoints, such as by detecting the location of body parts such as the neck, chest, extremities, etc., to determine the location of body keypoints, thereby obtaining body keypoint data.
The body key point data represents the static position of the body key point at one or more frames, the position change information of the body key point can be analyzed from the body key point data and used as the body action information of the tested object, or further the actions of the body are identified, such as the actions of lifting hands, clapping hands, shaking and the like, and the identification result is used as the body action information of the tested object.
In one embodiment, the obtaining the body motion information of the subject according to the body keypoint data may include the following steps:
and determining body key points and displacement information thereof which generate movement according to the body key point data, and obtaining body action information of the tested object.
The body key point data can characterize the positions of body key points at different moments, body key points moving within a period of time can be determined according to the positions of the body key points, and identifications (such as key point numbers in fig. 6) of the body key points and displacement information of the body key points are recorded to form body action information of a tested object.
In one embodiment, the positional relationship between different body key points may be determined from the body key point data to generate body action data to be identified; the body action data to be identified are matched with the body action data of a plurality of standard body actions, and the body action information corresponding to the body key point data is determined according to the matching result. The body action data to be identified may be positional relationship sequence data of the body key points. For example, the positional relationship between body key points may be characterized as a vector whose different dimensions represent distances, orientations and the like between different key points; from the body key point data of each frame, the body key point positional relationship data of that frame can be generated correspondingly in vector form. The body action data of a standard body action may likewise be a vector of body key point data; by calculating the similarity between the vectors, the standard body action whose body action data match the body action data to be identified is found, and if the similarity between the two reaches a similarity threshold, it is determined that the body action data to be identified correspond to that standard body action, thereby obtaining the body action information corresponding to the body key point data. Alternatively, the body action data of the standard body actions may be sequences, and the body action information corresponding to the body key point data can be identified through matching between the sequences.
In one embodiment, the sequence of images of interest comprises a sequence of body images. The extracting the image sequence of interest for characterizing the motion of the tested object from the video monitoring data may include the following steps:
and cutting out the body region image from multiple frames of the body video monitoring data according to the body key point data to obtain a body image sequence.
Wherein the body region image is an image containing the whole body. The body video monitoring data may contain picture content outside the body, such as an environmental picture around the subject. After the body keypoint data is obtained, the location of the body region may be determined and the body region image cropped from the multiple frames of body video monitoring data. The body region images of different frames form a body image sequence.
By clipping the body region image, a body image sequence is obtained, information irrelevant to the body in the body video monitoring data can be removed, the clipping processing process is simple, the calculation amount is reduced, and the accuracy of the subsequent detection result is improved.
In one embodiment, after the body video monitoring data is acquired, the body video monitoring data may be segmented for a certain period of time (e.g., 4 seconds, which may be the same as the biomedical monitoring data is segmented), for example, to obtain segments of the body video monitoring data in units of 4 seconds. In the body video monitoring data of each segment, one or more key frames, such as a first frame, a middle point frame, a last frame and the like, are determined, body key points are detected in the key frames, and displacement information of the body key points is obtained, so that body action information of a tested object is formed. And determining the position of the body area according to the detected body key points, and intercepting body area images from the body video monitoring data to form a body image sequence. And inputting the body motion information, the body image sequence and other information into an abnormal discharge detection model by taking the fragments as units so as to detect whether the tested object generates abnormal discharge in each fragment.
In one embodiment, the sequence of images of interest comprises a sequence of body part images. The extracting the image sequence of interest for characterizing the motion of the tested object from the video monitoring data may include the following steps:
and cutting out a body part region image from multiple frames of the body video monitoring data according to the body key point data to obtain a body part image sequence.
Wherein the body part area image is an image comprising one or more body parts. The body video monitoring data may contain the whole body, and may also contain picture contents outside the body, such as an environmental picture around the subject, and the like. After the body key point data is obtained, the positions of the body parts, such as the positions of the neck, the chest, the left arm, the left hand, the right arm, the left leg, the left foot, the right leg, the right foot and the like, the areas containing all the body parts can be determined, the area (such as the neck area, the left arm area and the left hand area) of each body key part can be determined, and the body part area image can be cut. The body part region images of the different frames form a body part image sequence.
By cutting out the body part area image, a body part image sequence is obtained, and information irrelevant to the body or irrelevant to the body action in the body video monitoring data can be removed, so that the calculation amount is reduced, and the accuracy of the subsequent detection result is improved.
In one embodiment, the cropping of the body part region image from multiple frames of the body video monitoring data according to the body key point data may include the following steps:
and determining the body part with movement according to the body key point data, and cutting out an area image of the body part with movement from multiple frames of body monitoring video data.
Because the body key point data represent the static positions of the body key points at the moments of one or more frames, the body parts in which motion occurs (that is, whose positions change dynamically) can be determined by combining the body key point data of different frames, so that region images of the moving body parts are cropped from the body video monitoring data to obtain the body part region images. For example, it may be determined from the body key point data that the key points of one arm and the corresponding hand change dynamically (generally, their positions change) while the key points of other parts such as the legs do not, which indicates that this arm and hand are moving. According to the position information of the key points of the arm and the hand in the body key point data, a bounding box of these parts can be generated from the body video monitoring data (either a single bounding box containing both the arm and the hand, or separate bounding boxes for each part), and the image inside the bounding box is cropped out; the bounding box may also be appropriately enlarged (for example, its width and height are multiplied by a scale factor greater than 1, such as 1.1 or 1.2, which may be determined according to experience or specific requirements), and the image inside the enlarged bounding box is cropped, thereby obtaining the body part region image.
It should be appreciated that the cropping process described above may be performed on each frame of the body video monitoring data, such as determining the body part in each frame that moves based on the body keypoint data of each frame, and cropping out the corresponding body part region image. Alternatively, the above clipping processing may be performed only for a part of frames in the body video monitoring data without processing each frame, thereby reducing the number and processing amount of body part region images. For example, after acquiring the body video monitoring data, a key frame image may be extracted, for example, one frame is extracted every certain frame number, or a difference between adjacent frames is detected, and when the difference reaches a predetermined value, a subsequent frame in the adjacent frames is extracted as the key frame image. And detecting body key points aiming at the key frame images to obtain body key point data, determining body parts moving in the key frame images according to the body key point data, and cutting out body part area images. Or, for the body video monitoring data in a period of time (usually, the unit detection duration, for example, the duration can be determined according to the number of images which can be processed by the abnormal discharge detection model), determining the body part in which the motion occurs according to the body key point data of each frame, and then cutting out the area image of the body part in which the motion occurs for each frame or key frame to obtain the body part area image.
Since the body motion is mainly embodied on the body part where the movement occurs, the region image of the body part where the movement occurs is extracted from the body video monitoring data, and the emphasis of the subsequent processing can be placed on the body part where the movement occurs. Particularly, under the condition of abnormal discharge of the brain, the body action is usually finer, and the detailed information in the body part is easier to detect by detecting the regional image of the body part with movement, so that the accuracy of the subsequent detection result is further improved, and the calculated amount is reduced.
Fig. 7 shows a schematic diagram of processing video monitoring data. After the video monitoring data are acquired, the face video monitoring data and the body video monitoring data are separated. Inputting the face video monitoring data into a pre-trained face detection model to obtain face key point data; obtaining face action information according to the face key point data; and cutting out a face image sequence from the face video monitoring data according to the face key point data. Inputting the body video monitoring data into a pre-trained body detection model to obtain body key point data; obtaining body action information according to the body key point data; and cutting out a body image sequence from the body video monitoring data according to the body key point data.
With continued reference to fig. 2, in step S250, biomedical feature data, motion information, and an image sequence of interest are processed by using a pre-trained abnormal discharge detection model, so as to obtain an abnormal discharge detection result of the tested object.
The biomedical feature data, the motion information and the image sequence of interest are data of different modalities, and the abnormal discharge detection model can process the three kinds of data comprehensively to obtain the final abnormal discharge detection result. Data of different types can compensate for each other's missing information. For example, when the subject performs actions such as blinking or speaking, biomedical feature data such as the electroencephalogram may be affected, which can lead to misjudgment; the motion information or the image sequence of interest supplements information that cannot be reflected in the biomedical feature data, so that such misjudgments can be reduced.
In one embodiment, the abnormal discharge detection model includes a feature processing layer, an attention layer, and a classification layer. The feature processing layer, the attention layer, and the classification layer are three main parts of the abnormal discharge detection model, each of which may include one or more intermediate layers.
Referring to fig. 8, the processing of biomedical feature data, motion information, and an interested image sequence by using the pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object may include the following steps S810 to S840:
Step S810, inputting biomedical characteristic data, action information and an interested image sequence into an abnormal discharge detection model;
step S820, extracting action feature data from the action information by utilizing a feature processing layer, extracting image feature data from the interested image sequence, and fusing biomedical feature data, action feature data and image feature data to obtain fused features;
step S830, the fusion feature is characterized by using the attention layer to obtain an embedded feature;
in step S840, the classification layer is used to map the embedded features to the output space, so as to obtain the abnormal discharge detection result of the tested object.
The feature processing layer may directly use the motion information as the motion feature data, for example when the motion information is already a motion recognition result; or, when the motion information includes the position information of face key points and/or body key points, the feature processing layer may further process the motion information by means of fully connected layers, attention and the like to extract the motion feature data. The feature processing layer may convolve the image sequence of interest, and the like, to extract the image feature data. By way of example, the feature processing layer may include neural network units such as an LSTM (Long Short-Term Memory network), a GRU (Gated Recurrent Unit) or a CNN (Convolutional Neural Network), which are capable of processing the image sequence of interest and extracting the image feature data. The feature processing layer may also perform feature fusion by means of an MLP (multi-layer perceptron), concatenation and the like. For example, the feature processing layer may include one or more fully connected layers and a concatenation layer: the biomedical feature data, the motion feature data and the image feature data are aligned in feature dimension through the fully connected layers, and then concatenated through the concatenation layer to obtain the fused feature.
At the attention layer, the fused feature may be selectively weighted, for example by re-characterizing it with attention weights, to obtain the embedded feature. The embedded feature may be a dense feature, each dimension of which may fuse information from different modalities.
The classification layer may include a fully connected layer and the like. By performing fully connected operations on the embedded feature, the embedded feature is mapped step by step to the output space, and a probability value of abnormal discharge detection may be obtained through an activation function such as sigmoid (S-type function) or softmax (normalized exponential function). This probability value may be used as the final abnormal discharge detection result, or whether abnormal discharge exists may be determined from the probability value to obtain the final abnormal discharge detection result.
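The feature processing layer, attention layer and classification layer described in steps S820 to S840 can be sketched together as follows (PyTorch). The layer widths, the GRU used to summarize per-frame image features, and the gating-style attention are illustrative assumptions showing one possible realization, not the model actually trained in the disclosure.

import torch
import torch.nn as nn

class AbnormalDischargeDetector(nn.Module):
    def __init__(self, bio_dim=256, action_dim=64, img_dim=512, hidden=128):
        super().__init__()
        # Feature processing layer: align each modality to a common width, then concatenate.
        self.bio_fc = nn.Linear(bio_dim, hidden)
        self.action_fc = nn.Linear(action_dim, hidden)
        self.img_rnn = nn.GRU(img_dim, hidden, batch_first=True)   # summarizes per-frame image features
        # Attention layer: re-weight the fused feature to obtain the embedded feature.
        self.attn = nn.Sequential(nn.Linear(3 * hidden, 3 * hidden), nn.Sigmoid())
        # Classification layer: map the embedded feature to a probability of abnormal discharge.
        self.classifier = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, bio_feat, action_feat, img_seq_feat):
        b = self.bio_fc(bio_feat)                      # (batch, hidden)
        a = self.action_fc(action_feat)                # (batch, hidden)
        _, h = self.img_rnn(img_seq_feat)              # img_seq_feat: (batch, frames, img_dim)
        i = h[-1]                                      # (batch, hidden)
        fused = torch.cat([b, a, i], dim=-1)           # fused feature
        embedded = fused * self.attn(fused)            # attention-weighted embedded feature
        return self.classifier(embedded)               # probability of abnormal discharge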
Fig. 9 shows a schematic diagram of acquiring a training data set. For example, biomedical monitoring sample data are obtained; electroencephalogram signals can be collected by an electroencephalograph to obtain multichannel electroencephalogram monitoring sample data. The data are preprocessed and then labeled for abnormal discharge. The continuous data are cut into segments of a fixed length, such as segments of 4 seconds, and the segments are classified into positive and negative samples, with the time period of each sample on the original continuous data recorded: segments containing an abnormal discharge label are extracted as positive samples, segments without an abnormal discharge label are taken as negative samples, and samples labeled as the seizure period or as suspicious (where the doctor has not confirmed whether abnormal discharge exists) are discarded. Biomedical feature training data are then extracted, and the labeled result of whether abnormal discharge occurs is taken as the label data. Video monitoring sample data of the same time period as the biomedical monitoring sample data are acquired, which may include face and body video monitoring sample data. The face and limbs in the video monitoring sample data are analyzed, face key points and body key points are detected, and the action sample information and the image sample sequence of interest are further obtained. The biomedical feature training data, the action sample information, the image sample sequence of interest and the corresponding label data of the same time period form one group of supervised data, and a large amount of supervised data forms the training data set.
Fig. 10 shows a schematic diagram of training an abnormal discharge detection model. The biomedical feature training data, the action sample information and the interested image sample sequence in the training data set are input into an abnormal discharge detection model to be trained, corresponding abnormal discharge detection sample data is obtained, a loss function value is calculated based on the abnormal discharge detection sample data and the labeling data, and parameters of the abnormal discharge detection model are updated according to the loss function value. And iteratively executing the updating process until the abnormal discharge detection model reaches a preset training completion condition, such as that the accuracy rate on the verification set or the test set reaches the standard, so as to obtain a trained abnormal discharge detection model.
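A minimal sketch of the training update described above is given below (PyTorch), assuming the detector sketched earlier and a data loader that yields biomedical feature training data, action sample information, image sample sequences and labels per segment; the optimizer, loss and epoch count are illustrative assumptions.

import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()                      # binary label: abnormal discharge or not
    for epoch in range(epochs):
        for bio, action, img_seq, label in loader:
            prob = model(bio, action, img_seq).squeeze(-1)
            loss = criterion(prob, label.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()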
In one embodiment, before processing the biomedical feature data, the motion information, the image sequence of interest using the pre-trained abnormal discharge detection model, the human brain abnormal discharge detection method may further include the steps of:
at least one of the motion information and the sequence of images of interest is time aligned with the biomedical feature data.
The biomedical feature data, the motion information and the image sequence of interest come from different devices, and time differences may exist between those devices, so that the biomedical feature data, the motion information and the image sequence of interest are not synchronized in time, which affects the human brain abnormal discharge detection result. For example, the time information of the biomedical feature data comes from the clock of the biomedical monitoring device, while the time information of the motion information and of the image sequence of interest comes from the clock of the video acquisition device. It can therefore be considered that the time information of the motion information and of the image sequence of interest is consistent, and at least one of the motion information and the image sequence of interest is time-aligned with the biomedical feature data, thereby making the three kinds of data consistent in time.
The following will take the time for aligning motion information with biomedical feature data as an example.
In one embodiment, the time-aligning at least one of the motion information and the image sequence of interest with the biomedical feature data may include the steps of:
detecting biomedical feature data of one or more suspected abnormal discharges from the biomedical feature data and corresponding one or more first time points;
detecting one or more pieces of action data of suspected abnormal discharge from the action information and one or more corresponding second time points;
matching biomedical characteristic data of the suspected abnormal discharge with action data of the suspected abnormal discharge, and determining a corresponding relation between a first time point and a second time point according to a matching result;
based on the correspondence between the first time point and the second time point, a time calibration parameter is determined, and the action information is time aligned with the biomedical feature data using the time calibration parameter.
The suspected abnormal discharge may occur at one or more time points, and these time points are the first time points or the second time points. Alternatively, the suspected abnormal discharge may occur in one or more time periods, and the start time point and end time point of each such period may be regarded as first time points or second time points. For example, if a period in which the signal value rises abnormally is detected in the biomedical feature data, the biomedical feature data within that period may be regarded as biomedical feature data of a suspected abnormal discharge, and the start time point and end time point of the period may be regarded as first time points. In a similar manner, the action data of the suspected abnormal discharge and the second time points may be determined.
Matching biomedical characteristic data of suspected abnormal discharge with action data of suspected abnormal discharge, and further calculating a time difference between a first time point and a second time point with corresponding relation to obtain a time calibration parameter. For example, biomedical signature data of suspected abnormal discharge at each first time point may be acquired, and the first time point and a second time point closest thereto may be matched into the same set of time calibration data.
In one embodiment, the matching the biomedical characteristic data of the suspected abnormal discharge with the action data of the suspected abnormal discharge may include the following steps:
determining a first relative value between biomedical feature data of suspected abnormal discharge and other biomedical feature data;
determining a second relative value between the action data of the suspected abnormal discharge and other action data in the action information;
and comparing the first relative value with the second relative value to obtain a matching result between the biomedical characteristic data of the suspected abnormal discharge and the action data of the suspected abnormal discharge.
Wherein the first relative value represents a relative difference size, such as in the form of a percentage, between biomedical characteristic data of suspected abnormal discharge and biomedical characteristic data in a normal state. The second relative value indicates the magnitude of the relative difference between the operation data of the suspected abnormal discharge and the operation data in the normal state. The closest first relative value and second relative value can be matched together to obtain a matching result between the biomedical characteristic data of the suspected abnormal discharge and the action data of the suspected abnormal discharge, so that a group of time calibration data is formed at the corresponding first time point and second time point. This can improve the accuracy of the matching.
The time difference between the first time point and the second time point in each group of time calibration data is calculated to obtain the time calibration parameter. The time calibration parameters of the groups may be averaged to obtain the final time calibration parameter. For example, the time calibration parameter may include the time difference of the second time point relative to the first time point, taking the first time point as the reference. The time stamps of the motion information are adjusted according to this time difference, and the time stamps of the image sequence of interest are adjusted likewise, thereby achieving the time alignment of the biomedical feature data, the motion information and the image sequence of interest.
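The derivation of the time calibration parameter can be sketched as follows (Python/NumPy): each first time point is paired with its nearest second time point and the offsets are averaged. The pairing-by-nearest-time rule and the function name are illustrative assumptions; the returned offset is then subtracted from the timestamps of the motion information and of the image sequence of interest.

import numpy as np

def time_calibration(first_points, second_points):
    """first_points, second_points: lists of timestamps (seconds) of suspected abnormal discharges."""
    second = np.asarray(second_points)
    offsets = [second[np.argmin(np.abs(second - t))] - t for t in first_points]
    return float(np.mean(offsets))          # time difference of second time points relative to first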
Fig. 11 shows a schematic diagram of detection of abnormal discharge in the human brain. Acquiring multichannel electroencephalogram monitoring data by an electroencephalogram machine, and obtaining biomedical characteristic data through preprocessing and characteristic extraction; the method comprises the steps that video acquisition equipment acquires face video monitoring data and body video monitoring data, face key point detection is carried out on the face video monitoring data, face action information and a face image sequence are obtained through further processing, body key point detection is carried out on the body video monitoring data, and body action information and a body image sequence are obtained through further processing. The biomedical characteristic data, the face action information, the face image sequence, the body action information and the body image sequence are input into a trained abnormal discharge detection model, and an abnormal discharge detection result is output.
Exemplary apparatus
The exemplary embodiment of the disclosure also provides a device for detecting abnormal discharge of human brain. Referring to fig. 12, the human brain abnormal discharge detection apparatus 1200 may include the following program modules: a first acquisition module 1210 configured to acquire biomedical feature data of a subject; a second obtaining module 1220 configured to obtain video monitoring data of the object under test; a motion information detection module 1230 configured to detect motion information of the object under test from the video monitoring data; an image sequence extraction module 1240 configured to extract an image sequence of interest from the video surveillance data for characterizing the motion of the subject; the model processing module 1250 is configured to process the biomedical feature data, the motion information and the interested image sequence by using a pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object.
In one embodiment, obtaining biomedical feature data of a subject includes: acquiring biomedical monitoring data of a tested object acquired by a biomedical monitoring device; preprocessing biomedical monitoring data; and obtaining biomedical characteristic data according to the preprocessed biomedical monitoring data.
In one embodiment, the biomedical monitoring data comprises: multichannel electroencephalogram monitoring data acquired from a plurality of sites of the scalp of a subject; obtaining biomedical characteristic data according to the preprocessed biomedical monitoring data, wherein the biomedical characteristic data comprises: calculating the potential difference between each channel and the reference electrode according to the preprocessed multichannel electroencephalogram monitoring data to obtain electroencephalogram initial characteristic data; biomedical characteristic data are extracted according to the initial characteristic data of the brain electrical signals.
In one embodiment, the biomedical signature data comprises an electroencephalogram waveform signature; extracting biomedical feature data according to the initial feature data of the brain electrical signal, comprising: and processing the initial characteristic data of the electroencephalogram signals by utilizing a pre-trained waveform characteristic extraction model so as to extract waveform characteristics of the electroencephalogram signals.
In one embodiment, the biomedical signature data comprises an electroencephalogram signal time-frequency signature; extracting biomedical feature data according to the initial feature data of the brain electrical signal, comprising: performing time-frequency conversion on the electroencephalogram signal initial characteristic data to obtain electroencephalogram signal time-frequency data corresponding to the electroencephalogram signal initial characteristic data; and extracting the time-frequency characteristics of the brain electrical signals according to the time-frequency data of the brain electrical signals.
In one embodiment, the biomedical monitoring data is pre-processed, including at least one of: resampling, filtering, removing noise data and carrying out numerical value standardization processing.
In one embodiment, the video surveillance data includes face video surveillance data; detecting motion information of a detected object according to video monitoring data, including: detecting face key point data from the face video monitoring data; and obtaining the face action information of the tested object according to the face key point data.
In one embodiment, the sequence of images of interest comprises a sequence of face images; extracting an image sequence of interest for characterizing an action of a subject from video monitoring data, comprising: and cutting out a face area image from a plurality of frames of face video monitoring data according to the face key point data to obtain a face image sequence.
In one embodiment, obtaining face motion information of a detected object according to face key point data includes: and determining the key points of the face with motion and the displacement information thereof according to the key point data of the face to obtain the face action information of the tested object.
In one embodiment, the video monitoring data comprises body video monitoring data; detecting motion information of a detected object according to video monitoring data, including: detecting body key point data from body video monitoring data; and obtaining the body action information of the tested object according to the body key point data.
In one embodiment, the sequence of images of interest comprises a sequence of body images; extracting an image sequence of interest for characterizing an action of a subject from video monitoring data, comprising: and cutting out the body region image from multiple frames of the body monitoring video data according to the body key point data to obtain a body image sequence.
In one embodiment, obtaining the body action information of the subject from the body key point data comprises: determining, from the body key point data, the body key points that move and their displacement information, thereby obtaining the body action information of the subject.
In one embodiment, the abnormal discharge detection model comprises a feature processing layer, an attention layer, and a classification layer; processing the biomedical feature data, the action information, and the image sequence of interest with the pre-trained abnormal discharge detection model to obtain the abnormal discharge detection result of the subject comprises: inputting the biomedical feature data, the action information, and the image sequence of interest into the abnormal discharge detection model; using the feature processing layer to extract action feature data from the action information and image feature data from the image sequence of interest, and fusing the biomedical feature data, the action feature data, and the image feature data to obtain fusion features; using the attention layer to characterize the fusion features to obtain embedded features; and using the classification layer to map the embedded features to an output space to obtain the abnormal discharge detection result of the subject.
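A compact PyTorch sketch of this three-layer structure is given below; the layer widths, the use of multi-head self-attention, the frame-level image descriptors, and the two-class output are illustrative assumptions, whereas the disclosure only requires a feature processing layer, an attention layer, and a classification layer.

```python
import torch
import torch.nn as nn

class AbnormalDischargeDetector(nn.Module):
    """Sketch: feature processing layer, attention layer, classification layer."""

    def __init__(self, bio_dim=64, motion_dim=32, img_dim=128, embed_dim=128, n_classes=2):
        super().__init__()
        # Feature processing layer: encode the action information and the image
        # sequence, then fuse them with the biomedical feature data.
        self.motion_encoder = nn.Sequential(nn.Linear(motion_dim, embed_dim), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(img_dim, embed_dim), nn.ReLU())
        self.bio_proj = nn.Linear(bio_dim, embed_dim)
        self.fuse = nn.Linear(3 * embed_dim, embed_dim)
        # Attention layer: characterize the fusion features to obtain embedded features.
        self.attention = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        # Classification layer: map the embedded features to the output space.
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, bio_feat, motion_feat, image_feat):
        # Each input: (batch, seq_len, dim), assumed already time aligned per time step.
        fused = torch.cat(
            [self.bio_proj(bio_feat),
             self.motion_encoder(motion_feat),
             self.image_encoder(image_feat)],
            dim=-1,
        )
        fused = self.fuse(fused)                           # fusion features
        embedded, _ = self.attention(fused, fused, fused)  # embedded features
        return self.classifier(embedded.mean(dim=1))       # abnormal discharge logits
```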
In one embodiment, the model processing module 1250 is further configured to: time-align at least one of the action information and the image sequence of interest with the biomedical feature data before processing the biomedical feature data, the action information, and the image sequence of interest with the pre-trained abnormal discharge detection model.
In one embodiment, time-aligning at least one of the action information and the image sequence of interest with the biomedical feature data comprises: detecting, from the biomedical feature data, one or more pieces of biomedical feature data of suspected abnormal discharges and one or more corresponding first time points; detecting, from the action information, one or more pieces of action data of suspected abnormal discharges and one or more corresponding second time points; matching the biomedical feature data of the suspected abnormal discharges with the action data of the suspected abnormal discharges, and determining the correspondence between the first time points and the second time points from the matching result; and determining a time calibration parameter based on that correspondence and using it to time-align the action information with the biomedical feature data.
In one embodiment, matching the biomedical feature data of a suspected abnormal discharge with the action data of a suspected abnormal discharge comprises: determining a first relative value between the biomedical feature data of the suspected abnormal discharge and the other biomedical feature data; determining a second relative value between the action data of the suspected abnormal discharge and the other action data in the action information; and comparing the first relative value with the second relative value to obtain the matching result between the biomedical feature data of the suspected abnormal discharge and the action data of the suspected abnormal discharge.
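The matching and calibration just described can be illustrated by the sketch below, in which each suspected-discharge event is summarized by its time point and a relative value, events are paired by relative-value similarity, and the median time difference of the pairs serves as the time calibration parameter; the tolerance and the use of the median are assumptions introduced for the example.

```python
import numpy as np

def estimate_time_offset(bio_events, motion_events, tolerance=0.2):
    """Match suspected abnormal-discharge events across modalities and estimate an offset.

    bio_events, motion_events: lists of (time_point, relative_value) pairs,
    where relative_value expresses the event's strength relative to the
    surrounding data (the first and second relative values above).
    Returns a time calibration parameter (seconds) to add to the motion times.
    """
    offsets = []
    for t_bio, r_bio in bio_events:
        # Pair each biomedical event with the motion event of closest relative value.
        candidates = [(abs(r_bio - r_mot), t_mot)
                      for t_mot, r_mot in motion_events
                      if abs(r_bio - r_mot) <= tolerance]
        if candidates:
            _, t_mot = min(candidates)
            offsets.append(t_bio - t_mot)
    if not offsets:
        return 0.0  # no reliable match; leave the streams unshifted
    return float(np.median(offsets))

def align_motion_times(motion_times, offset):
    """Shift motion time stamps onto the biomedical time axis."""
    return [t + offset for t in motion_times]
```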
Other specific details of the embodiments of the present disclosure are described in the method embodiments above and are not repeated here.
Exemplary Storage Medium
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described methods of the present disclosure. The above method may be implemented as a program product comprising program code, carried on a portable compact disc read-only memory (CD-ROM), for example, and capable of running on a device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
Exemplary Electronic Device
The exemplary embodiments of the present disclosure also provide an electronic device, which may be any of the devices of fig. 1A or 1B. The electronic device includes a processor and a memory for storing executable instructions of the processor. The processor is configured to perform the above-described methods of the present disclosure via execution of the executable instructions.
An electronic device of an exemplary embodiment of the present disclosure is described with reference to fig. 13. The electronic device 1300 shown in fig. 13 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 13, the electronic device 1300 is embodied in the form of a general-purpose computing device. The components of the electronic device 1300 may include, but are not limited to: at least one processing unit 1310, at least one storage unit 1320, and a bus 1330 connecting different system components (including the storage unit 1320 and the processing unit 1310).
The storage unit stores program code executable by the processing unit 1310, such that the processing unit 1310 performs the steps of the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section of this specification. For example, the processing unit 1310 may perform the method steps shown in fig. 2.
The storage unit 1320 may include volatile storage units such as a Random Access Memory (RAM) 1321 and/or a cache memory 1322, and may further include a Read Only Memory (ROM) 1323.
The storage unit 1320 may also include a program/utility 1324 having a set (at least one) of program modules 1325, such program modules 1325 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1330 may include a data bus, an address bus, and a control bus.
The electronic device 1300 may also communicate with one or more external devices 1400 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.); such communication may occur via an input/output (I/O) interface 1340. The electronic device 1300 may also communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 1350. As shown, the network adapter 1350 communicates with other modules of the electronic device 1300 via the bus 1330. It should be appreciated that, although not shown, other hardware and/or software modules may be used in connection with the electronic device 1300, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
It should be noted that although several modules or sub-modules of the apparatus are mentioned in the detailed description above, such partitioning is merely exemplary and not mandatory. Indeed, in accordance with embodiments of the present disclosure, the features and functionality of two or more units/modules described above may be embodied in a single unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the disclosure is not limited to the particular embodiments disclosed, nor does the division into aspects imply that features in these aspects cannot be combined; that division is made for convenience of description only. The disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (10)
1. A method for detecting abnormal discharge in the brain of a person, comprising:
acquiring biomedical characteristic data of a tested object;
acquiring video monitoring data of the tested object;
detecting action information of the detected object according to the video monitoring data;
extracting an interested image sequence used for representing the action of the tested object from the video monitoring data;
And processing the biomedical feature data, the action information and the interested image sequence by utilizing a pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object.
2. The method of claim 1, wherein the acquiring biomedical feature data of the subject comprises:
acquiring biomedical monitoring data of the tested object acquired by a biomedical monitoring device;
preprocessing the biomedical monitoring data;
and obtaining biomedical characteristic data according to the preprocessed biomedical monitoring data.
3. The method of claim 2, wherein the biomedical monitoring data comprises: multichannel electroencephalogram monitoring data acquired from a plurality of sites of the scalp of the subject; the obtaining the biomedical characteristic data according to the preprocessed biomedical monitoring data comprises the following steps:
calculating the potential difference between each channel and the reference electrode according to the preprocessed multichannel electroencephalogram monitoring data to obtain electroencephalogram initial characteristic data;
and extracting biomedical feature data according to the initial feature data of the electroencephalogram signals.
4. The method of claim 3, wherein the biomedical feature data comprises an electroencephalogram signal waveform feature; the extracting the biomedical feature data according to the electroencephalogram signal initial feature data comprises the following steps:
and processing the initial characteristic data of the electroencephalogram signals by utilizing a pre-trained waveform characteristic extraction model so as to extract the waveform characteristics of the electroencephalogram signals.
5. The method of claim 3, wherein the biomedical feature data comprises an electroencephalogram signal time-frequency feature; the extracting the biomedical feature data according to the electroencephalogram signal initial feature data comprises the following steps:
performing time-frequency conversion on the electroencephalogram signal initial characteristic data to obtain electroencephalogram signal time-frequency data corresponding to the electroencephalogram signal initial characteristic data;
and extracting the time-frequency characteristics of the electroencephalogram signals according to the time-frequency data of the electroencephalogram signals.
6. The method of claim 2, wherein the preprocessing of the biomedical monitoring data comprises at least one of: resampling, filtering, removing noise data and carrying out numerical value standardization processing.
7. The method of claim 1, wherein the video surveillance data comprises face video surveillance data; the detecting the motion information of the tested object according to the video monitoring data comprises the following steps:
Detecting face key point data from the face video monitoring data;
and obtaining the face action information of the tested object according to the face key point data.
8. A human brain abnormal discharge detection device, comprising:
a first acquisition module configured to acquire biomedical feature data of a subject;
the second acquisition module is configured to acquire video monitoring data of the tested object;
the motion information detection module is configured to detect motion information of the tested object according to the video monitoring data;
an image sequence extraction module configured to extract an image sequence of interest from the video surveillance data for characterizing an action of the subject;
and the model processing module is configured to process the biomedical feature data, the action information and the interested image sequence by utilizing a pre-trained abnormal discharge detection model to obtain an abnormal discharge detection result of the tested object.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 7 via execution of the executable instructions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311630555.XA CN117462146A (en) | 2023-11-30 | 2023-11-30 | Method and device for detecting abnormal discharge of human brain, storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311630555.XA CN117462146A (en) | 2023-11-30 | 2023-11-30 | Method and device for detecting abnormal discharge of human brain, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117462146A true CN117462146A (en) | 2024-01-30 |
Family
ID=89623978
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311630555.XA Pending CN117462146A (en) | 2023-11-30 | 2023-11-30 | Method and device for detecting abnormal discharge of human brain, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117462146A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019100566A1 (en) | Artificial intelligence self-learning-based static electrocardiography analysis method and apparatus | |
WO2019161607A1 (en) | Electrocardiogram information dynamic monitoring method and dynamic monitoring system | |
Iftikhar et al. | Multiclass classifier based cardiovascular condition detection using smartphone mechanocardiography | |
US7359749B2 (en) | Device for analysis of a signal, in particular a physiological signal such as an ECG signal | |
Klug et al. | The BeMoBIL Pipeline for automated analyses of multimodal mobile brain and body imaging data | |
US11426113B2 (en) | System and method for the prediction of atrial fibrillation (AF) | |
CN116439725A (en) | Abnormal discharge detection method, model training method, device, medium and equipment | |
CN114343672A (en) | Partial collection of biological signals, speech-assisted interface cursor control based on biological electrical signals, and arousal detection based on biological electrical signals | |
Martinho et al. | Towards continuous user recognition by exploring physiological multimodality: An electrocardiogram (ECG) and blood volume pulse (BVP) approach | |
US20220022805A1 (en) | Seizure detection via electrooculography (eog) | |
Rasheed et al. | Classification of hand-grasp movements of stroke patients using eeg data | |
CN117462146A (en) | Method and device for detecting abnormal discharge of human brain, storage medium and electronic equipment | |
Rahman et al. | A real-time tunable ECG noise-aware system for IoT-enabled devices | |
Rashkovska et al. | Clustering of heartbeats from ECG recordings obtained with wireless body sensors | |
CN117503163A (en) | Method and device for detecting abnormal discharge of human brain, storage medium and electronic equipment | |
Sanamdikar et al. | Classification of ECG Signal for Cardiac Arrhythmia Detection Using GAN Method | |
RU2661756C2 (en) | Brain computer interface device for remote control of exoskeleton | |
Vidyasagar et al. | Signal to Image Conversion and Convolutional Neural Networks for Physiological Signal Processing: A Review | |
Joshi | Fundamentals of Electrocardiografia (ECG) With Arduino Uno | |
CN118787371A (en) | Brain abnormal discharge detection method, model training method, device, medium and equipment | |
CN110739042A (en) | Limb movement rehabilitation method and device based on brain-computer interface, storage medium and equipment | |
Rabbani et al. | Detection of Different Brain Diseases from EEG Signals Using Hidden Markov Model | |
CN117838146A (en) | Method and device for positioning epilepsy induction range, electronic equipment and computer readable storage medium | |
CN117530702A (en) | Method and device for positioning epilepsy induction range, electronic equipment and computer readable storage medium | |
CN118787344A (en) | Method and device for detecting body movement of patient, computer readable storage medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||