CN112466437A - Apoplexy information processing system - Google Patents


Info

Publication number
CN112466437A
CN112466437A
Authority
CN
China
Prior art keywords
picture
information
action
arm
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011212137.5A
Other languages
Chinese (zh)
Inventor
李清华
文剑
林奕斌
蔡枭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Affiliated Hospital of Guilin Medical University
Original Assignee
Affiliated Hospital of Guilin Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Affiliated Hospital of Guilin Medical University
Priority to CN202011212137.5A
Publication of CN112466437A
Legal status: Pending

Classifications

    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/161: Human faces: detection; localisation; normalisation
    • G06V 40/168: Human faces: feature extraction; face representation
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G16H 40/67: ICT for the operation of medical equipment or devices for remote operation
    • G16H 50/20: ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/70: ICT for mining of medical data, e.g. analysing previous cases of other patients
    • G16H 80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Abstract

The invention provides a stroke information processing system comprising a client and a server. The client receives video pictures sent by the patient's intelligent terminal and sends the video data to the server; the video data are recordings of the patient performing actions according to the doctor's instructions. The server receives the video data, identifies it to obtain human body part information and person action information, and from these derives reference information on whether the user is in a suspected stroke state. The invention can detect stroke information automatically, greatly reduces the work of doctors and patients, saves their time, avoids delay of the condition, and can help doctors study the condition.

Description

Apoplexy information processing system
Technical Field
The invention relates to the technical field of medical monitoring, and in particular to a stroke information processing system.
Background
Existing stroke detection relies mainly on on-site observation by doctors, brain CT, and nuclear magnetic resonance. These methods require both the doctor and the patient to be present and consume a large amount of time; diagnosis is sometimes postponed because the doctor is delayed or the patient lives too far away, and treatment of the condition is delayed as a result.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a stroke information processing system that addresses the above deficiencies of the prior art.
The technical solution adopted by the invention to solve the above problem is as follows: a stroke information processing system comprising a client and a server:
the client is used for receiving the video pictures sent by the patient's intelligent terminal and sending the video data to the server, the video data being recordings of the patient performing actions according to the doctor's instructions;
the server is used for receiving the video data, identifying it to obtain human body part information and person action information, and obtaining, from the human body part information and the person action information, reference information on whether the user is in a suspected stroke state.
The invention has the beneficial effects that: the client receives the video pictures sent by the patient's intelligent terminal and sends the video data to the server; the server receives the video data, identifies it to obtain human body part information and person action information, and from these obtains reference information on whether the user is in a suspected stroke state. Stroke information can thus be detected automatically, which greatly reduces the work of doctors and patients, saves their time, avoids delay of the condition, and also helps doctors study the condition.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, the server comprises a model training unit and a video data identification unit;
the model training unit is used for inputting preset video data to be trained, preprocessing it, constructing a training model, and training the training model with the preprocessed data to obtain a trained model;
the video data identification unit is used for processing the video data with the trained model to obtain the human body part information and the person action information.
The beneficial effect of adopting the further scheme is that: the preset video data to be trained is preprocessed, a training model is constructed and trained with the preprocessed data, and the video data is then recognized with the trained model to obtain the human body part information and the person action information. This improves recognition accuracy, realizes automatic detection of stroke information, greatly reduces the work of doctors and patients, saves their time, avoids delay of the condition, and helps doctors study the condition.
Further, the model training unit is specifically configured to:
learn the preset video data to be trained with a deep learning algorithm to obtain channel state information amplitude variation features and channel state information phase variation features;
construct a training model based on the SVM algorithm;
and input the channel state information amplitude variation features and phase variation features into the training model together for training, obtaining a trained model.
The beneficial effect of adopting the further scheme is that: the channel state information amplitude and phase variation features are obtained by learning the preset video data to be trained with a deep learning algorithm, and are input together into a training model constructed on the SVM algorithm to obtain a trained model. This improves recognition accuracy, realizes automatic detection of stroke information, greatly reduces the work of doctors and patients, saves their time, avoids delay of the condition, and helps doctors study the condition.
Further, the human body part information comprises face information, trunk information, hand information and leg information, and the person action information comprises a micro-expression action picture, a tongue-extending action picture, an arm-lifting action picture and a foot-lifting action picture;
the server further comprises a micro-expression action judging unit, a tongue-extending action judging unit, an arm-lifting action judging unit, a foot-lifting action judging unit and a comprehensive judging unit;
the micro-expression action judging unit is used for obtaining, from the face information and the micro-expression action picture, micro-expression action reference information on whether the user is in a suspected stroke state;
the tongue-extending action judging unit is used for obtaining, from the trunk information and the tongue-extending action picture, tongue-extending action reference information on whether the user is in a suspected stroke state;
the arm-lifting action judging unit is used for obtaining, from the hand information and the arm-lifting action picture, arm-lifting action reference information on whether the user is in a suspected stroke state;
the foot-lifting action judging unit is used for obtaining, from the leg information and the foot-lifting action picture, foot-lifting action reference information on whether the user is in a suspected stroke state;
the comprehensive judging unit is used for obtaining reference information on whether the user is in a suspected stroke state from the micro-expression, tongue-extending, arm-lifting and foot-lifting action reference information.
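The aggregation step performed by the comprehensive judging unit can be sketched as follows. Treating any single abnormal sign as grounds for a suspected-stroke flag is an assumption; the patent does not fix the combination rule.

```python
def overall_reference(micro_expr_abnormal, tongue_abnormal,
                      arm_abnormal, foot_abnormal):
    """Combine the four per-action reference results into one overall
    suspected-stroke reference (illustrative "any abnormal" rule)."""
    signs = [micro_expr_abnormal, tongue_abnormal, arm_abnormal, foot_abnormal]
    return {"abnormal_signs": sum(signs), "suspected_stroke": any(signs)}
```

A stricter rule (e.g. requiring two or more abnormal signs) would drop into the same structure by changing the final condition.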
The beneficial effect of adopting the further scheme is that: the person action information is judged action by action against the human body part information, and reference information on whether the user is in a suspected stroke state is obtained from the micro-expression, tongue-extending, arm-lifting and foot-lifting action reference information. Analysing the patient comprehensively over several kinds of action reference information improves recognition accuracy, realizes automatic detection of stroke information, greatly reduces the work of doctors and patients, saves their time, avoids delay of the condition, and helps doctors study the condition.
Further, the micro-expression action picture includes a frown action picture and a tooth-showing action picture, and the micro-expression action judging unit is specifically configured to:
extract left-face and right-face feature points from the face information with a face recognition algorithm, the left-face feature points comprising left cheek feature points, left forehead-line feature points and left mouth-corner feature points, and the right-face feature points comprising right cheek feature points, right forehead-line feature points and right mouth-corner feature points;
compare the left-face feature points with the right-face feature points for similarity to obtain a face comparison value;
judge whether the face comparison value is larger than a preset face judgment value; if so, the micro-expression action reference information is that the face is symmetrical; if not, compare the left forehead-line feature points with the right forehead-line feature points for similarity to obtain a forehead-line comparison value;
judge, for the frown action picture, whether the forehead-line comparison value is larger than a preset forehead-line judgment value; if so, the judgment result of the frown action picture is that the forehead lines of the left and right faces are symmetrical; if not, that they are asymmetrical;
compare the left mouth-corner feature points with the right mouth-corner feature points for similarity to obtain a mouth-corner comparison value;
judge, for the tooth-showing action picture, whether the mouth-corner comparison value is larger than a preset mouth-corner judgment value; if so, the judgment result of the tooth-showing action picture is that the left and right mouth corners are symmetrical; if not, that they are asymmetrical;
and obtain the micro-expression action reference information from the judgment result of the frown action picture and the judgment result of the tooth-showing action picture.
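The symmetry checks above can be sketched as follows, assuming facial landmarks are already available as (x, y) coordinate arrays. The mirror-distance similarity measure and the threshold values are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def similarity(left_pts, right_pts, img_width):
    """Similarity in [0, 1]: compare left-side landmarks against the
    right-side landmarks mirrored about the vertical image axis."""
    mirrored = right_pts.copy()
    mirrored[:, 0] = img_width - mirrored[:, 0]
    mean_dist = np.linalg.norm(left_pts - mirrored, axis=1).mean()
    return 1.0 / (1.0 + mean_dist)  # 1.0 means perfectly symmetric

def micro_expression_reference(left, right, img_width,
                               face_judge=0.9, brow_judge=0.8, mouth_judge=0.8):
    # Whole-face check first; only on failure fall through to the frown
    # (forehead-line) and tooth-showing (mouth-corner) checks.
    if similarity(left["face"], right["face"], img_width) > face_judge:
        return "face symmetric"
    brow_ok = similarity(left["brow"], right["brow"], img_width) > brow_judge
    mouth_ok = similarity(left["mouth"], right["mouth"], img_width) > mouth_judge
    return "; ".join([
        "forehead lines symmetric" if brow_ok else "forehead lines asymmetric",
        "mouth corners symmetric" if mouth_ok else "mouth corners asymmetric",
    ])
```

In practice the landmark arrays would come from a face-alignment model; here they are plain NumPy arrays so the decision flow itself is testable.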
The beneficial effect of adopting the further scheme is that: micro-expression action reference information on whether the user is in a suspected stroke state is obtained from the face information and the micro-expression action pictures. The multi-directional judgment gives a more detailed view of the condition of each part of the patient's body and improves recognition accuracy; micro-expression information is detected automatically, which greatly reduces the work of doctors and patients, saves their time, avoids delay of the condition, and helps doctors study the condition.
Further, the tongue-extending action picture includes a tongue picture, the trunk information includes a trunk picture, and the tongue-extending action judging unit is specifically configured to:
obtain the position of the tongue in the picture from the tongue picture and the position of the central axis of the trunk from the trunk picture, and from these obtain the tongue-extending included angle between the tongue and the central axis of the trunk;
judge whether the tongue-extending included angle is larger than a preset included angle; if so, the tongue-extending action reference information is that the tongue is not centered; if not, that the tongue is centered.
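The angle test can be sketched as follows, assuming the tongue direction and the torso midline have been extracted as 2-D vectors. The 10° preset angle is an assumed value; the patent states only that a preset exists.

```python
import math

def tongue_angle_deg(tongue_vec, axis_vec):
    """Angle in degrees between the tongue direction and the torso midline."""
    dot = tongue_vec[0] * axis_vec[0] + tongue_vec[1] * axis_vec[1]
    norms = math.hypot(*tongue_vec) * math.hypot(*axis_vec)
    cos_a = max(-1.0, min(1.0, dot / norms))  # clamp for float safety
    return math.degrees(math.acos(cos_a))

def tongue_reference(tongue_vec, axis_vec, preset_angle_deg=10.0):
    if tongue_angle_deg(tongue_vec, axis_vec) > preset_angle_deg:
        return "tongue not centered"
    return "tongue centered"
```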
The beneficial effect of adopting the further scheme is that: the tongue-extending included angle between the tongue and the central axis of the trunk is obtained from the tongue position and the central-axis position and then judged, so the condition of each part of the patient's body is known in more detail, recognition accuracy is improved, and the doctor's diagnosis is made easier; the work of doctors and patients is greatly reduced, their time is saved, delay of the condition is avoided, and doctors are helped to study the condition.
Further, the arm-lifting action picture includes a single-arm picture and a double-arm picture, and the arm-lifting action judging unit is specifically configured to:
obtain the holding time of the arm-lifting action from the arm-lifting action picture; obtain, from the single-arm picture, the position of the single arm and the position of a reference line preset in the picture; and obtain, from the double-arm picture, the positions of the left arm and the right arm;
obtain the single-arm included angle between the single arm and the reference line from the single-arm position and the reference-line position, and the double-arm included angle between the left arm and the right arm from the left-arm and right-arm positions;
judge whether the arm-lifting holding time is longer than a preset arm-lifting time; if not, obtain the holding time again from the arm-lifting action picture; if so, judge whether the single-arm included angle is larger than a preset single-arm judgment value, or whether the double-arm included angle is larger than a preset double-arm judgment value, to obtain the arm-lifting action reference information.
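A minimal sketch of this decision flow, with assumed threshold values and an assumed interpretation of the two angle tests (the patent states only that the presets exist and which comparisons are made):

```python
def arm_lift_reference(hold_time_s, single_arm_angle_deg, both_arms_angle_deg,
                       min_hold_s=10.0, single_judge_deg=80.0, both_judge_deg=10.0):
    """Return arm-lift reference info, or None when the hold was too short
    and the holding time must be re-obtained from new pictures."""
    if hold_time_s <= min_hold_s:
        return None
    return {
        # single arm raised past the reference line by more than the preset
        "single_arm_normal": single_arm_angle_deg > single_judge_deg,
        # both arms held roughly level (small angle between them)
        "arms_symmetric": not (both_arms_angle_deg > both_judge_deg),
    }
```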
The beneficial effect of adopting the further scheme is that: arm-lifting action reference information on whether the user is in a suspected stroke state is obtained from the hand information and the arm-lifting action picture, so the condition of each part of the patient's body is known in more detail, recognition accuracy is improved, and the doctor's diagnosis is made easier; the work of doctors and patients is greatly reduced, their time is saved, delay of the condition is avoided, and doctors are helped to study the condition.
Further, the foot-lifting action picture comprises a single-foot picture and a double-foot picture, and the foot-lifting action judging unit is specifically configured to:
obtain the holding time of the foot-lifting action from the foot-lifting action picture; obtain, from the single-foot picture, the position of the sole, the position of the single foot and the position of the ground in the picture; and obtain, from the double-foot picture, the positions of the left foot and the right foot;
obtain the foot-lifting height of the sole above the ground from the sole and ground positions; obtain the single-foot included angle between the single foot and the ground from the single-foot and ground positions; and obtain the double-foot included angle between the left foot and the right foot from the left-foot and right-foot positions;
judge whether the foot-lifting holding time is longer than a preset foot-lifting time; if not, obtain the holding time again from the foot-lifting action picture; if so, judge whether the foot-lifting height is larger than a preset ground-clearance judgment value, or whether the single-foot included angle is smaller than a preset single-foot judgment value, or whether the double-foot included angle is larger than a preset double-foot judgment value, to obtain the foot-lifting action reference information.
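A minimal sketch of the foot-lifting decision, again with assumed threshold values (the patent states only that the presets exist):

```python
def foot_lift_reference(hold_time_s, lift_height_m, single_foot_angle_deg,
                        both_feet_angle_deg, min_hold_s=5.0,
                        height_judge_m=0.2, single_judge_deg=45.0,
                        both_judge_deg=10.0):
    """Return foot-lift reference info, or None when the hold was too short
    and the holding time must be re-obtained from new pictures."""
    if hold_time_s <= min_hold_s:
        return None
    return {
        # sole lifted clear of the ground by more than the preset height
        "height_cleared": lift_height_m > height_judge_m,
        # single foot held below the preset angle to the ground
        "single_foot_ok": single_foot_angle_deg < single_judge_deg,
        # small angle between the two feet
        "feet_symmetric": not (both_feet_angle_deg > both_judge_deg),
    }
```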
The beneficial effect of adopting the further scheme is that: foot-lifting action reference information on whether the user is in a suspected stroke state is obtained from the leg information and the foot-lifting action picture, so the condition of each part of the patient's body is known in more detail, recognition accuracy is improved, and the doctor's diagnosis is made easier; the work of doctors and patients is greatly reduced, their time is saved, delay of the condition is avoided, and doctors are helped to study the condition.
Drawings
Fig. 1 is a block diagram of a stroke information processing system according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a block diagram of a stroke information processing system according to an embodiment of the present invention.
As shown in fig. 1, a stroke information processing system includes a client and a server:
the client is used for receiving the video pictures sent by the patient's intelligent terminal and sending the video data to the server, the video data being recordings of the patient performing actions according to the doctor's instructions;
the server is used for receiving the video data, identifying it to obtain human body part information and person action information, and obtaining, from the human body part information and the person action information, reference information on whether the user is in a suspected stroke state.
Preferably, the client may be a WeChat mini-program client.
It should be understood that the patient sits as required by the doctor and then performs the designated actions and expressions in sequence. The process is recorded as video data, which is sent to the server through the client. When the video data is collected, the whole body must be in frame and no unrelated person may appear.
In the above embodiment, the client receives the video pictures sent by the patient's intelligent terminal and sends the video data to the server; the server receives the video data, identifies it to obtain human body part information and person action information, and from these obtains reference information on whether the user is in a suspected stroke state. Stroke information can thus be detected automatically, which greatly reduces the work of doctors and patients, saves their time, avoids delay of the condition, and also helps doctors study the condition.
Optionally, as an embodiment of the present invention, the server includes a model training unit and a video data recognition unit;
the model training unit is used for inputting preset video data to be trained, preprocessing the preset video data to be trained, constructing a training model, and training the training model through the preprocessed preset video data to be trained to obtain a trained training model;
and the video data identification unit is used for identifying the trained training model according to the video data to obtain the human body part information and the character action information.
In the above embodiment, the preset video data to be trained is preprocessed, a training model is constructed and trained with the preprocessed data, and the video data is then recognized with the trained model to obtain the human body part information and the person action information. This improves recognition accuracy, realizes automatic detection of stroke information, greatly reduces the work of doctors and patients, saves their time, avoids delay of the condition, and helps doctors study the condition.
Optionally, as an embodiment of the present invention, the model training unit is specifically configured to:
learning preset video data to be trained according to a deep learning algorithm to obtain a channel state information amplitude variation characteristic and a channel state information phase variation characteristic;
constructing a training model based on an SVM algorithm;
and inputting the amplitude change characteristic of the channel state information and the phase change characteristic of the channel state information into the training model together for training to obtain a trained training model.
It should be understood that the SVM algorithm, also called a support vector machine, is a binary classification model: it handles small-sample classification problems well, has strong generalization ability, and can handle nonlinear classification; several SVMs used in combination can handle multi-class problems. Meanwhile, Bayesian classification is adopted to enhance the classification ability of the SVM so as to obtain a more accurate result.
Specifically, a large amount of preset video data to be trained is first collected and learned with a deep learning algorithm to obtain the channel state information amplitude variation features and phase variation features. Concretely, a channel state information (CSI) algorithm is used to collect the features of the behaviours and actions in the video; the CSI presents the amplitude and phase of multipath propagation at different frequencies, and therefore describes a channel with frequency-selective fading characteristics more accurately. The SVM algorithm then takes the amplitude and phase variation features as input and, through training, finds an optimal separating surface in the hypothesis space that separates positive and negative samples; the learning strategy is the maximum-margin criterion. After learning is finished, the trained model can recognize each part of the human body, recognize the designated actions, and decompose and judge those actions.
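The training step can be sketched as follows, assuming the CSI amplitude and phase variation features have already been extracted as fixed-length vectors per training clip. The feature dimensions, the synthetic labels, and the use of scikit-learn's `SVC` are illustrative assumptions; the Bayesian enhancement mentioned above is omitted.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200

# Hypothetical pre-extracted CSI features: amplitude-variation and
# phase-variation vectors per clip (dimensions are illustrative).
amplitude_feats = rng.normal(size=(n_samples, 30))
phase_feats = rng.normal(size=(n_samples, 30))
labels = rng.integers(0, 2, size=n_samples)  # 1 = suspected-stroke action

# The two feature groups are input into the SVM together (concatenated).
X = np.hstack([amplitude_feats, phase_feats])

# Maximum-margin classifier; features are scaled first, as SVMs are
# sensitive to feature scale.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, labels)
preds = model.predict(X)
```

A real system would train on labelled clips and hold out a validation split; the sketch only shows how the two feature groups feed one SVM.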
In the above embodiment, the channel state information amplitude variation characteristic and the channel state information phase variation characteristic are obtained by learning the preset video data to be trained with a deep learning algorithm, and both characteristics are input together into the training model constructed on the SVM algorithm to obtain the trained training model. This improves recognition accuracy and automates the detection of stroke information, which reduces the workload of doctors and patients, saves their time, avoids delays in treatment, and helps doctors study the condition.
Optionally, as an embodiment of the present invention, the human body part information includes face information, torso information, hand information, and leg information, and the character action information includes a micro-expression action picture, a tongue-extending action picture, an arm-raising action picture, and a foot-lifting action picture;
the server further comprises a micro-expression action judging unit, a tongue-extending action judging unit, an arm-raising action judging unit, a foot-lifting action judging unit, and a comprehensive judging unit;
the micro-expression action judging unit is used for obtaining, from the face information and the micro-expression action picture, micro-expression action reference information indicating whether the user is in a suspected stroke state;
the tongue-extending action judging unit is used for obtaining, from the torso information and the tongue-extending action picture, tongue-extending action reference information indicating whether the user is in a suspected stroke state;
the arm-raising action judging unit is used for obtaining, from the hand information and the arm-raising action picture, arm-raising action reference information indicating whether the user is in a suspected stroke state;
the foot-lifting action judging unit is used for obtaining, from the leg information and the foot-lifting action picture, foot-lifting action reference information indicating whether the user is in a suspected stroke state;
the comprehensive judging unit is used for obtaining reference information indicating whether the user is in a suspected stroke state from the micro-expression action reference information, the tongue-extending action reference information, the arm-raising action reference information, and the foot-lifting action reference information.
In the above embodiment, the actions in the character action information are judged against the human body part information, and the judgment results indicate whether the user is in a suspected stroke state. Analyzing the patient in all directions through multiple actions improves recognition accuracy and automates stroke detection, which reduces the workload of doctors and patients, saves their time, avoids delays in treatment, and helps doctors study the condition.
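A minimal sketch of how the comprehensive judging unit might combine the four per-action results follows. The aggregation rule (any abnormal action yields a suspected-stroke flag) and the dictionary output format are assumptions for illustration; the text does not specify the combination policy.

```python
def overall_reference(micro_expr, tongue, arm, foot):
    """Each input is True when that action test looked abnormal.

    Combines the micro-expression, tongue-extending, arm-raising,
    and foot-lifting reference results into one overall flag.
    """
    abnormal = [micro_expr, tongue, arm, foot]
    return {
        "suspected_stroke": any(abnormal),   # assumed policy: any abnormality
        "abnormal_count": sum(abnormal),     # how many tests were abnormal
    }

result = overall_reference(micro_expr=True, tongue=False, arm=True, foot=False)
print(result)  # {'suspected_stroke': True, 'abnormal_count': 2}
```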
Optionally, as an embodiment of the present invention, the micro-expression action picture includes a frown action picture and a tooth-showing action picture, and the micro-expression action judging unit is specifically used for:
extracting left and right face feature points from the face information with a face recognition algorithm to obtain left-face feature points and right-face feature points, wherein the left-face feature points include overall left-face feature points, left forehead-line feature points, and left mouth-corner feature points, and the right-face feature points include overall right-face feature points, right forehead-line feature points, and right mouth-corner feature points;
comparing the left-face feature points with the right-face feature points for similarity to obtain a face comparison value;
judging whether the face comparison value is larger than a preset face determination value; if so, the micro-expression action reference information is that the face is symmetrical; if not, comparing the left forehead-line feature points with the right forehead-line feature points for similarity to obtain a forehead-line comparison value;
judging, for the frown action picture, whether the forehead-line comparison value is larger than a preset forehead-line determination value; if so, the judgment result of the frown action picture is that the left and right forehead lines are symmetrical; if not, the judgment result is that they are asymmetrical;
comparing the left mouth-corner feature points with the right mouth-corner feature points for similarity to obtain a mouth-corner comparison value;
judging, for the tooth-showing action picture, whether the mouth-corner comparison value is larger than a preset mouth-corner determination value; if so, the judgment result of the tooth-showing action picture is that the left and right mouth corners are symmetrical; if not, the judgment result is that they are asymmetrical;
and obtaining the micro-expression action reference information from the judgment result of the frown action picture and the judgment result of the tooth-showing action picture.
Preferably, the preset face determination value is 95%.
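One possible way to turn the left/right landmark comparison into a similarity value checked against the 95% face determination value is sketched below. The landmark format (x, y pixel pairs), the midline mirroring, and the distance-based score are illustrative assumptions, not the patent's actual comparison method.

```python
import numpy as np

def symmetry_score(left_pts, right_pts, face_width):
    """Similarity in [0, 1]: 1.0 means perfectly mirror-symmetric."""
    left = np.asarray(left_pts, dtype=float)
    right = np.asarray(right_pts, dtype=float)
    # Mirror the right-side points across the vertical midline so they
    # overlay the corresponding left-side points.
    mirrored = right.copy()
    mirrored[:, 0] = face_width - mirrored[:, 0]
    # Mean landmark displacement, normalized by face width.
    err = np.linalg.norm(left - mirrored, axis=1).mean() / face_width
    return max(0.0, 1.0 - err)

# Perfectly symmetric mouth corners on a 100-unit-wide face.
score = symmetry_score([[30, 60]], [[70, 60]], face_width=100)
is_symmetric = score > 0.95   # the preset face determination value
print(score, is_symmetric)    # 1.0 True
```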
In the above embodiment, micro-expression action reference information indicating whether the user is in a suspected stroke state is obtained from the face information and the micro-expression action picture. Judging from multiple directions gives a more careful picture of each part of the patient's body, improves recognition accuracy, and automates micro-expression detection, which reduces the workload of doctors and patients, saves their time, avoids delays in treatment, and helps doctors study the condition.
Optionally, as an embodiment of the present invention, the tongue-extending action picture includes a tongue picture, the torso information includes a torso picture, and the tongue-extending action judging unit is specifically used for:
acquiring tongue position information of the human tongue in the picture from the tongue picture, and acquiring central-axis position information of the human torso in the picture from the torso picture; obtaining the tongue-extending included angle between the tongue and the central axis of the torso from the tongue position information and the central-axis position information;
judging whether the tongue-extending included angle is larger than a preset included angle; if so, the tongue-extending action reference information is that the tongue is not centered; if not, it is that the tongue is centered.
Preferably, the preset included angle may be 10 °.
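A minimal sketch of the angle computation, assuming the detector supplies two points on the trunk's central axis and two points along the tongue (the coordinates below are hypothetical); the 10° threshold is the preset included angle given above.

```python
import math

def deviation_angle(axis_top, axis_bottom, tongue_root, tongue_tip):
    """Angle in degrees between the tongue vector and the central axis."""
    ax = (axis_bottom[0] - axis_top[0], axis_bottom[1] - axis_top[1])
    tv = (tongue_tip[0] - tongue_root[0], tongue_tip[1] - tongue_root[1])
    dot = ax[0] * tv[0] + ax[1] * tv[1]
    cos = dot / (math.hypot(*ax) * math.hypot(*tv))
    # Clamp to guard against rounding just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Tongue tip drifts 8 units sideways over 40 units of extension (~11.3°).
angle = deviation_angle((50, 0), (50, 200), (50, 40), (58, 80))
not_centered = angle > 10  # preset included angle of 10°
print(f"{angle:.1f} deg -> tongue {'not centered' if not_centered else 'centered'}")
```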
In the above embodiment, the tongue-extending included angle between the human tongue and the central axis of the torso is obtained from the tongue position information and the central-axis position information, and the angle is judged against the preset value. This gives a more careful picture of each part of the patient's body, improves recognition accuracy, and supports the doctor's diagnosis, which reduces the workload of doctors and patients, saves their time, avoids delays in treatment, and helps doctors study the condition.
Optionally, as an embodiment of the present invention, the arm-raising action picture includes a single-arm picture and a two-arm picture, and the arm-raising action judging unit is specifically used for:
obtaining the hold time of the arm-raising action from the arm-raising action picture; obtaining, from the single-arm picture, single-arm position information of the human single arm in the picture and preset reference-line position information in the picture; and obtaining, from the two-arm picture, left-arm position information and right-arm position information of the two human arms;
obtaining the single-arm included angle between the single arm and the reference line from the single-arm position information and the reference-line position information; obtaining the two-arm included angle between the left arm and the right arm from the left-arm position information and the right-arm position information;
judging whether the arm-raising hold time is longer than a preset arm-raising time; if not, obtaining the hold time again from the arm-raising action picture; if so, judging whether the single-arm included angle is larger than a preset single-arm determination value, or whether the two-arm included angle is larger than a preset two-arm determination value, and obtaining the arm-raising action reference information.
Preferably, the preset arm-raising time may be 10 seconds, and both the preset single-arm determination value and the preset two-arm determination value may be 10°.
It should be understood that the preset reference line is a line parallel to the ground (the position an arm occupies when held level), and the reference-line position information is the acquired position of that line.
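The timing and angle checks can be sketched as follows, assuming shoulder and wrist keypoints in coordinates where y increases upward and the horizontal reference line just described. The keypoint names and the return convention (None when the hold time must be re-acquired) are illustrative assumptions.

```python
import math

HOLD_SECONDS = 10     # preset arm-raising time
SINGLE_ARM_DEG = 10   # preset single-arm determination value
BOTH_ARMS_DEG = 10    # preset two-arm determination value

def arm_angle(shoulder, wrist):
    """Unsigned angle in degrees between the arm and the horizontal reference."""
    dx = wrist[0] - shoulder[0]
    dy = wrist[1] - shoulder[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def arm_raise_flag(hold_s, left_sh, left_wr, right_sh, right_wr):
    """True if the arm-raise looks abnormal, None if the hold was too short."""
    if hold_s < HOLD_SECONDS:
        return None  # re-acquire the hold time, as in the unit above
    left = arm_angle(left_sh, left_wr)
    right = arm_angle(right_sh, right_wr)
    drooping = left > SINGLE_ARM_DEG or right > SINGLE_ARM_DEG
    asymmetric = abs(left - right) > BOTH_ARMS_DEG
    return drooping or asymmetric

# Right arm droops ~17 degrees below horizontal while the left stays level.
flag = arm_raise_flag(12, (0, 0), (40, 1), (100, 0), (140, -12))
print(flag)  # True
```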
In the above embodiment, arm-raising action reference information indicating whether the user is in a suspected stroke state is obtained from the hand information and the arm-raising action picture. This gives a more careful picture of each part of the patient's body, improves recognition accuracy, and supports the doctor's diagnosis, which reduces the workload of doctors and patients, saves their time, avoids delays in treatment, and helps doctors study the condition.
Optionally, as an embodiment of the present invention, the foot-lifting action picture includes a single-foot picture and a two-foot picture, and the foot-lifting action judging unit is specifically used for:
obtaining the hold time of the foot-lifting action from the foot-lifting action picture; obtaining, from the single-foot picture, sole position information of the human sole in the picture, single-foot position information of the human single foot in the picture, and ground position information in the picture; and obtaining, from the two-foot picture, left-foot position information and right-foot position information of the two human feet;
obtaining the foot-lifting height between the sole and the ground from the sole position information and the ground position information; obtaining the single-foot included angle between the single foot and the ground from the single-foot position information and the ground position information; obtaining the two-foot included angle between the left foot and the right foot from the left-foot position information and the right-foot position information;
judging whether the foot-lifting hold time is longer than a preset foot-lifting time; if not, obtaining the hold time again from the foot-lifting action picture; if so, judging whether the foot-lifting height is larger than a preset ground-clearance determination value, or whether the single-foot included angle is smaller than a preset single-foot determination value, or whether the two-foot included angle is larger than a preset two-foot determination value, and obtaining the foot-lifting action reference information.
Preferably, the preset foot-lifting time may be 10 seconds, the preset ground-clearance determination value may be 30 cm, the preset single-foot determination value may be 10°, and the preset two-foot determination value may be 5°.
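The threshold logic can be sketched as below with the preset values just given. The text only lists the comparisons, so the mapping from "threshold not met" to "abnormal" (and the None return when the hold time is too short) is an assumption for illustration.

```python
HOLD_SECONDS = 10          # preset foot-lifting time
MIN_HEIGHT_CM = 30         # preset ground-clearance determination value
MAX_SINGLE_FOOT_DEG = 10   # preset single-foot determination value
MAX_BOTH_FEET_DEG = 5      # preset two-foot determination value

def foot_lift_flag(hold_s, height_cm, single_foot_deg, both_feet_deg):
    """True if the foot-lift looks abnormal, None if the hold was too short."""
    if hold_s < HOLD_SECONDS:
        return None  # re-acquire the hold time from a new picture
    # Assumed mapping: failing any of the listed comparisons is abnormal.
    return (height_cm < MIN_HEIGHT_CM              # cannot lift high enough
            or single_foot_deg > MAX_SINGLE_FOOT_DEG
            or both_feet_deg > MAX_BOTH_FEET_DEG)

print(foot_lift_flag(12, 22, 4, 2))   # True  (lift height below 30 cm)
print(foot_lift_flag(12, 35, 4, 2))   # False (all thresholds satisfied)
```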
In the above embodiment, foot-lifting action reference information indicating whether the user is in a suspected stroke state is obtained from the leg information and the foot-lifting action picture. This gives a more careful picture of each part of the patient's body, improves recognition accuracy, and supports the doctor's diagnosis, which reduces the workload of doctors and patients, saves their time, avoids delays in treatment, and helps doctors study the condition.
Optionally, as an embodiment of the present invention, the server further includes a storage unit, where the storage unit is configured to:
storing the video data, the micro-expression action reference information, the tongue-extending action reference information, the arm-raising action reference information, and the foot-lifting action reference information in a preset personal user account on the server.
It should be understood that this is persistent storage.
In the embodiment, the video data and the reference information are stored, so that the doctor can conveniently retrieve and check the historical data at any time.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A stroke information processing system, comprising a client and a server:
the client is used for receiving the video data sent by the patient's intelligent terminal and sending the video data to the server, wherein the video data are pictures of the patient performing actions according to the doctor's instructions;
the server is used for receiving the video data, obtaining human body part information and character action information by identifying the video data, and obtaining, from the human body part information and the character action information, reference information indicating whether the user is in a suspected stroke state.
2. The stroke information processing system of claim 1, wherein the server comprises a model training unit and a video data recognition unit;
the model training unit is used for inputting preset video data to be trained, preprocessing the preset video data to be trained, constructing a training model, and training the training model through the preprocessed preset video data to be trained to obtain a trained training model;
and the video data identification unit is used for identifying the video data with the trained training model to obtain the human body part information and the character action information.
3. The stroke information processing system of claim 2, wherein the model training unit is specifically configured to:
learning preset video data to be trained according to a deep learning algorithm to obtain a channel state information amplitude variation characteristic and a channel state information phase variation characteristic;
constructing a training model based on an SVM algorithm;
and inputting the channel state information amplitude variation characteristic and the channel state information phase variation characteristic into the training model together for training, to obtain the trained training model.
4. The stroke information processing system according to claim 1, wherein the human body part information includes face information, torso information, hand information, and leg information, and the character action information includes a micro-expression action picture, a tongue-extending action picture, an arm-raising action picture, and a foot-lifting action picture;
the server further comprises a micro-expression action judging unit, a tongue-extending action judging unit, an arm-raising action judging unit, a foot-lifting action judging unit, and a comprehensive judging unit;
the micro-expression action judging unit is used for obtaining, from the face information and the micro-expression action picture, micro-expression action reference information indicating whether the user is in a suspected stroke state;
the tongue-extending action judging unit is used for obtaining, from the torso information and the tongue-extending action picture, tongue-extending action reference information indicating whether the user is in a suspected stroke state;
the arm-raising action judging unit is used for obtaining, from the hand information and the arm-raising action picture, arm-raising action reference information indicating whether the user is in a suspected stroke state;
the foot-lifting action judging unit is used for obtaining, from the leg information and the foot-lifting action picture, foot-lifting action reference information indicating whether the user is in a suspected stroke state;
the comprehensive judging unit is used for obtaining reference information indicating whether the user is in a suspected stroke state from the micro-expression action reference information, the tongue-extending action reference information, the arm-raising action reference information, and the foot-lifting action reference information.
5. The stroke information processing system according to claim 4, wherein the micro-expression action picture includes a frown action picture and a tooth-showing action picture, and the micro-expression action judging unit is specifically used for:
extracting left and right face feature points from the face information with a face recognition algorithm to obtain left-face feature points and right-face feature points, wherein the left-face feature points include overall left-face feature points, left forehead-line feature points, and left mouth-corner feature points, and the right-face feature points include overall right-face feature points, right forehead-line feature points, and right mouth-corner feature points;
comparing the left-face feature points with the right-face feature points for similarity to obtain a face comparison value;
judging whether the face comparison value is larger than a preset face determination value; if so, the micro-expression action reference information is that the face is symmetrical; if not, comparing the left forehead-line feature points with the right forehead-line feature points for similarity to obtain a forehead-line comparison value;
judging, for the frown action picture, whether the forehead-line comparison value is larger than a preset forehead-line determination value; if so, the judgment result of the frown action picture is that the left and right forehead lines are symmetrical; if not, the judgment result is that they are asymmetrical;
comparing the left mouth-corner feature points with the right mouth-corner feature points for similarity to obtain a mouth-corner comparison value;
judging, for the tooth-showing action picture, whether the mouth-corner comparison value is larger than a preset mouth-corner determination value; if so, the judgment result of the tooth-showing action picture is that the left and right mouth corners are symmetrical; if not, the judgment result is that they are asymmetrical;
and obtaining the micro-expression action reference information from the judgment result of the frown action picture and the judgment result of the tooth-showing action picture.
6. The stroke information processing system according to claim 4, wherein the tongue-extending action picture includes a tongue picture, the torso information includes a torso picture, and the tongue-extending action judging unit is specifically used for:
acquiring tongue position information of the human tongue in the picture from the tongue picture, and acquiring central-axis position information of the human torso in the picture from the torso picture; obtaining the tongue-extending included angle between the tongue and the central axis of the torso from the tongue position information and the central-axis position information;
judging whether the tongue-extending included angle is larger than a preset included angle; if so, the tongue-extending action reference information is that the tongue is not centered; if not, it is that the tongue is centered.
7. The stroke information processing system according to claim 4, wherein the arm-raising action picture includes a single-arm picture and a two-arm picture, and the arm-raising action judging unit is specifically used for:
obtaining the hold time of the arm-raising action from the arm-raising action picture; obtaining, from the single-arm picture, single-arm position information of the human single arm in the picture and preset reference-line position information in the picture; and obtaining, from the two-arm picture, left-arm position information and right-arm position information of the two human arms;
obtaining the single-arm included angle between the single arm and the reference line from the single-arm position information and the reference-line position information; obtaining the two-arm included angle between the left arm and the right arm from the left-arm position information and the right-arm position information;
judging whether the arm-raising hold time is longer than a preset arm-raising time; if not, obtaining the hold time again from the arm-raising action picture; if so, judging whether the single-arm included angle is larger than a preset single-arm determination value, or whether the two-arm included angle is larger than a preset two-arm determination value, and obtaining the arm-raising action reference information.
8. The stroke information processing system according to claim 4, wherein the foot-lifting action picture includes a single-foot picture and a two-foot picture, and the foot-lifting action judging unit is specifically used for:
obtaining the hold time of the foot-lifting action from the foot-lifting action picture; obtaining, from the single-foot picture, sole position information of the human sole in the picture, single-foot position information of the human single foot in the picture, and ground position information in the picture; and obtaining, from the two-foot picture, left-foot position information and right-foot position information of the two human feet;
obtaining the foot-lifting height between the sole and the ground from the sole position information and the ground position information; obtaining the single-foot included angle between the single foot and the ground from the single-foot position information and the ground position information; obtaining the two-foot included angle between the left foot and the right foot from the left-foot position information and the right-foot position information;
judging whether the foot-lifting hold time is longer than a preset foot-lifting time; if not, obtaining the hold time again from the foot-lifting action picture; if so, judging whether the foot-lifting height is larger than a preset ground-clearance determination value, or whether the single-foot included angle is smaller than a preset single-foot determination value, or whether the two-foot included angle is larger than a preset two-foot determination value, and obtaining the foot-lifting action reference information.
CN202011212137.5A 2020-11-03 2020-11-03 Apoplexy information processing system Pending CN112466437A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011212137.5A CN112466437A (en) 2020-11-03 2020-11-03 Apoplexy information processing system

Publications (1)

Publication Number Publication Date
CN112466437A true CN112466437A (en) 2021-03-09

Family

ID=74835012


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023002503A1 (en) * 2021-07-19 2023-01-26 Ranjani Ramesh A system and a method for synthesization and classification of a micro-motion

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018053521A1 (en) * 2016-09-19 2018-03-22 Ntt Innovation Institute, Inc. Stroke detection and prevention system and method
CN108735279A (en) * 2018-06-21 2018-11-02 广西虚拟现实科技有限公司 A kind of virtual reality headstroke rehabilitation training of upper limbs system and control method
CN109508644A (en) * 2018-10-19 2019-03-22 陕西大智慧医疗科技股份有限公司 Facial paralysis grade assessment system based on the analysis of deep video data
CN109686418A (en) * 2018-12-14 2019-04-26 深圳先进技术研究院 Facial paralysis degree evaluation method, apparatus, electronic equipment and storage medium
CN109741834A (en) * 2018-11-13 2019-05-10 安徽乐叟健康产业研究中心有限责任公司 A kind of monitor system for apoplexy personnel
CN110612057A (en) * 2017-06-07 2019-12-24 柯惠有限合伙公司 System and method for detecting stroke
CN110866450A (en) * 2019-10-21 2020-03-06 桂林医学院附属医院 Parkinson disease monitoring method and device and storage medium
CN111126180A (en) * 2019-12-06 2020-05-08 四川大学 Facial paralysis severity automatic detection system based on computer vision
CN111312389A (en) * 2020-02-20 2020-06-19 万达信息股份有限公司 Intelligent cerebral apoplexy diagnosis system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210309