CN110610754A - Immersive wearable diagnosis and treatment device - Google Patents

Immersive wearable diagnosis and treatment device

Info

Publication number
CN110610754A
Authority
CN
China
Prior art keywords
neural network
convolutional neural
data
data analysis
layer
Prior art date: 2019-08-16
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910760914.0A
Other languages
Chinese (zh)
Inventor
于雅楠 (Yu Yanan)
董建设 (Dong Jianshe)
华春杰 (Hua Chunjie)
赵洪利 (Zhao Hongli)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Technology and Education China Vocational Training Instructor Training Center
Original Assignee
Tianjin University of Technology and Education China Vocational Training Instructor Training Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2019-08-16
Filing date: 2019-08-16
Publication date: 2019-12-24
Application filed by Tianjin University of Technology and Education China Vocational Training Instructor Training Center
Priority to CN201910760914.0A
Publication of CN110610754A
Legal status: Pending

Classifications

    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/4884: Other medical applications inducing physiological or psychological stress, e.g. applications for stress testing
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06V 40/174: Facial expression recognition
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G16H 20/70: ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • G16H 50/20: ICT specially adapted for medical diagnosis, for computer-aided diagnosis, e.g. based on medical expert systems
    • A61B 2503/06: Children, e.g. for attention deficit diagnosis


Abstract

The invention discloses an immersive wearable diagnosis and treatment device comprising a sensing monitoring module built from a visual sensing platform, an environment monitoring sensing platform and a biological sensing platform. The visual sensing platform uses a monocular camera as its photoelectric detection element and, through a minimal FPGA system platform running a convolutional neural network, recognizes the facial expressions and limb actions of people in the wearer's field of view. The environment monitoring sensing platform monitors physical factors of the environment in real time, and the biological sensing platform monitors physiological reaction indexes in real time. A data analysis module collects the monitoring results (interaction objects, environmental parameters, human vital signs and the like), extracts characteristic parameters from the data, compares the received data with pre-stored corresponding normal indices and with the previously collected information, judges the development trend of each index value, calculates and stores the difference between each detected index value and its pre-stored normal index, and from this evaluates the physiological and psychological state of the child patient.

Description

Immersive wearable diagnosis and treatment device
Technical Field
The invention relates to the fields of sensor testing and artificial intelligence, and in particular to an immersive wearable diagnosis and treatment device that monitors the physiological and psychological condition of autistic children in real time and assists in diagnosing and treating their illness.
Background
As autism develops, patients become prone to anxiety and to social barriers, which greatly affect their perception of the world and their interaction with others. Cognition is the brain's ability to receive external information, process it through mental activity, transform it into knowledge of one's own, and apply that knowledge. Cognitive impairment is damage to these cognitive functions, the most important functions of the human brain: attention cannot be regulated between stimuli as in healthy people, and responses are either excessive or insufficient. Because the condition of adults is difficult to monitor and diagnose, research and services directed at children are more meaningful and better targeted.
At present, the known examination method for autistic children is to screen for abnormal performance in the senses, behavior, emotion, language and so on according to a behavior rating scale, and the evaluation result must be judged comprehensively with professional expertise; it is therefore a lagging evaluation method and cannot serve as the basis for a doctor's intervention and treatment. In addition, the assistive devices on the market for treating autistic children stop at monitoring physiological reactions such as stress and anxiety, and cannot capture, let alone analyze, the triggers that cause the child patient's anxiety.
Disclosure of Invention
The invention provides an immersive wearable diagnosis and treatment device that builds a bridge between the external environment and the emotional needs of the child patient, on one hand relieving the child's potential mental hazards and on the other improving the child's social ability, as described in detail below:
an immersive wearable diagnostic and therapy device, the device comprising: a sensing monitoring module, a data analysis module, a voice interaction module and a wireless data transmission module,
the sensing monitoring module consists of a visual sensing platform, an environment monitoring sensing platform and a biological sensing platform; the three parts are respectively connected with a data analysis module and a wireless data transmission module; the data analysis module is connected with the voice interaction module and the wireless data transmission module;
the visual sensing platform adopts a monocular camera as its photoelectric detection element; through a minimal FPGA system platform running a convolutional neural network, it obtains the human expressions and limb actions in the field of view of the intelligent sensing device wearer, inputs the expressions and actions into the respective convolutional neural networks, outputs score values through the output layer, and transmits the learning results to the data analysis module;
the environment monitoring sensing platform monitors physical factors such as environment temperature, humidity, illumination, noise, vibration, atmospheric pressure, electromagnetic waves and rays in real time and sends monitoring data to the data analysis module;
the biosensing platform monitors physiological reaction indexes such as heart rate, electrocardiogram, sleep, blood pressure, sweat secretion and the like, and sends monitoring data to the data analysis module;
the data analysis module collects the series of data monitoring results about interaction objects, environmental parameters, human vital signs and the like; it extracts and interpolates the data, removes noise, and extracts characteristic parameters. The processed data are transmitted to a data analysis program, which compares the received data with the pre-stored corresponding normal indices and with the information collected the previous time, judges the development trend of each index value (whether it has increased or decreased relative to the last detected value), calculates and stores the difference between the currently detected index value and the pre-stored corresponding normal index, and synthesizes this information to evaluate the physiology and psychology of the child patient.
Further, the obtaining of the human expression and the limb movement in the field of view of the intelligent sensing device wearer through the minimum FPGA system platform of the convolutional neural network specifically includes:
identifying a character expression image through a convolutional neural network A; and identifying the human limb action image through the convolutional neural network B.
The identification of the human expression image through the convolutional neural network A specifically comprises the following steps:
receiving a character expression image by using a photoelectric detection element, and calculating a space gray level co-occurrence matrix of the preprocessed image as a feature vector;
constructing a convolutional neural network A; the structure comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a third pooling layer, a fully-connected layer and an output layer;
loading the trained convolutional neural network A;
inputting the feature vectors of the expression images received by the photoelectric detection element into a trained convolutional neural network A, and judging the expression of the child patient according to the class label to which the highest score value output by the neural network belongs; and transmitting the recognized facial expression result to a data analysis module.
Further, the constructing the convolutional neural network a further includes:
carrying out feature extraction on a plurality of character expression images by calculating a space gray level co-occurrence matrix to obtain feature vectors of the expression images and inputting the feature vectors into a convolutional neural network A;
the neural network is trained by adopting back propagation, all weights in the network after training are stored, and a plurality of images are used as test pictures for testing.
The identification of the human limb action image through the convolutional neural network B specifically comprises the following steps:
receiving a human limb action image by using a photoelectric detection element;
constructing a convolutional neural network B, wherein the structure comprises an input layer, a first convolutional layer and a first pooling layer, a second convolutional layer and a second pooling layer, a fully-connected layer and an output layer;
loading the trained convolutional neural network B; inputting the limb action image received by the photoelectric detection element into a trained convolutional neural network B, and judging the action of the child patient according to the class label to which the highest score value output by the convolutional neural network B belongs; and transmitting the action of the infant patient to a data analysis module.
In a specific implementation, the building of the convolutional neural network B specifically includes:
inputting a plurality of limb action image sets into a convolutional neural network B, training the convolutional neural network B by adopting back propagation, storing all weights in the convolutional neural network B after training, and testing by using a plurality of images as test pictures.
The wireless data transmission module works with an Android-based APP developed to accompany the wearable device, providing user query, data management and data uploading;
the data management module receives the data from the data analysis module, stores the data, analyzes and arranges the data, draws the arranged data into a visual image or a physiological and psychological index value which is easy to understand, and transmits the visual image or the physiological and psychological index value to the health management module of the user;
uploading the drawn images or the sorted physiological and psychological index values to the child patient's personal information database, so that the course of the illness can be observed as a whole and a treatment scheme formulated.
Further, the apparatus further comprises:
and uploading the data monitoring information of the sensing monitoring module, the comprehensive evaluation result of the data analysis module, the image drawn in the wireless data transmission module and the organized physiological and psychological index value to an established database, and then uploading the data to a cloud platform for a medical institution to perform deep data mining.
The technical scheme provided by the invention has the beneficial effects that:
1. The product monitors the physiological and psychological condition of autistic children in real time, establishing a bridge between external triggers and the child's internal emotional reactions;
2. The product uses voice interaction to ease the child patient's tension and anxiety, satisfying emotional needs and improving social ability; through the storage and analysis of big-data information, intervention and treatment of the child become more targeted.
Drawings
Fig. 1 is a schematic structural diagram of an immersive wearable diagnostic and therapeutic device;
in fig. 1, the list of components represented by each reference numeral is as follows:
1: a sensing monitoring module; 2: a data analysis module;
3: a voice interaction module; 4: a wireless data transmission module;
1-1: a visual sensing platform; 1-2, an environment monitoring sensing platform;
1-3: a biosensing platform.
Fig. 2 is a structural diagram of a convolutional neural network a capable of recognizing the expression of a child patient;
Because the facial expression features vary in complex ways, convolutional neural network A in FIG. 2 is set up with three convolutional layers alternating with three pooling layers, and its first convolutional layer is split into two small convolutional layers, A-conv 1-1 and A-conv 1-2.
FIG. 3 is a block diagram of a convolutional neural network B for identifying the limb movements of a child patient;
In FIG. 3, convolutional neural network B is set up with two convolutional layers alternating with two pooling layers.
FIG. 4 is a flow chart of data analysis in the data analysis module 2;
fig. 5 is a functional block diagram of a wearable APP in the wireless data transmission module 4.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
An immersive wearable diagnosis and treatment device, see FIG. 1, comprises a sensing monitoring module 1, a data analysis module 2, a voice interaction module 3 and a wireless data transmission module 4.
The sensing monitoring module 1 consists of a visual sensing platform 1-1, an environmental monitoring sensing platform 1-2 and a biological sensing platform 1-3; the visual sensing platform 1-1, the environment monitoring sensing platform 1-2 and the biological sensing platform 1-3 are respectively connected with the data analysis module 2; the data analysis module 2 is connected with the voice interaction module 3; the sensing monitoring module 1 and the data analysis module 2 are connected with the wireless data transmission module 4.
In the embodiment shown in FIG. 1, a monocular camera is adopted as the photoelectric detection element in the visual sensing platform 1-1; visual information such as human expressions, limb actions and the number of people in the field of view of the intelligent sensing device wearer is acquired through a minimal FPGA system platform based on an image deep-learning method (well known to those skilled in the art), and the learning results are sent to the data analysis module 2.
The deep learning adopted by the visual sensing platform 1-1 is a convolutional neural network method; the human expression images and limb action images received by the photoelectric detection element are input into convolutional neural networks A and B respectively, score values are output by the output layer, and the learning results are transmitted to the data analysis module 2.
The specific steps of identifying the human expression image in the convolutional neural network A are as follows:
S1: Initialization;
A character expression image is received by the photoelectric detection element, the image is preprocessed to a uniform size specification, and the spatial gray-level co-occurrence matrix of the preprocessed image is calculated as the feature vector.
The specification of the image is set according to the requirement in practical application, which is not limited in the embodiment of the present invention.
S2: loading the trained convolutional neural network A;
the steps to be performed before the step S2 is performed are:
s3: constructing a convolutional neural network A;
the convolutional neural network A is structurally composed of an input layer, a first convolutional layer and a first pooling layer, a second convolutional layer and a second pooling layer, a third convolutional layer and a third pooling layer, a full-connection layer and an output layer, wherein the first convolutional layer is divided into two small convolutional layers, and the structural diagram of the convolutional neural network A is shown in FIG. 2. And (2) performing feature extraction by calculating a space gray level co-occurrence matrix by using 300 human expression images to obtain feature vectors of the expression images, inputting the feature vectors of the expression images into a convolutional neural network A, training the neural network by adopting a back propagation method, storing all weights in the network after training, and testing by using 100 images as test pictures.
S4: and inputting the feature vector of the expression image received by the photoelectric detection element into the trained convolutional neural network A, and judging the expression of the child patient according to the class label to which the highest score value output by the neural network belongs.
The class labels are set for convolutional neural network A before training; the method for judging the limb actions of the child patient is the same as above and is not described again in the embodiments of the present invention.
S5: and transmitting the recognized facial expression result to the data analysis module 2.
The convolutional neural network B identifies the human limb action image by the following specific steps:
S1: Initialization;
A human limb action image is received by the photoelectric detection element, preprocessed to a uniform size specification, and fed into convolutional neural network B;
the specification of the image is set according to the requirement in practical application, which is not limited in the embodiment of the present invention.
S2: loading the trained convolutional neural network B;
the steps to be performed before the step S2 is performed are:
s3: a convolutional neural network B is constructed.
The structure of convolutional neural network B is an input layer, a first convolutional layer and first pooling layer, a second convolutional layer and second pooling layer, a fully-connected layer and an output layer; the structure diagram of convolutional neural network B is shown in FIG. 3. 300 limb action images are input into convolutional neural network B, the network is trained by back-propagation, all weights of the trained network are stored, and 100 images are used as test pictures for testing.
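A corresponding sketch of network B and its back-propagation training, in the same assumed PyTorch style as the network-A sketch above; the optimizer, learning rate, epoch count and tensor shapes are illustrative, since the patent specifies only the layer sequence and the 300-image training / 100-image test split.

    import torch
    import torch.nn as nn

    class ActionCNN(nn.Module):
        """Convolutional neural network B: two conv layers alternating with two pooling layers."""
        def __init__(self, num_classes: int = 5):  # number of action classes is assumed
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # conv1 + pool1
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv2 + pool2
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, num_classes),  # fully-connected output, 64x64 input assumed
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    def train_network_b(model: ActionCNN, images: torch.Tensor, labels: torch.Tensor) -> None:
        """Train by back-propagation, e.g. on a (300, 1, 64, 64) image batch, then store all weights."""
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(20):                       # epoch count is an assumption
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()                       # back-propagate gradients
            optimizer.step()
        torch.save(model.state_dict(), "cnn_b_weights.pt")  # store all trained weights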
S4: and inputting the limb action image received by the photoelectric detection element into a trained convolutional neural network B, and judging the action of the child patient according to the class label to which the highest score value output by the convolutional neural network B belongs.
S5: the movement of the infant patient is transmitted to the data analysis module 2.
In a specific implementation, the environment monitoring sensing platform 1-2 integrates various sensing measurement principles, monitors physical factors such as environment temperature, humidity, illumination, noise, vibration, atmospheric pressure, electromagnetic waves and rays online in real time, and sends the monitoring data to the data analysis module 2.
The biological sensing platform 1-3 adopts contact-type human biological index monitoring elements to monitor physiological reaction indexes of the device wearer such as heart rate, electrocardiogram, sleep, blood pressure and sweat secretion in real time, and sends the monitoring data to the data analysis module 2.
The data analysis module 2 collects a series of data monitoring results about interaction objects, environmental parameters, human vital signs and the like from the visual sensing platform 1-1, the environment monitoring sensing platform 1-2 and the biological sensing platform 1-3; because the memory of the wearable device is limited, the storage device keeps only the data received in the last two rounds.
The data analysis module first preprocesses the information collected by the three platforms: it extracts and interpolates the data, removes noise, and extracts characteristic parameters. The processed data are transmitted to the data analysis program, which compares the received data with the pre-stored corresponding normal indices and with the information collected the previous time, judges the development trend of each index value, calculates and stores the difference between the currently detected index value and the pre-stored corresponding normal index, and synthesizes this information to evaluate the physiology and psychology of the child patient; the evaluation flow chart is shown in FIG. 4.
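As a sketch of that comparison logic under assumed data structures (dictionaries of named index readings), the following shows how each newly detected value could be compared against its pre-stored normal index and the previous reading; all names and values are illustrative, not taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class IndexAssessment:
        name: str
        deviation: float  # difference between detected value and pre-stored normal index
        trend: str        # development trend versus the previously detected value

    def assess_indices(current: dict, previous: dict, normal: dict) -> list:
        results = []
        for name, value in current.items():
            deviation = value - normal[name]       # stored for later evaluation
            last = previous.get(name, value)
            trend = ("increased" if value > last
                     else "decreased" if value < last else "steady")
            results.append(IndexAssessment(name, deviation, trend))
        return results

    # Example: heart rate drifting above its normal index while ambient noise rises.
    print(assess_indices({"heart_rate": 112, "noise_db": 78},
                         {"heart_rate": 98, "noise_db": 65},
                         {"heart_rate": 90, "noise_db": 50}))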
In a specific implementation, data analysis module 2 sends the assessment result both to the language interaction module 3 and, via wireless transmission, to the smartphone APP in wireless data transmission module 4. This APP is developed to accompany the wearable device, letting caregivers monitor the child's level of tension in time and helping parents and medical staff understand the child's condition.
The language interaction module 3 uses its built-in speech recognition, speech synthesis and MP3 playback functions to play different voice prompts according to the received comprehensive assessment result, the child's emotional characteristics and the output of the data analysis program; it communicates with the autistic child, reinforces different language, soothes tension, anxiety and similar emotions, and helps the child improve social ability and reduce hidden mental dangers.

In a specific implementation of the wireless data transmission module 4, the APP developed to accompany the wearable device is based on the Android operating system, and its main functions are user query, data management and data uploading. The data management module first receives the data from data analysis module 2, then stores, analyzes and organizes it, draws the organized data into visual images or easy-to-understand physiological and psychological index values, such as waveform diagrams, column diagrams and sector diagrams, and transmits them to the user's health management module; by registering and logging into an account, the user can conveniently query the child patient's physiological and psychological changes.
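The following is a minimal matplotlib sketch of the three chart types named above (waveform, column and sector diagrams); the heart-rate series and emotion proportions are invented placeholder data for illustration only.

    import matplotlib.pyplot as plt

    hours = list(range(8))
    heart_rate = [88, 92, 110, 105, 96, 90, 93, 89]           # assumed hourly readings
    emotions = {"calm": 0.6, "anxious": 0.25, "distressed": 0.15}

    fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))
    ax1.plot(hours, heart_rate)                               # waveform diagram
    ax1.set(title="Heart rate", xlabel="hour", ylabel="bpm")
    ax2.bar(hours, heart_rate)                                # column diagram
    ax2.set(title="Hourly readings", xlabel="hour")
    ax3.pie(list(emotions.values()), labels=list(emotions.keys()))  # sector diagram
    ax3.set_title("Emotion share")
    fig.tight_layout()
    fig.savefig("daily_report.png")                           # image shown in the health module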
Finally, the drawn images or sorted physiological and psychological index values are uploaded to the child patient's personal information database, so that the course of the illness can be observed as a whole and a correspondingly efficient treatment scheme formulated. Meanwhile, the data monitoring information of the sensing monitoring module 1, the comprehensive evaluation result of the data analysis module 2, and the images and organized index values from the wireless data transmission module 4 are uploaded to the established database and then to a cloud platform for medical institutions to perform deep data mining. The functional block diagram of this APP is shown in FIG. 5.
In a specific implementation, the visual sensing platform 1-1 is installed on the outer side of the wearable device and, based on an image deep-learning method, monitors and captures objects of interest: (1) facial expressions of the interaction object, such as happy, depressed and crying; (2) limb actions of the interaction object, such as calling, handshaking, contact and gait; (3) situations that may cause the child to feel pressured.
The environment monitoring sensing platform 1-2 is installed on the outer side of the wearable device, and physical factors such as environment temperature, humidity, illumination, noise, vibration, atmospheric pressure, electromagnetic waves and rays are monitored in real time.
Furthermore, the biological sensing platform 1-3 is installed on the inner side of the wearable device and monitors physiological reaction indexes such as heart rate, electrocardiogram, sleep, blood pressure and sweat secretion; through the wearable device, the child patient can express feelings without having to speak out loud.
In the embodiments of the present invention, unless otherwise specified, the models of the devices are not limited, as long as the devices can perform the functions described above.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and the above-described embodiments of the present invention are merely provided for description and do not represent the merits of the embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principles of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. An immersive wearable diagnostic and therapy device, the device comprising: a sensing monitoring module, a data analysis module, a voice interaction module and a wireless data transmission module,
the sensing monitoring module consists of a visual sensing platform, an environment monitoring sensing platform and a biological sensing platform; the three parts are respectively connected with a data analysis module and a wireless data transmission module; the data analysis module is connected with the voice interaction module and the wireless data transmission module;
the visual sensing platform adopts a monocular camera as a photoelectric detection element, obtains the human expressions and limb actions in the field of view of the intelligent sensing device wearer through a minimal FPGA system platform running a convolutional neural network, inputs the expressions and actions into the respective convolutional neural networks, outputs score values through the output layer, and transmits the learning results to the data analysis module;
the environment monitoring sensing platform monitors physical factors in real time and sends the physical factors to the data analysis module;
the biological sensing platform monitors physiological reaction indexes and sends the physiological reaction indexes to the data analysis module;
the data analysis module collects the interaction objects, physical factors and physiological response indexes, extracts and interpolates the data, removes noise and extracts characteristic parameters, and transmits the processed data to a data analysis program; the data analysis program compares the received data with a pre-stored corresponding normal index and with the information collected the previous time, judges the development trend of the index value, namely whether it has increased or decreased relative to the last detected value, calculates and stores the difference between the currently detected index value and the pre-stored corresponding normal index, and synthesizes this information to evaluate the physiology and psychology of the child patient.
2. The immersive wearable diagnosis and treatment device according to claim 1, wherein obtaining the human expressions and limb actions in the field of view of the intelligent sensing device wearer through the minimal FPGA system platform running the convolutional neural network specifically comprises:
identifying a character expression image through a convolutional neural network A; and identifying the human limb action image through the convolutional neural network B.
3. The immersive wearable diagnosis and treatment device of claim 2, wherein the recognition of the human expression image by the convolutional neural network a is specifically:
receiving a character expression image by using a photoelectric detection element, and calculating a space gray level co-occurrence matrix of the preprocessed image as a feature vector;
constructing a convolutional neural network A; the structure comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a third pooling layer, a fully-connected layer and an output layer;
loading the trained convolutional neural network A;
inputting the feature vectors of the expression images received by the photoelectric detection element into a trained convolutional neural network A, and judging the expression of the child patient according to the class label to which the highest score value output by the neural network belongs; and transmitting the recognized facial expression result to a data analysis module.
4. The immersive wearable diagnostic and therapy device of claim 3, wherein said constructing convolutional neural network A further comprises:
carrying out feature extraction on a plurality of character expression images by calculating a space gray level co-occurrence matrix to obtain feature vectors of the expression images and inputting the feature vectors into a convolutional neural network A;
the neural network is trained by adopting back propagation, all weights in the network after training are stored, and a plurality of images are used as test pictures for testing.
5. The immersive wearable diagnosis and treatment device according to claim 2, wherein the recognition of the human limb motion image by the convolutional neural network B is specifically:
receiving a human limb action image by using a photoelectric detection element;
constructing a convolutional neural network B, wherein the structure comprises an input layer, a first convolutional layer and a first pooling layer, a second convolutional layer and a second pooling layer, a fully-connected layer and an output layer;
loading the trained convolutional neural network B; inputting the limb action image received by the photoelectric detection element into a trained convolutional neural network B, and judging the action of the child patient according to the class label to which the highest score value output by the convolutional neural network B belongs; and transmitting the action of the infant patient to a data analysis module.
6. The immersive wearable diagnostic and therapy device of claim 5, wherein said constructing the convolutional neural network B is specifically:
inputting a plurality of limb action image sets into a convolutional neural network B, training the convolutional neural network B by adopting back propagation, storing all weights in the convolutional neural network B after training, and testing by using a plurality of images as test pictures.
7. The immersive wearable diagnosis and treatment device according to claim 1, wherein the wireless data transmission module works with an Android-based APP developed to accompany the wearable device to perform user query, data management and data uploading;
the data management module receives the data from the data analysis module, stores the data, analyzes and arranges the data, draws the arranged data into a visual image or a physiological and psychological index value which is easy to understand, and transmits the visual image or the physiological and psychological index value to the health management module of the user;
uploading the drawn images or the sorted physiological and psychological index values to the child patient's personal information database, observing the course of the illness as a whole, and formulating a treatment scheme.
8. The immersive wearable diagnostic and therapy device of claim 1, further comprising:
and uploading the data monitoring information of the sensing monitoring module, the comprehensive evaluation result of the data analysis module, the image drawn in the wireless data transmission module and the organized physiological and psychological index value to an established database, and then uploading the data to a cloud platform for a medical institution to perform deep data mining.
CN201910760914.0A 2019-08-16 2019-08-16 Immersive wearable diagnosis and treatment device Pending CN110610754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910760914.0A CN110610754A (en) 2019-08-16 2019-08-16 Immersive wearable diagnosis and treatment device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910760914.0A CN110610754A (en) 2019-08-16 2019-08-16 Immersive wearable diagnosis and treatment device

Publications (1)

Publication Number Publication Date
CN110610754A 2019-12-24

Family

ID=68890484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910760914.0A Pending CN110610754A (en) 2019-08-16 2019-08-16 Immersive wearable diagnosis and treatment device

Country Status (1)

Country Link
CN (1) CN110610754A (en)

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030174881A1 (en) * 2002-03-15 2003-09-18 Simard Patrice Y. System and method facilitating pattern recognition
US20050102246A1 (en) * 2003-07-24 2005-05-12 Movellan Javier R. Weak hypothesis generation apparatus and method, learning apparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial expression recognition apparatus and method, and robot apparatus
WO2011109716A2 (en) * 2010-03-04 2011-09-09 Neumitra LLC Devices and methods for treating psychological disorders
CN102184661A (en) * 2011-03-17 2011-09-14 南京大学 Childhood autism language training system and internet-of-things-based centralized training center
CN104254275A (en) * 2012-02-22 2014-12-31 阿克拉里斯医疗有限责任公司 Physiological signal detecting device and system
CN105359166A (en) * 2013-02-08 2016-02-24 意莫森特公司 Collection of machine learning training data for expression recognition
WO2016049234A1 (en) * 2014-09-23 2016-03-31 Icahn School Of Medicine At Mount Sinai Systems and methods for treating a psychiatric disorder
CN104992059A (en) * 2015-06-24 2015-10-21 天津职业技术师范大学 Intrinsic motivation based self-cognition system for motion balance robot and control method
WO2017080264A1 (en) * 2015-11-10 2017-05-18 广景视睿科技(深圳)有限公司 Method and system for adjusting mood state
CN109475294A (en) * 2016-05-06 2019-03-15 斯坦福大学托管董事会 For treat phrenoblabia movement and wearable video capture and feedback platform
US20170323485A1 (en) * 2016-05-09 2017-11-09 Magic Leap, Inc. Augmented reality systems and methods for user health analysis
US20170365277A1 (en) * 2016-06-16 2017-12-21 The George Washington University Emotional interaction apparatus
US20180190376A1 (en) * 2017-01-04 2018-07-05 StoryUp, Inc. System and method for modifying biometric activity using virtual reality therapy
CN108334735A (en) * 2017-09-18 2018-07-27 华南理工大学 Intelligent psychological assessment based on mini separate space and tutorship system and method
CN107799165A (en) * 2017-09-18 2018-03-13 华南理工大学 A kind of psychological assessment method based on virtual reality technology
CN108461126A (en) * 2018-03-19 2018-08-28 傅笑 A novel intelligent psychological assessment and intervention system combining virtual reality (VR) technology
CN108389620A (en) * 2018-05-11 2018-08-10 天津职业技术师范大学 A kind of wearable sport health monitor system based on Internet of Things
CN108919950A (en) * 2018-06-26 2018-11-30 上海理工大学 Autism children based on Kinect interact device for image and method
CN109256192A (en) * 2018-07-24 2019-01-22 同济大学 A kind of undergraduate psychological behavior unusual fluctuation monitoring and pre-alarming method neural network based
CN109171772A (en) * 2018-08-13 2019-01-11 李丰 A kind of psychological quality training system and training method based on VR technology
CN109215804A (en) * 2018-10-09 2019-01-15 华南理工大学 Mental disorder assistant diagnosis system based on virtual reality technology and physio-parameter detection
CN109621151A (en) * 2018-12-12 2019-04-16 上海玺翎智能科技有限公司 A kind of Virtual Reality+auditory integrative training autism of children and appraisal procedure

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
LIU Tingting et al.: "Application research of virtual reality in rehabilitation of special populations", Journal of System Simulation *
SONG Wenkai et al.: "A three-dimensional virtual dolphin therapy system for children with autism spectrum disorder", Journal of Biomedical Engineering *
ZHU Junpeng et al.: "Disparity map generation technology based on convolutional neural networks", Journal of Computer Applications *
XIONG Jinting et al.: "Wearable sports health monitoring system based on the Internet of Things", Electronic Technology & Software Engineering *
ZHAO Hongli: "Strategies for university libraries to carry out online psychological counseling", Sci-Tech Information Development & Economy *
DENG Xianghong et al.: "A new intelligent mixed-reality interactive scenario therapy platform for health intervention in children's nervous systems", Medical & Health Care Equipment *
SHAO Hongyu et al.: "Prediction and analysis of product performance satisfaction based on BP neural networks", Journal of Tianjin University (Science and Technology) *
HAN Bing: "Digital Audio and Video Processing", Xidian University Press *
GAO Hua et al.: "The intervention effect of cognitive and sensory integration training on children with learning difficulties", Chinese Journal of Clinical Rehabilitation *
HUANG Honghui et al.: "The role of multi-sensory training systems in cultivating communication abilities of autistic children", China & Foreign Medical Treatment *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402200A (en) * 2020-02-18 2020-07-10 江苏大学 Fried food detection system based on symbiotic double-current convolution network and digital image
CN111402200B (en) * 2020-02-18 2021-12-21 江苏大学 Fried food detection system based on symbiotic double-current convolution network and digital image
CN111477299A (en) * 2020-04-08 2020-07-31 广州艾博润医疗科技有限公司 Method and device for regulating and controlling sound-electricity stimulation nerves by combining electroencephalogram detection and analysis control
CN113827203A (en) * 2021-09-23 2021-12-24 广州利科科技有限公司 Criminal personnel sign monitoring method and equipment based on combination of video and equipment

Similar Documents

Publication Publication Date Title
US20230309887A1 (en) System and method for brain modelling
CN111225612A (en) Neural obstacle identification and monitoring system based on machine learning
Jung et al. Multi-level assessment model for wellness service based on human mental stress level
CN110024038A (en) The system and method that synthesis interacts are carried out with user and device
US20190313966A1 (en) Pain level determination method, apparatus, and system
JP2013099392A (en) Brain function promotion support device and brain function promotion support method
CN110610754A (en) Immersive wearable diagnosis and treatment device
EP4251048A1 (en) Method and system for detecting mood
KR102379132B1 (en) device and method for providing digital therapeutics information
Zhang Stress recognition from heterogeneous data
Majumder et al. A real-time cardiac monitoring using a multisensory smart IoT system
Li et al. Multi-modal emotion recognition based on deep learning of EEG and audio signals
Ahamad System architecture for brain-computer interface based on machine learning and internet of things
KR102165833B1 (en) Method and Device for diagnosing abnormal health conditions
CN116570283A (en) Perioperative patient emotion monitoring system and method
WO2020209846A1 (en) Pain level determination method, apparatus, and system
EP4367609A1 (en) Integrative system and method for performing medical diagnosis using artificial intelligence
CN115317304A (en) Intelligent massage method and system based on physiological characteristic detection
Sperandeo et al. Toward a technological oriented assessment in psychology: a proposal for the use of contactless devices for heart rate variability and facial emotion recognition in psychological diagnosis.
Cho et al. Instant automated inference of perceived mental stress through smartphone ppg and thermal imaging
CN110960189A (en) Wireless cognitive regulator and eye movement treatment and treatment effect evaluation method
Greene IoT development for healthy independent living
CN116602644B (en) Vascular signal acquisition system and human body characteristic monitoring system
Subash et al. CNN and Arduino based Stress Level Detection System
Martínez-Rodrigo et al. EEG mapping for arousal level quantification using dynamic quadratic entropy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191224)