CN114617769B - Aphasia patient auxiliary rehabilitation training device based on fusion voice recognition - Google Patents
Aphasia patient auxiliary rehabilitation training device based on fusion voice recognition
- Publication number
- CN114617769B (application CN202210251880.4A)
- Authority
- CN
- China
- Prior art keywords
- user
- voice
- information
- rehabilitation training
- auxiliary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H39/00—Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
- A61H39/08—Devices for applying needles to such points, i.e. for acupuncture ; Acupuncture needles or accessories therefor
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F7/00—Heating or cooling appliances for medical or therapeutic treatment of the human body
- A61F7/007—Heating or cooling appliances for medical or therapeutic treatment of the human body characterised by electric heating
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H15/00—Massage by means of rollers, balls, e.g. inflatable, chains, or roller chains
- A61H15/02—Massage by means of rollers, balls, e.g. inflatable, chains, or roller chains adapted for simultaneous treatment with light, heat or drugs
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H23/00—Percussion or vibration massage, e.g. using supersonic vibration; Suction-vibration massage; Massage with moving diaphragms
- A61H23/02—Percussion or vibration massage, e.g. using supersonic vibration; Suction-vibration massage; Massage with moving diaphragms with electric or magnetic drive
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F7/00—Heating or cooling appliances for medical or therapeutic treatment of the human body
- A61F2007/0001—Body part
- A61F2007/0002—Head or parts thereof
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F7/00—Heating or cooling appliances for medical or therapeutic treatment of the human body
- A61F7/007—Heating or cooling appliances for medical or therapeutic treatment of the human body characterised by electric heating
- A61F2007/0071—Heating or cooling appliances for medical or therapeutic treatment of the human body characterised by electric heating using a resistor, e.g. near the spot to be heated
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H2201/00—Characteristics of apparatus not provided for in the preceding codes
- A61H2201/02—Characteristics of apparatus not provided for in the preceding codes heated or cooled
- A61H2201/0207—Characteristics of apparatus not provided for in the preceding codes heated or cooled heated
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H2201/00—Characteristics of apparatus not provided for in the preceding codes
- A61H2201/02—Characteristics of apparatus not provided for in the preceding codes heated or cooled
- A61H2201/0221—Mechanism for heating or cooling
- A61H2201/0228—Mechanism for heating or cooling heated by an electric resistance element
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H2201/00—Characteristics of apparatus not provided for in the preceding codes
- A61H2201/16—Physical interface with patient
- A61H2201/1602—Physical interface with patient kind of interface, e.g. head rest, knee support or lumbar support
- A61H2201/165—Wearable interfaces
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H2201/00—Characteristics of apparatus not provided for in the preceding codes
- A61H2201/50—Control means thereof
- A61H2201/5007—Control means thereof computer controlled
- A61H2201/501—Control means thereof computer controlled connected to external computer devices or networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The invention relates to the technical field of medical auxiliary rehabilitation and in particular provides an auxiliary rehabilitation training method for aphasia patients based on fusion voice recognition, comprising the following steps: responding to a voice recognition interaction request; acquiring user basic information, where the user basic information comprises user self-built information, family member auxiliary information, nurse-created information and treatment scheme information; evaluating the health state of the user based on the voice recognition interaction request; generating a treatment scheme architecture tree corresponding to the treatment scheme based on the patient's health state, and screening the resource sub-data set corresponding to that tree from a loaded, pre-configured resource database; and importing the selected rehabilitation therapy scheme into an auxiliary rehabilitation training device according to the selected health-state content of the patient. The invention enables quick response to patient training and treatment, and personalized customization of training and treatment schemes for different users, thereby solving the poor treatment effect and low efficiency of existing methods.
Description
Technical Field
The invention relates to an aphasia patient auxiliary rehabilitation training method based on fusion voice recognition, and to a corresponding auxiliary rehabilitation training device.
Background
In recent years, with the development of computer science and technology, artificial intelligence technologies such as speech recognition, instruction recognition and data mining have advanced substantially and been applied successfully in many products, driven largely by the new intelligent method of deep learning. Deep learning is a focus and hot point of current computer vision research and one of the common approaches to complex real-world problems. Computer vision, a milestone in the history of human science and technology, plays a significant role in the development of intelligent technology.
Combining intelligent technology with the treatment of aphasia patients is a new subject. The incidence of aphasia after cerebral apoplexy (stroke) is as high as 26%–38%, making it one of the common disabling sequelae of stroke; it manifests as a disorder of understanding or producing language, and its course of treatment is generally long. Acupuncture combined with language rehabilitation training is an effective therapy commonly used in clinical practice. To receive this combined treatment, aphasia patients often need to be hospitalized or make repeated hospital visits, and clinicians usually rely on subjective judgment, simply screening similar cases from large medical record libraries, which yields poor treatment effect and low efficiency. An auxiliary rehabilitation training method for aphasia patients based on fusion voice recognition is therefore proposed.
Disclosure of Invention
The invention aims to provide an aphasia patient auxiliary rehabilitation training method based on fusion voice recognition, so as to solve the problems that current rehabilitation training usually relies on doctors' subjective judgment, simply screens large numbers of medical records to select similar cases for treatment, and therefore has poor treatment effect and low efficiency.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the aphasia patient auxiliary rehabilitation training method based on the fusion voice recognition comprises the following steps of:
responding to the voice recognition interaction request;
acquiring user basic information, wherein the user basic information comprises user self-building information, family member auxiliary information, nurse creation information and treatment scheme information;
based on the voice recognition interaction request, evaluating the health state of the user;
generating a treatment plan architecture tree corresponding to the treatment plan based on the health status of the patient, and screening a resource sub-data set of the treatment plan architecture tree corresponding to the treatment plan from the loaded pre-configured resource database;
importing the screened rehabilitation therapy scheme into the auxiliary rehabilitation training device according to the selected result of the patient's health-state content.
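The five steps above can be sketched end to end as plain functions. This is only an illustrative sketch: the function names, the dictionary-based stores, and the state labels are all assumptions, not the patent's implementation.

```python
# Illustrative sketch of steps S1-S5; all names and data shapes are assumptions.

def respond_to_interaction(request):
    """Step 1: respond only to voice-recognition interaction requests."""
    return request.get("type") == "voice_interaction"

def get_user_basic_info(user_id, store):
    """Step 2: self-built, family-auxiliary, nurse-created and scheme info."""
    return store.get(user_id, {})

def evaluate_health_state(info):
    """Step 3: coarse health-state grade derived from the stored profile."""
    return info.get("classification", "unknown")

def screen_treatment_schemes(state, resource_db):
    """Step 4: pick the resource sub-data set matching the health state."""
    return resource_db.get(state, [])

def import_to_trainer(schemes):
    """Step 5: hand the selected scheme to the auxiliary training device."""
    return {"loaded": schemes[0]} if schemes else {"loaded": None}

store = {"u1": {"classification": "moderate"}}
resource_db = {"moderate": ["naming-drill", "tongue-acupoint-scheme"]}

result = None
if respond_to_interaction({"type": "voice_interaction"}):
    info = get_user_basic_info("u1", store)
    state = evaluate_health_state(info)
    result = import_to_trainer(screen_treatment_schemes(state, resource_db))
```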
Further, the auxiliary rehabilitation training method further comprises the following steps:
acquiring an access request of a user, wherein the access request comprises a user login password and auxiliary verification information;
verifying the user login password and auxiliary verification information, and logging in a personal rehabilitation training account;
and loading a next period rehabilitation training plan according to the loaded historical rehabilitation training record.
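A minimal sketch of this access flow, assuming a hashed-password check, a token as the auxiliary verification information, and a fixed plan sequence; none of these specifics are given in the patent text.

```python
import hashlib

# Hypothetical account and history stores.
USERS = {"patient01": hashlib.sha256(b"s3cret").hexdigest()}
HISTORY = {"patient01": ["week1-naming-drills"]}
PLAN_SEQUENCE = ["week1-naming-drills", "week2-repetition-drills",
                 "week3-sentence-drills"]

def login(user, password, aux_token):
    """Verify the login password plus auxiliary verification information."""
    stored = USERS.get(user)
    return (stored == hashlib.sha256(password.encode()).hexdigest()
            and aux_token == "valid-token")

def next_period_plan(user):
    """Load the next-period plan from the historical training record."""
    done = set(HISTORY.get(user, []))
    for plan in PLAN_SEQUENCE:
        if plan not in done:
            return plan
    return None
```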
Further, the auxiliary rehabilitation training method further comprises the following steps:
identifying a current voice instruction of a user;
extracting feature recognition points of a current voice instruction;
matching the feature recognition points with a preset voice feature database;
acquiring, from the voice feature database, the standard voice recognition information matched with the feature recognition points of the voice command; when the matching degree between the standard voice recognition information and the feature recognition points of the voice command is greater than a preset matching degree, confirming the voice command information to be recognized.
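One way to read this matching step is nearest-neighbour search over feature vectors with a similarity threshold. The cosine measure, the three-dimensional features, and the 0.95 threshold below are illustrative assumptions, not the patent's disclosed matching function.

```python
import math

FEATURE_DB = {  # hypothetical standard voice recognition entries
    "drink water": [0.9, 0.1, 0.3],
    "open door":   [0.1, 0.8, 0.5],
}
MATCH_THRESHOLD = 0.95  # stands in for the "preset matching degree"

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recognize(feature_points):
    """Return the best-matching command, or None if below the threshold."""
    best_cmd, best_score = None, 0.0
    for cmd, ref in FEATURE_DB.items():
        score = cosine(feature_points, ref)
        if score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd if best_score > MATCH_THRESHOLD else None
```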
Further, the auxiliary rehabilitation training method further comprises the step of establishing a voice characteristic database, and the voice characteristic database establishment method comprises the following steps:
obtaining standard voice instruction information, wherein the standard voice instruction information comprises a system voice writing standard;
acquiring a simulation voice instruction of a target user aiming at a system voice writing standard;
extracting standard simulation voice instruction feature recognition points;
inputting the feature recognition points of the simulation voice command into a feature library establishment model to be trained, and extracting semantic sample features corresponding to the feature recognition points of the simulation voice command through the feature library establishment model to obtain standard voice recognition information.
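As a stand-in for the feature-library establishment model, the sketch below simply averages each user's simulated-command feature points into a per-command template — an assumed simplification of the trained model described above, not its actual form.

```python
from collections import defaultdict

class FeatureLibraryBuilder:
    """Averages simulated-command feature points into per-command templates."""
    def __init__(self, dim):
        self.dim = dim
        self._sums = defaultdict(lambda: [0.0] * dim)
        self._counts = defaultdict(int)

    def add_sample(self, command, features):
        acc = self._sums[command]
        for i, v in enumerate(features):
            acc[i] += v
        self._counts[command] += 1

    def build(self):
        """Return per-command templates ('standard voice recognition info')."""
        return {cmd: [v / self._counts[cmd] for v in acc]
                for cmd, acc in self._sums.items()}

builder = FeatureLibraryBuilder(2)
builder.add_sample("hello", [1.0, 0.0])
builder.add_sample("hello", [0.0, 1.0])
library = builder.build()
```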
Further, the step of obtaining standard voice instruction information specifically includes:
acquiring standard sample voice, wherein the standard sample voice is captured through a somatosensory camera, with color data and depth data of the user sample voice collected at each sampling moment;
detecting the standard sample voice through a cascaded multi-task convolutional neural network model to obtain the voice feature points of the standard sample voice;
performing an approximate transformation on the standard sample voice based on these voice feature points and the voice feature points of the standard instruction; and applying random volume and contrast preprocessing to the standard sample voice to obtain the standard voice instruction information.
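One plausible reading of the random volume and contrast preprocessing is a random gain plus a dynamic-range stretch around the signal mean; the perturbation ranges below are assumptions, not values from the patent.

```python
import random

def augment(samples, seed=None):
    """Apply a random volume (gain) change and a contrast-like stretch
    of the samples around their mean."""
    rng = random.Random(seed)
    gain = rng.uniform(0.8, 1.2)      # random volume
    contrast = rng.uniform(0.9, 1.1)  # random contrast
    mean = sum(samples) / len(samples)
    return [((s - mean) * contrast + mean) * gain for s in samples]

wave = [0.0, 0.5, -0.5, 0.25]
out = augment(wave, seed=7)
```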
Further, the auxiliary rehabilitation training method further comprises the following steps:
acquiring a current voice instruction of a user, and calling user basic information based on the current voice instruction of the user;
loading resource data of the history medical record and acquiring rehabilitation training contents corresponding to the history medical record;
editing the imported historical medical record according to the rehabilitation training content, or importing a training program according to the rehabilitation training plan;
feeding back a new training program based on the user's current voice instruction, and comparing the actual training result with the ideal training result. When the actual training result is greater than the ideal training result, the actual training result is transmitted back to the rehabilitation training server; when the actual training result is smaller than the ideal training result, the expected ideal training result is changed, the changed ideal training result is substituted back into function model A, the obtained result is substituted into function model B, and this process is repeated until the actual training result is greater than the ideal training result, whereupon the actual training result is output.
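Function models A and B are not disclosed in the text, so the sketch below uses placeholder decay functions purely to show the control flow of the comparison loop.

```python
def model_a(ideal):
    """Placeholder for the undisclosed 'function model A': relax the target."""
    return ideal * 0.9

def model_b(value):
    """Placeholder for the undisclosed 'function model B'."""
    return value

def settle_training_result(actual, ideal, max_iter=100):
    """Lower the ideal result through models A and B until the actual
    training result exceeds it, then output the actual result."""
    for _ in range(max_iter):
        if actual > ideal:
            return actual, ideal
        ideal = model_b(model_a(ideal))
    raise RuntimeError("training result did not converge")

actual_out, final_ideal = settle_training_result(actual=60.0, ideal=100.0)
```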
Further, the step of evaluating the health status of the user specifically includes:
acquiring evaluation test questions in a test question library, wherein the test question library comprises test question questions, knowledge structure labels of the test questions and question related illness state information;
generating test information, collecting the test data generated by the user when answering the questions, and obtaining the answers to the questions from the question bank so that the test results can be analyzed;
and acquiring an analysis test result, generating a user test data matrix diagram based on the analysis test result, inputting the user test data matrix diagram into a capability level grading deep neural network, and carrying out classification identification on the user test data matrix diagram, wherein the classification identification result comprises training capability level grading information of the user.
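As a stand-in for the capability-level grading deep neural network, the sketch below scores the user test-data matrix by mean correctness and buckets it into three levels; the thresholds and level names are assumptions.

```python
def grade_capability(matrix):
    """Classify a user test-data matrix (1 = correct, 0 = wrong) into an
    assumed training-capability level."""
    flat = [cell for row in matrix for cell in row]
    score = sum(flat) / len(flat)
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"
```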
Further, the auxiliary rehabilitation training method further comprises the following steps:
the auxiliary rehabilitation training device guides the rehabilitation therapy scheme into the acupoint stimulator, and the acupoint stimulator controls the acupoint patch components connected with the acupoint stimulator to work according to the received stimulation therapy scheme, and the acupoint patch components stimulate the acupoints given in the rehabilitation therapy scheme.
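The dispatch from stimulator to patch components might look like the following; the class names, the scheme format, the intensity units, and the acupoint names are illustrative assumptions.

```python
class AcupointPatch:
    """A patch component attached to one acupoint."""
    def __init__(self, point):
        self.point = point
        self.active = False

    def stimulate(self, intensity):
        self.active = intensity > 0
        return (self.point, intensity)

class AcupointStimulator:
    """Forwards a received stimulation scheme to its attached patches."""
    def __init__(self, patches):
        self.patches = {p.point: p for p in patches}

    def apply_scheme(self, scheme):
        # scheme maps acupoint name -> intensity; unknown points are skipped
        return [self.patches[pt].stimulate(level)
                for pt, level in scheme.items() if pt in self.patches]

stim = AcupointStimulator([AcupointPatch("lianquan"), AcupointPatch("tongli")])
applied = stim.apply_scheme({"lianquan": 3, "unknown-point": 5})
```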
The aphasia patient auxiliary rehabilitation training device based on the fusion voice recognition based on the aphasia patient auxiliary rehabilitation training method based on the fusion voice recognition specifically comprises:
the voice recognition response module is used for responding to the user voice recognition interaction request;
the user basic information acquisition module is used for acquiring user basic information, wherein the user basic information comprises user self-building information, family member auxiliary information, nurse creation information and treatment scheme information;
the user health state assessment module is used for assessing the health state of the user based on the voice recognition interaction request;
the treatment scheme screening module generates a treatment scheme architecture tree corresponding to the treatment scheme based on the health state of the patient, and screens a resource sub-data set of the treatment scheme architecture tree corresponding to the treatment scheme from the loaded pre-configured resource database;
the treatment scheme importing module is used for importing the screened rehabilitation treatment scheme into the auxiliary rehabilitation training device according to the selected result of the health state content of the patient.
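The treatment-scheme architecture tree and the screening of its resource sub-data set can be pictured as a nested mapping keyed by health state; the tree contents and resource names below are purely illustrative.

```python
# Hypothetical architecture tree: health state -> scheme category -> leaves.
ARCHITECTURE_TREE = {
    "moderate": {
        "speech": ["naming-drill", "repetition-drill"],
        "acupuncture": ["tongue-point-scheme"],
    },
    "severe": {
        "acupuncture": ["scalp-point-scheme"],
    },
}

def screen_resources(health_state, resource_db):
    """Keep only the resource entries named by the leaves of the sub-tree
    for the given health state."""
    subtree = ARCHITECTURE_TREE.get(health_state, {})
    wanted = {leaf for leaves in subtree.values() for leaf in leaves}
    return {name: data for name, data in resource_db.items() if name in wanted}

resource_db = {
    "naming-drill": "cards-v1",
    "repetition-drill": "audio-v2",
    "scalp-point-scheme": "protocol-7",
}
subset = screen_resources("moderate", resource_db)
```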
Further, the user basic information acquisition module specifically includes:
an access request acquisition unit for acquiring an access request of a user, wherein the access request comprises a user login password and auxiliary verification information;
the user login verification unit verifies the user login password and the auxiliary verification information and logs in the personal rehabilitation training account;
and the rehabilitation training program loading unit loads the rehabilitation training program of the next period according to the loaded historical rehabilitation training record.
In summary, compared with the prior art, the invention has the following beneficial effects:
Compared with the prior art, the user trains and customizes the treatment scheme, then loads the rehabilitation training program imported by the rehabilitation training server over a communication data link. The rehabilitation training server acquires the user's nurse-created information and treatment scheme information through a data connection to the doctor end, while the doctor end interacts with the user end over the network to realize data transmission and sharing. The voice recognition response module responds to the user's voice recognition interaction request and acquires the user basic information, which comprises user self-built information, family member auxiliary information, nurse-created information and treatment scheme information. The treatment scheme screening module screens the user's treatment scheme multiple times, and the treatment scheme importing module imports the rehabilitation therapy scheme into the auxiliary rehabilitation training device. Quick response to patient training and treatment is thus realized, and different training and treatment schemes are customized individually for different users, solving the problems of poor treatment effect and low efficiency of existing methods.
Drawings
Fig. 1 is a flowchart of an implementation of a method for assisting rehabilitation training of aphasia patients based on fusion speech recognition according to an embodiment of the present invention.
Fig. 2 is a schematic sub-flowchart of a method for assisting rehabilitation training of aphasia patients based on fusion speech recognition according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of another sub-flowchart of an aphasia patient assisted rehabilitation training method based on fusion voice recognition according to an embodiment of the present invention.
Fig. 4 is a flowchart of an implementation of establishing a voice feature database in a method for assisting rehabilitation training of aphasia patients based on fusion voice recognition according to an embodiment of the present invention.
Fig. 5 is a flowchart of an implementation of acquiring standard voice instruction information in a method for assisting rehabilitation training of aphasia patients based on fusion voice recognition according to an embodiment of the present invention.
Fig. 6 is a flowchart of a sub-process implementation for evaluating the health status of a user in the method for assisting rehabilitation training of aphasia patients based on fusion speech recognition according to an embodiment of the present invention.
Fig. 7 is a flowchart of an implementation of assessing a health state of a user in a method for assisting rehabilitation training of a aphasia patient based on fusion speech recognition according to an embodiment of the present invention.
Fig. 8 is a flowchart of an assisted rehabilitation training device for aphasia patients based on fusion voice recognition according to an embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a user basic information acquisition module in a speech recognition fusion-based aphasia patient auxiliary rehabilitation training device according to an embodiment of the present invention.
Fig. 10 is a system architecture diagram of an aphasia patient auxiliary rehabilitation training device based on fusion voice recognition according to an embodiment of the present invention.
Fig. 11 is a schematic structural diagram of an acupoint stimulator according to an embodiment of the present invention.
Fig. 12 is a schematic structural view of an auxiliary needling portion according to an embodiment of the present invention.
Description of the embodiments
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making creative efforts based on the embodiments of the present invention are included in the protection scope of the present invention.
To receive acupuncture combined with language rehabilitation treatment, aphasia patients often need to be hospitalized or make repeated hospital visits. Clinicians usually rely on subjective judgment, simply screening large numbers of medical records and selecting similar cases for treatment, so the treatment effect is poor and the efficiency is low.
We therefore propose a method and a device for assisting rehabilitation training of aphasia patients based on fusion voice recognition. As shown in fig. 10, a user trains and customizes a treatment scheme through a user terminal 10, then loads the rehabilitation training program imported by a rehabilitation training server 30 over a communication data link. The rehabilitation training server 30 acquires the user's nurse-created information and treatment scheme information through a data connection to a doctor terminal 20, while the doctor terminal 20 interacts with the user terminal 10 over the network to realize data transmission and sharing. A voice recognition response module responds to the user's voice recognition interaction request and acquires the user basic information, which comprises user self-built information, family member auxiliary information, nurse-created information and treatment scheme information. A treatment scheme screening module screens the user's treatment scheme multiple times, and a treatment scheme importing module imports the rehabilitation therapy scheme into the auxiliary rehabilitation training device. This realizes quick response to patient training and treatment and personalized customization of different training and treatment schemes for different users, solving the problems of poor treatment effect and low efficiency of existing methods.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
As shown in fig. 1, the method for assisting rehabilitation training of aphasia patients based on fusion voice recognition according to one embodiment of the present invention includes:
step S1, responding to a voice recognition interaction request;
step S2, acquiring user basic information, wherein the user basic information comprises user self-building information, family member auxiliary information, nurse creation information and treatment scheme information;
In an embodiment of the present invention, the user self-established information includes, but is not limited to, age, sex, cause of illness, age at onset, historical treatment plans and patient classification, wherein patients are classified as severe, moderate or mild. Family member auxiliary information includes, but is not limited to, basic information that the patient cannot express or has difficulty expressing. Nurse creation information includes, but is not limited to, the user's attending physician information, the user's hospitalization records (bed number, ward number and corresponding caregiver information) and medication usage or injection records. Treatment scheme information is loaded through the doctor terminal 20, which retrieves the user's current and historical treatment scheme information from the hospital patient profile records.
Step S3, based on the voice recognition interaction request, the health state of the user is estimated;
step S4, generating a treatment scheme architecture tree corresponding to the treatment scheme based on the health state of the patient, and screening a resource sub-data set of the treatment scheme architecture tree corresponding to the treatment scheme from a loaded pre-configured resource database;
and step S5, importing the screened rehabilitation treatment scheme into the auxiliary rehabilitation training device according to the selection result of the patient's health state content.
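The screening of step S4 can be sketched as follows. This is a minimal illustration only: the names `RESOURCE_DB`, `build_plan_tree` and `screen_resources`, the tree layout, and the module list are assumptions for demonstration, not structures disclosed by the patent.

```python
# Hypothetical sketch of step S4: build a treatment-plan architecture tree from
# the evaluated health state, then screen the matching resource sub-dataset
# from a preloaded resource database. All names and data are illustrative.

RESOURCE_DB = {
    "severe": ["bedside listening drills", "single-syllable naming"],
    "moderate": ["picture naming", "sentence repetition"],
    "mild": ["paragraph reading", "free conversation"],
}

def build_plan_tree(health_state: str) -> dict:
    """Return a minimal architecture tree rooted at the evaluated health state."""
    return {"root": health_state,
            "children": [{"module": m} for m in
                         ("listening", "expression", "reading", "writing")]}

def screen_resources(tree: dict) -> list:
    """Screen the resource sub-dataset corresponding to the tree's root state."""
    return RESOURCE_DB.get(tree["root"], [])

tree = build_plan_tree("moderate")
subset = screen_resources(tree)
```

In step S5 the resulting `subset` would be what is imported into the auxiliary rehabilitation training device.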
In the embodiment of the invention, the rehabilitation training device is interactively connected with the acupuncture point stimulator through the Ethernet, the acupuncture point stimulator comprises a stimulator host and a stimulator execution unit, the stimulator execution unit is specifically an electrode cap for executing treatment operation on a user, the electrode cap is electrically connected with the stimulator host, the stimulator host is in the prior art, the stimulation host is not limited in the prior art, and when in use, the stimulator host acquires a treatment scheme, and then controls the electrode cap to stimulate and massage the acupuncture point of the user so as to assist the user in treatment.
Fig. 2 shows a sub-flow implementation flowchart of a fusion speech recognition-based aphasia patient assisted rehabilitation training method according to an embodiment of the present invention.
As shown in fig. 2, in a preferred embodiment of the present invention, an assisted rehabilitation training method for aphasia patients based on fusion speech recognition includes:
step S101, obtaining an access request of a user, wherein the access request comprises a user login password and auxiliary verification information;
in this embodiment, specifically, each patient corresponds to a unique account number, and when verification is passed, step S102 may be executed; the user authenticates and registers the corresponding account number according to the identity verification name, so that the privacy of the user is ensured, the user sets a password formed by fixed codes for login during registration, and meanwhile, the physical information of the user is input through the user terminal 10, wherein the physical information comprises at least one group of fingerprint information of the user, facial information of the user and voice information of the user, but is not limited to the fingerprint information, the facial information and the voice information of the user.
In this embodiment, the user terminal 10 may be a smart phone, a tablet computer or a personal service terminal.
Step S102, verifying the user login password and auxiliary verification information, and logging in a personal rehabilitation training account;
step S103, loading a next period rehabilitation training plan according to the loaded history rehabilitation training record.
In this embodiment, the history rehabilitation training record refers to a history rehabilitation training record loaded from the user terminal 10 after logging in, where the history rehabilitation training record includes a medical record name, a medical record file size, a rehabilitation training plan corresponding to medical record content, and a rehabilitation training database included in the rehabilitation training plan.
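One possible in-memory shape for the history rehabilitation training record described above is sketched below. The field names and the `load_next_period_plan` helper are illustrative assumptions, not the patent's schema.

```python
from dataclasses import dataclass, field

# Hypothetical record shape mirroring the fields named in the description:
# medical record name, file size, the associated training plan, and the
# rehabilitation training database the plan includes.

@dataclass
class HistoryRecord:
    medical_record_name: str
    file_size_bytes: int
    training_plan: str
    training_database: list = field(default_factory=list)

def load_next_period_plan(record: HistoryRecord) -> str:
    """Step S103 (sketch): derive the next-period plan from the history record."""
    return f"next period of: {record.training_plan}"

record = HistoryRecord("case-001", 2048, "naming drills", ["card set A"])
```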
Fig. 3 shows another sub-flow implementation flowchart of the aphasia patient assisted rehabilitation training method based on fusion speech recognition according to the embodiment of the present invention.
As shown in fig. 3, in a preferred embodiment of the present invention, the method for assisting rehabilitation training for aphasia patients based on fusion speech recognition further includes:
step S201, identifying the current voice instruction of the user;
step S202, extracting feature recognition points of a current voice instruction;
step S203, the feature recognition points are matched with a preset voice feature database;
step S204, obtaining standard voice recognition information matched with the feature recognition points of the voice command in the voice feature database, and confirming the voice command information to be recognized when the matching degree of the standard voice recognition information and the feature recognition points of the voice command is greater than a preset matching degree.
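The matching of steps S202–S204 can be sketched as below. Cosine similarity is used here as an illustrative matching-degree measure, and the feature database contents are invented for demonstration; the patent fixes neither.

```python
import math

# Sketch of steps S202-S204: compare the feature recognition points of a voice
# command against a preset feature database and accept the best match only when
# its matching degree exceeds a preset threshold.

def matching_degree(a, b):
    """Cosine similarity as an assumed matching-degree measure."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

FEATURE_DB = {  # hypothetical preset voice feature database
    "start training": [0.9, 0.1, 0.4],
    "stop": [0.1, 0.8, 0.2],
}

def recognize(features, threshold=0.95):
    """Return the matched command, or None when no match clears the threshold."""
    best = max(FEATURE_DB, key=lambda k: matching_degree(features, FEATURE_DB[k]))
    return best if matching_degree(features, FEATURE_DB[best]) > threshold else None
```

Returning `None` below the threshold corresponds to the command not being confirmed as voice command information to be recognized.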
In this embodiment, the current voice command of the user is captured by a recorder or camera external to the user terminal 10. The external recorder or camera is movable, its movement being driven by a servo hydraulic cylinder or an air cylinder, so that the user's voice can be collected from all directions, voice recognition is ensured to the greatest extent, and the patient's rehabilitation training is facilitated.
In this embodiment, recognition of the user's current voice command is performed with a Yolov2 convolutional neural network. The voice command is first obtained through the user terminal 10 and input into the Yolov2 convolutional neural network to obtain the target object type and position information in the scene; after the color information is fused in, the result is input into a random forest to obtain and output the rehabilitation training content, and the semantics of the current voice command are obtained through the user interaction module.
Fig. 4 shows a flowchart for implementing voice feature database establishment in a fusion voice recognition-based aphasia patient auxiliary rehabilitation training method according to an embodiment of the present invention.
As shown in fig. 4, in a preferred embodiment of the present invention, the method for assisting rehabilitation training further includes creating a voice feature database, where the method for creating the voice feature database includes:
step S2011, standard voice instruction information is obtained, wherein the standard voice instruction information comprises a system voice writing standard;
step S2012, obtaining a simulation voice instruction of a target user aiming at a system voice writing standard;
s2013, extracting standard simulation voice instruction feature recognition points;
step S2014, inputting the simulated voice instruction feature recognition points into a feature library building model to be trained, and extracting semantic sample features corresponding to the simulated voice instruction feature recognition points through the feature library building model to obtain standard voice recognition information.
In this embodiment, the standard simulated voice command feature recognition points are extracted and selected as required: they may be the region feature information near the finger only, the region feature information of the arm only, or region feature information randomly selected from at least one of the finger, wrist, palm and arm, used as the simulated voice command.
Fig. 5 shows a flowchart for implementing standard voice instruction information acquisition in a fusion voice recognition-based aphasia patient auxiliary rehabilitation training method according to an embodiment of the present invention.
As shown in fig. 5, in a preferred embodiment of the present invention, the step of obtaining standard voice command information specifically includes:
step S2021, obtaining standard sample voice, wherein the standard sample voice is collected by a somatosensory camera, and color data and depth data of the user sample voice at the sampling moment are collected based on the somatosensory camera;
step S2022, detecting the standard sample voice through a cascade multitask convolutional neural network model to obtain the voice feature points of the standard sample voice;
step S2023, performing approximate transformation on the standard sample voice based on the voice feature points and the voice feature points of the standard command, and performing random volume and contrast preprocessing on the standard sample voice to obtain the standard voice command information.
In this embodiment, the deep convolutional neural network model is a DenseNet network model used for target classification and recognition tasks. The DenseNet network model is improved by a lightweight method, and experimental verification of the standard sample voice is performed with miniImageNet.
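The random-volume preprocessing named in step S2023 can be sketched as a gain perturbation on a sampled waveform. The gain range and the clipping to [-1, 1] are assumptions for illustration; the patent does not specify them.

```python
import random

# Minimal sketch of the random-volume preprocessing of step S2023: scale a
# waveform by a random gain, clipping the result to the valid sample range.
# Gain bounds and clipping behaviour are assumptions, not from the patent.

def random_volume(samples, low=0.5, high=1.5, seed=None):
    """Apply a random gain in [low, high] and clip samples to [-1, 1]."""
    rng = random.Random(seed)
    gain = rng.uniform(low, high)
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

augmented = random_volume([0.2, -0.4, 0.9], seed=7)
```

A fixed `seed` makes the augmentation reproducible, which is convenient when building a deterministic standard voice feature database.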
Fig. 6 shows a flowchart of a sub-process implementation for evaluating the health status of a user in the method for assisting rehabilitation training of aphasia patients based on fusion speech recognition according to an embodiment of the present invention.
As shown in fig. 6, in a preferred embodiment of the present invention, an assisted rehabilitation training method for aphasia patients based on fusion speech recognition includes:
step S301, a current voice instruction of a user is obtained, and basic information of the user is called based on the current voice instruction of the user;
step S302, loading resource data of a history medical record and obtaining rehabilitation training contents corresponding to the history medical record;
step S303, editing the imported history medical record according to the rehabilitation training content, or importing a training program according to the rehabilitation training plan;
step S304, feeding back a new training program based on the current voice instruction of the user, and comparing the actual training result with the ideal training result;
step S305, when the actual training result is larger than the ideal training result, the actual training result is transmitted back to the rehabilitation training server 30;
step S306, when the actual training result is smaller than the ideal training result, changing the expected ideal training result, substituting the changed ideal training result back into function model A, substituting the obtained result into function model B, and repeating the above process until the actual training result is larger than the ideal training result;
step S307, outputting the actual training result.
In this embodiment, function model A and function model B are constructed from defined voice commands: voice commands in the sampling area are uploaded to the rehabilitation training server 30 through the user terminal 10, and the rehabilitation training server 30 recognizes and compares the user's training results in the sampling area, thereby establishing function models A and B. Substituting the ideal training plan into function models A and B then yields the ideal training result, so that different users' training plans can be adjusted and the users' training efficiency improved.
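The feedback loop of steps S304–S307 can be sketched as follows. The patent does not disclose the form of function models A and B, so simple linear placeholders are used here; the adjustment step size is likewise an assumption.

```python
# Sketch of steps S304-S307: when the actual result falls short of the ideal
# result, lower the expected ideal, pass it back through function model A and
# then function model B, and repeat until the actual result exceeds the ideal.
# model_a and model_b are placeholder linear functions, not the patent's models.

def model_a(x):
    return 0.9 * x      # placeholder for function model A

def model_b(x):
    return x + 1.0      # placeholder for function model B

def adjust_until_reached(actual, ideal, step=0.5, max_iter=100):
    """Iterate the A-then-B update on the ideal result until actual > ideal."""
    for _ in range(max_iter):
        if actual > ideal:
            return actual, ideal
        ideal = model_b(model_a(ideal - step))  # changed ideal through A, then B
    raise RuntimeError("did not converge within max_iter iterations")
```

With these placeholders the update contracts toward a fixed point, so the loop terminates whenever the actual result exceeds that fixed point.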
Fig. 7 shows a flowchart for implementing evaluation of health status of a user in a method for assisting rehabilitation training of aphasia patients based on fusion speech recognition according to an embodiment of the present invention.
As shown in fig. 7, in a preferred embodiment of the present invention, a method for assisting rehabilitation training for aphasia patients based on fusion speech recognition, wherein the method for assisting rehabilitation training specifically includes:
step S401, acquiring an evaluation test question in a test question library, wherein the test question library comprises test questions, knowledge structure labels of the test questions and question related illness state information;
step S402, generating test information, collecting the test data generated by the user answering the questions, and obtaining the answers to the questions from the question bank so as to analyze the test results;
step S403, obtaining analysis test results, generating a user test data matrix diagram based on the analysis test results, inputting the user test data matrix diagram into a capability level grading deep neural network, and carrying out classification identification on the user test data matrix diagram, wherein the classification identification result comprises training capability level grading information of the user.
In this embodiment, the questions are evaluated sequentially across four modules (test question subsystems): listening comprehension, expression, reading and writing. Each module has 9 questions (3 each of simple, medium and difficult, randomly extracted from the evaluation question bank, so that subjects who become familiar with the questions cannot skew the evaluation). From this it is judged which of the patient's language modules is most damaged. The evaluation result classifies the severity of each language module as normal, mild, moderate or severe, and corresponding questions are randomly extracted from the treatment question bank according to each module's degree of damage so as to generate a language rehabilitation training scheme. The evaluation result and the generated rehabilitation training scheme can be stored for subsequent use and for monitoring the rehabilitation effect.
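The per-module grading described above can be sketched as below. The four modules and the 9-question count come from the description; the wrong-answer band thresholds are assumptions chosen for illustration, not values disclosed by the patent.

```python
# Illustrative sketch of the evaluation grading: four language modules, nine
# questions each, graded normal/mild/moderate/severe by the share of wrong
# answers. The band boundaries are assumptions, not from the patent.

MODULES = ("listening", "expression", "reading", "writing")

def grade_module(wrong: int, total: int = 9) -> str:
    """Map a module's wrong-answer count to an assumed severity band."""
    ratio = wrong / total
    if ratio <= 1 / 9:
        return "normal"
    if ratio <= 3 / 9:
        return "mild"
    if ratio <= 6 / 9:
        return "moderate"
    return "severe"

def grade_patient(wrong_counts: dict) -> dict:
    """Grade every module; missing modules default to zero wrong answers."""
    return {m: grade_module(wrong_counts.get(m, 0)) for m in MODULES}

report = grade_patient({"listening": 1, "expression": 7, "reading": 3, "writing": 5})
```

The resulting `report` is the kind of per-module severity classification from which treatment questions would then be drawn.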
The auxiliary rehabilitation training method in this embodiment further includes:
step S6, the auxiliary rehabilitation training device guides the rehabilitation therapy scheme into an acupoint stimulator, and the acupoint stimulator controls an acupoint patch assembly connected with the acupoint stimulator to work according to the received stimulation therapy scheme, and the acupoint patch assembly stimulates the acupoint given in the rehabilitation therapy scheme.
Fig. 8 shows a workflow diagram of an aphasia patient assisted rehabilitation training device based on fusion speech recognition according to an embodiment of the present invention.
As shown in fig. 8, in a preferred embodiment of the present invention, an auxiliary rehabilitation training device for aphasia patients based on fusion speech recognition includes:
a voice recognition response module 100 for responding to a user voice recognition interaction request;
the user basic information acquisition module 200 is configured to acquire user basic information, where the user basic information includes user self-building information, family member auxiliary information, nurse creation information and treatment plan information;
the user health status evaluation module 300 evaluates the user health status based on the voice recognition interaction request;
the treatment plan screening module 400 generates a treatment plan architecture tree corresponding to the treatment plan based on the health status of the patient, and screens a resource sub-dataset of the treatment plan architecture tree corresponding to the treatment plan from the loaded pre-configured resource database;
the therapeutic scheme importing module 500 imports the selected rehabilitation therapeutic scheme into the auxiliary rehabilitation training device according to the selected result of the health status content of the patient.
Fig. 9 shows a schematic structural diagram of the user basic information acquisition module 200 in the fusion voice recognition-based aphasia patient auxiliary rehabilitation training device according to an embodiment of the present invention, where the user basic information acquisition module 200 specifically includes:
an access request obtaining unit 210, configured to obtain an access request of a user, where the access request includes a user login password and auxiliary authentication information;
a user login verification unit 220 for verifying the user login password and the auxiliary verification information, and logging in the personal rehabilitation training account;
the rehabilitation training program loading unit 230 loads the next period rehabilitation training program according to the loaded history rehabilitation training record.
In a preferred embodiment of the present invention, as shown in fig. 10, the auxiliary rehabilitation training device further comprises an acupoint stimulator;
the acupoint stimulator comprises a stimulator host 3 and a stimulator execution unit. The stimulator execution unit is specifically acupuncture equipment that performs the treatment operation on the user and is electrically connected with the stimulator host 3; the stimulator host is known in the prior art and is not limited here. In use, the stimulator host acquires a treatment scheme and then controls the acupuncture equipment to stimulate and massage the user's acupoints to assist the user's treatment;
the acupuncture equipment comprises a treatment helmet 1 and a treatment patch 2. The treatment helmet 1 is mounted on the patient's head for treating the patient's head, and the treatment patch 2 is attached to the patient's body for assisting treatment of the acupoints on the patient's body. The treatment patch 2 is electrically connected with the stimulator host 3 through a control wire, and a main acupuncture part 21 and a main heating piece 22 are provided on the treatment patch. The main acupuncture part 21 works in cooperation with the main heating piece 22, which enhances the patient's comfort and improves the efficiency of the patient's treatment.
In this embodiment, the therapeutic helmet 1 includes:
the helmet main body 11 is used for being sleeved on the head of a patient and protecting the head of the patient;
the auxiliary needling portion 12 is arranged on the helmet main body 11; at least one group of auxiliary needling portions 12 is provided, arranged circumferentially in the helmet main body. To adapt to needling work on the patient's different acupoints, the auxiliary needling portion 12 is fixedly connected with a needling portion adjusting piece 13, specifically a hydraulic cylinder, an air cylinder or an electric push rod; in operation, the needling portion adjusting piece 13 drives the auxiliary needling portion 12 to move, thereby adjusting its position.
As shown in fig. 11, the auxiliary needling section 12 includes:
a needling section housing 121;
an auxiliary acupuncture member 122 for performing acupoint massage to assist the patient's rehabilitation;
a patient massaging member 123, fixedly connected to the auxiliary acupuncture member 122, of which at least one set is provided for massaging the patient's head to reduce discomfort and pain; and
an auxiliary heating member 124 provided on the acupuncture part housing for heating the head and acupoints of the patient to assist the treatment of the patient.
In this embodiment, the patient massaging member 123 is specifically a massage ball or a vibrating block made of an elastomer material and is fixedly connected with a vibration motor; the auxiliary heating member 124 is specifically an electric heating rod or a resistance wire and is electrically connected with the stimulator host 3.
In summary, compared with the prior art, the user performs training and customizes a treatment scheme through the user terminal 10, which then loads the rehabilitation training program imported by the rehabilitation training server 30 over a communication data link. The rehabilitation training server 30 acquires the user's nurse creation information and treatment scheme information through a data connection to the doctor terminal 20, while the doctor terminal 20 interacts with the user terminal 10 over the network to realize data transmission and sharing. The voice recognition response module responds to the user's voice recognition interaction request and acquires the user basic information, which comprises user self-building information, family member auxiliary information, nurse creation information and treatment scheme information; the treatment scheme screening module screens the user's treatment scheme a plurality of times; and the treatment scheme importing module imports the rehabilitation treatment scheme into the auxiliary rehabilitation training device. Rapid response of patient training and treatment is thereby achieved, different training and treatment schemes are customized for different users, and the problems of poor treatment effect and low efficiency of the existing method are solved.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalents and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.
Claims (2)
1. An auxiliary rehabilitation training device based on an auxiliary rehabilitation training method for aphasia patients based on fusion voice recognition is characterized by being applied to a terminal,
the auxiliary rehabilitation training device specifically comprises:
the voice recognition response module is used for responding to the user voice recognition interaction request;
the user basic information acquisition module is used for acquiring user basic information, wherein the user basic information comprises user self-building information, family member auxiliary information, nurse creation information and treatment scheme information;
the user health state assessment module is used for assessing the health state of the user based on the voice recognition interaction request;
the treatment scheme screening module generates a treatment scheme architecture tree corresponding to the treatment scheme based on the health state of the patient, and screens a resource sub-data set of the treatment scheme architecture tree corresponding to the treatment scheme from the loaded pre-configured resource database;
the treatment scheme importing module is used for importing the screened rehabilitation treatment scheme into the auxiliary rehabilitation training device according to the content selection result of the health state of the patient;
the auxiliary rehabilitation training device also comprises an acupoint stimulator;
the acupuncture point stimulation instrument comprises a stimulation instrument host and a stimulation instrument execution unit, wherein the stimulation instrument execution unit is acupuncture equipment, and the acupuncture equipment is electrically connected with the stimulation instrument host;
the acupuncture equipment comprises a treatment helmet and a treatment patch, wherein the treatment helmet is arranged on the head of a patient and used for treating the head of the patient, the treatment patch is arranged to be attached to the body of the patient and used for assisting in treating acupuncture points of the body of the patient, the treatment patch is electrically connected with a stimulation instrument host through a control lead, and a main acupuncture part and a main heating part are arranged on the treatment patch;
the therapeutic helmet comprises: the helmet main body is used for being sleeved on the head of a patient and protecting the head of the patient;
the auxiliary needling parts are arranged on the helmet main body; at least one group of auxiliary needling parts is arranged circumferentially in the helmet main body; the auxiliary needling parts are fixedly connected with needling part adjusting pieces, which are specifically hydraulic cylinders, air cylinders or electric push rods;
the auxiliary needling section includes:
a needling section housing;
the auxiliary acupuncture piece is used for performing acupuncture point massage and assisting a patient in rehabilitation treatment;
the patient massage part is fixedly connected with the auxiliary acupuncture part, and at least one group is arranged for massaging the patient's head and relieving the patient's discomfort and pain; the auxiliary heating part is arranged on the acupuncture part shell and is used for heating the patient's head and acupoints to assist the patient's treatment;
the aphasia patient auxiliary rehabilitation training method based on fusion voice recognition comprises the following steps:
responding to the voice recognition interaction request;
acquiring user basic information, wherein the user basic information comprises user self-building information, family member auxiliary information, nurse creation information and treatment scheme information;
based on the voice recognition interaction request, evaluating the health state of the user;
generating a treatment plan architecture tree corresponding to the treatment plan based on the health status of the patient, and screening a resource sub-data set of the treatment plan architecture tree corresponding to the treatment plan from the loaded pre-configured resource database;
according to the selected result of the health state content of the patient, the selected rehabilitation therapy scheme is imported into an auxiliary rehabilitation training device;
the auxiliary rehabilitation training method further comprises the following steps:
acquiring an access request of a user, wherein the access request comprises a user login password and auxiliary verification information;
verifying the user login password and auxiliary verification information, and logging in a personal rehabilitation training account;
loading a next period rehabilitation training plan according to the loaded historical rehabilitation training record;
the auxiliary rehabilitation training method further comprises the following steps:
identifying a current voice instruction of a user;
extracting feature recognition points of a current voice instruction;
matching the feature recognition points with a preset voice feature database;
acquiring standard voice recognition information matched with the feature recognition points of the voice command in a voice feature database, and confirming voice command information to be recognized when the matching degree of the standard voice recognition information and the feature recognition points of the voice command is greater than a preset matching degree;
the auxiliary rehabilitation training method also comprises the steps of establishing a voice characteristic database, wherein the voice characteristic database establishment method comprises the following steps:
obtaining standard voice instruction information, wherein the standard voice instruction information comprises a system voice writing standard;
acquiring a simulation voice instruction of a target user aiming at a system voice writing standard;
extracting standard simulation voice instruction feature recognition points;
inputting the feature recognition points of the simulation voice command into a feature library establishment model to be trained, and extracting semantic sample features corresponding to the feature recognition points of the simulation voice command through the feature library establishment model to obtain standard voice recognition information;
the step of obtaining standard voice instruction information specifically comprises the following steps:
acquiring standard sample voice, wherein the standard sample voice is acquired through a somatosensory camera, and color data and depth data of the user sample voice at a sampling moment are acquired based on the somatosensory camera;
detecting the standard sample voice through a cascade multitask convolutional neural network model to obtain voice feature points of the feature standard sample voice;
performing approximate transformation on the standard sample voice based on the voice characteristic points and the voice characteristic points of the standard instruction; preprocessing the standard sample voice with random volume and contrast to obtain the standard voice instruction information;
the auxiliary rehabilitation training method further comprises the following steps:
acquiring a current voice instruction of a user, and calling user basic information based on the current voice instruction of the user;
loading resource data of the history medical record and acquiring rehabilitation training contents corresponding to the history medical record;
editing the history importing medical record according to the rehabilitation training content or importing a training program according to the rehabilitation training program;
based on the current voice instruction feedback new training program of the user, comparing the actual training result with the ideal training result, and when the actual training result is larger than the ideal training result, transmitting the actual training result back to the rehabilitation training server; when the actual training result is smaller than the ideal training result, changing the expected ideal training result, substituting the changed ideal training result back to the function model A, substituting the obtained result into the function model B again, repeating the above process until the actual training result is larger than the ideal training result, and outputting the actual training result;
the step of assessing the health status of the user specifically comprises:
acquiring evaluation test questions in a test question library, wherein the test question library comprises test question questions, knowledge structure labels of the test questions and question related illness state information;
generating test information, collecting test data generated by a user for answering the questions, analyzing test results, collecting the test data generated by the user for answering the questions, and simultaneously obtaining answers of the questions from a question bank;
and acquiring an analysis test result, generating a user test data matrix diagram based on the analysis test result, inputting the user test data matrix diagram into a capability level grading deep neural network, and carrying out classification identification on the user test data matrix diagram, wherein the classification identification result comprises training capability level grading information of the user.
2. The auxiliary rehabilitation training device for aphasia patients based on fusion voice recognition according to claim 1, wherein the user basic information acquisition module specifically comprises:
an access request acquisition unit for acquiring an access request of a user, wherein the access request comprises a user login password and auxiliary verification information;
a user login verification unit for verifying the user login password and the auxiliary verification information and logging in to the personal rehabilitation training account;
and a rehabilitation training program loading unit for loading the rehabilitation training program of the next period according to the loaded historical rehabilitation training record.
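The three units of claim 2 can be sketched as a single login flow: the access request carries a password and auxiliary verification information, both are checked, and the next-period program is derived from the last historical training record. The hashing scheme, the in-memory account store, and the program-naming rule are illustrative assumptions, not part of the patent.

```python
import hashlib

# Hypothetical account store; a real device would persist this server-side.
ACCOUNTS = {
    "patient01": {
        "pw_hash": hashlib.sha256(b"secret").hexdigest(),
        "aux": "1234",                  # auxiliary verification information
        "history": ["session-week1"],   # historical rehabilitation training records
    }
}

def login(user, password, aux_code):
    """Verify password and auxiliary code, then load the next-period program."""
    acct = ACCOUNTS.get(user)
    if acct is None:
        return None
    if hashlib.sha256(password.encode()).hexdigest() != acct["pw_hash"]:
        return None                     # password check failed
    if aux_code != acct["aux"]:
        return None                     # auxiliary verification failed
    # Load the next-period training program from the last historical record
    # (naming rule is an assumption for illustration).
    last = acct["history"][-1]
    return {"user": user, "next_program": "program-after-" + last}

session = login("patient01", "secret", "1234")
```

Both checks must pass before any account data is loaded, matching the claim's ordering of the verification unit before the program loading unit.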
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210251880.4A CN114617769B (en) | 2022-03-15 | 2022-03-15 | Aphasia patient auxiliary rehabilitation training device based on fusion voice recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114617769A CN114617769A (en) | 2022-06-14 |
CN114617769B true CN114617769B (en) | 2024-03-12 |
Family
ID=81902353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210251880.4A Active CN114617769B (en) | 2022-03-15 | 2022-03-15 | Aphasia patient auxiliary rehabilitation training device based on fusion voice recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114617769B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007292979A (en) * | 2006-04-25 | 2007-11-08 | Shimada Seisakusho:Kk | Device for supporting aphasia rehabilitation training |
CN104598758A (en) * | 2015-02-12 | 2015-05-06 | 上海市徐汇区中心医院 | System and method for evaluating hearing-speech rehabilitation training and curative effect of patients with post-stroke dysarthria |
CN108877891A (en) * | 2018-09-05 | 2018-11-23 | 北京中医药大学东直门医院 | Portable acupuncture point stimulation and human-computer interaction Speech rehabilitation training instrument and test method |
CN208389188U (en) * | 2017-09-15 | 2019-01-18 | 李雪芹 | A kind of mental patient's clinical treatment device |
CN109313933A (en) * | 2017-09-11 | 2019-02-05 | 深圳市得道健康管理有限公司 | The synchronous self-diagnosis system of trick channels and collaterals based on cloud computing platform and method |
CN111126280A (en) * | 2019-12-25 | 2020-05-08 | 华南理工大学 | Gesture recognition fusion-based aphasia patient auxiliary rehabilitation training system and method |
CN111145867A (en) * | 2019-11-25 | 2020-05-12 | 泰康保险集团股份有限公司 | Method and device for generating dietary scheme |
CN111179919A (en) * | 2019-12-20 | 2020-05-19 | 华中科技大学鄂州工业技术研究院 | Method and device for determining aphasia type |
CN112992124A (en) * | 2020-11-09 | 2021-06-18 | 深圳市神经科学研究院 | Feedback type language intervention method, system, electronic equipment and storage medium |
- 2022-03-15 CN CN202210251880.4A patent/CN114617769B/en active Active
Non-Patent Citations (1)
Title |
---|
Language rehabilitation training and nursing intervention for patients with stroke complicated by aphasia; Huang Hong; Chinese Journal of Modern Drug Application (Issue 12); pp. 256-257 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114898832B (en) | Rehabilitation training remote control system, method, device, equipment and medium | |
Caligiuri et al. | The neuroscience of handwriting: Applications for forensic document examination | |
CN101464729B (en) | Independent desire expression method based on auditory sense cognition neural signal | |
CN108776788A (en) | A kind of recognition methods based on brain wave | |
CN108721048B (en) | Computer-readable storage medium and terminal | |
CN113362946A (en) | Video processing apparatus, electronic device, and computer-readable storage medium | |
CN105700689A (en) | Personalized MI-EEG training and collecting method based on mirror image virtualization and Skinner reinforced learning | |
CN114949608B (en) | Program control device, medical system, and computer-readable storage medium | |
Mason et al. | A general framework for characterizing studies of brain interface technology | |
CN108289634A (en) | Learn the system and method for brain-computer interface for operator | |
CN111724882A (en) | System and method for training psychology of friend-already based on virtual reality technology | |
CN112951449A (en) | Cloud AI (artificial intelligence) regulation diagnosis and treatment system and method for neurological dysfunction diseases | |
CN111297379A (en) | Brain-computer combination system and method based on sensory transmission | |
Lim et al. | Patient-specific functional electrical stimulation strategy based on muscle synergy and walking posture analysis for gait rehabilitation of stroke patients | |
CN114617769B (en) | Aphasia patient auxiliary rehabilitation training device based on fusion voice recognition | |
CN108814569B (en) | Rehabilitation training control device | |
Singh et al. | A Survey of EEG and Machine Learning based methods for Neural Rehabilitation | |
Bi et al. | TDLNet: Transfer data learning network for cross-subject classification based on multiclass upper limb motor imagery EEG | |
Tolu et al. | Perspective on investigation of neurodegenerative diseases with neurorobotics approaches | |
Randolph et al. | Towards predicting control of a brain-computer interface | |
CN111967333B (en) | Signal generation method, system, storage medium and brain-computer interface spelling device | |
WO2021130766A2 (en) | Virtual brain cloning : telepathic data communications with virtual reality holographic projections using artificial intelligence | |
Matanga et al. | A Matlab/Simulink framework for real time implementation of endogenous brain computer interfaces | |
CN116052836B (en) | Laser treatment device and system based on gas analysis | |
Ramírez-Arias et al. | EEG-Based Motor and Imaginary Movement Classification: ML Approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||