CN114917544B - Visual method and device for assisting orbicularis oris muscle function training - Google Patents
Visual method and device for assisting orbicularis oris muscle function training
- Publication number
- CN114917544B (application CN202210519592.2A)
- Authority
- CN
- China
- Prior art keywords
- training
- signals
- orbicularis
- folding
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B23/00—Exercising apparatus specially adapted for particular parts of the body
- A63B23/025—Exercising apparatus specially adapted for particular parts of the body for the head or the neck
- A63B23/03—Exercising apparatus specially adapted for particular parts of the body for the head or the neck for face muscles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/389—Electromyography [EMG]
Abstract
The application provides a visualization method and device for assisting orbicularis oris muscle function training. The method comprises: acquiring oral state information of a trainee during training; displaying, or triggering the display of, a corresponding target graphic on a display device according to the training item; and controlling the form of the target graphic according to the oral state information.
Description
Technical Field
The application relates to the fields of medical rehabilitation equipment and biomedical engineering, and in particular to a visualization method and device for assisting orbicularis oris muscle function training.
Background
The upper airway is the passage through which airflow travels from the nostrils to the tracheal entrance, comprising the nasal cavity, nasopharynx, oropharynx, and laryngopharynx. Obstruction can occur at any segment of the upper airway, with rhinitis and tonsillar and/or adenoid hypertrophy being the leading causes. The prevalence of adenoid hypertrophy in children aged 5-14 is about 34%, and the incidence of allergic rhinitis is even higher. When the nasal cavity and nasopharynx are completely or partially blocked, airflow enters the lower airway wholly or partly through the oral cavity, oropharynx, and laryngopharynx; that is, the child compensates by mouth breathing.
A long-term mouth-opening habit relaxes the child's labial muscles, to the point that the everted upper and lower lips cannot close and the mandible retrudes. The airflow stimulates the oral cavity, a high, narrow hard palate forms, and the mandible grows backward and downward. In addition, the impact of the airflow forces the tongue to droop, unbalancing the buccal and palatal muscle forces on the maxilla so that the upper dental arch narrows; together with the weakened labial muscles, the upper anterior teeth protrude forward, producing a deep overjet. The result is known as "adenoid facies".
Surgical removal of the tonsils and/or adenoids can eliminate the cause of the upper airway obstruction, but long-term mouth breathing has already weakened labial muscle function, and in most pediatric patients the lips still cannot close naturally after surgery. Breaking the mouth-opening habit through postoperative orbicularis oris muscle function training is therefore one of the key points of treatment.
Training actions must meet a certain standard, yet current evaluation relies mainly on subjective judgment of whether an action is performed correctly; an objective, unified standard is lacking. Moreover, habit formation is a long-term process, and a single, repetitive training action is usually too boring to hold a child's interest. The child cannot concentrate on training, finds it hard to persist, and the training effect is poor.
Disclosure of Invention
In view of the above technical problems, the present application provides a visualization method for assisting orbicularis oris muscle function training, which includes:
acquiring oral state information of a trainee during training;
displaying, or triggering the display of, a corresponding target graphic on a display device according to the training item; and
controlling the form of the target graphic according to the oral state information.
Further, the oral state information is defined by electromyographic (EMG) signals collected from the skin surface over the trainee's orbicularis oris and/or by pressure signals measured between the upper and lower lips.
Further, when the training item is thread-sucking training, the shortening of the target graphic's length is controlled according to the EMG signal.
Further, when the training item is lip-clamping training, the length of the target graphic is controlled according to the pressure signal.
Further, when the training item is "bo" sound training, the size of the target graphic is controlled according to the pressure signal, and the moving distance of the target graphic is controlled according to the EMG signal.
Further, before training, the EMG and pressure signals of the actions involved in the training item are acquired to obtain a threshold for each action.
Further, the form of the corresponding target graphic is controlled according to the EMG energy value or sample entropy value derived from the EMG signal.
Further, the form of the corresponding target graphic is controlled according to the amplitude mean of the pressure signal.
The present application also provides a device for assisting orbicularis oris muscle function training, the device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of the above-described method.
The present application also provides a computer readable medium storing instructions that, when executed, cause a system to perform the operations of the above-described method.
Addressing the subjectivity of conventional means of evaluating lip muscle training actions, the monotony of the training process, and the difficulty of monitoring progress, the visualization method and device for assisting orbicularis oris muscle function training of the present application are based on an orbicularis oris information acquisition apparatus with multichannel surface EMG and pressure signals. Different training actions are carried out with game images as the carrier: the EMG signals from the surface of the patient's orbicularis oris and the pressure signals between the upper and lower lips are preprocessed, features are extracted, and the extracted features are mapped into training games, so that each training action corresponds to a different game. The signal features provide visual real-time feedback by controlling the activities of characters or objects in the game environment, thereby quantifying the training process effectively and aiding the patient's rehabilitation.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 shows a flow diagram of a method for visually assisting orbicularis oris muscle function training according to one embodiment of the application;
FIG. 2 shows a schematic flow chart of the thread-sucking training in one embodiment of the application;
FIG. 3 illustrates a graphical user interface for the thread-sucking training in one embodiment of the application;
FIG. 4 shows a schematic flow chart of the lip-clamping training in one embodiment of the application;
FIG. 5 illustrates a graphical user interface for the lip-clamping training in one embodiment of the application;
FIG. 6 shows a schematic flow chart of the "bo" sound training in one embodiment of the application;
FIG. 7 illustrates a graphical user interface for the "bo" sound training in one embodiment of the application;
FIG. 8 illustrates functional modules of an exemplary system that may be used with embodiments of the present application.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The application is described in further detail below with reference to the accompanying drawings.
In one typical configuration of the application, the terminal, the devices of the service network, and the trusted party each include one or more processors (e.g., central processing units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include computer-readable media such as volatile memory, e.g., random access memory (RAM), and/or non-volatile memory, e.g., read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can store information accessible by a computing device.
The device includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-machine interaction with a user (for example, via a touch pad), such as a smartphone or tablet computer, which may run any operating system, such as Android or iOS. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of network servers, or a cloud of servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a virtual supercomputer is formed from a group of loosely coupled computers. The network in which the device resides includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPNs, wireless ad hoc networks, and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device with the network device, a touch terminal, or the network device with a touch terminal through a network.
Of course, those skilled in the art will appreciate that the above devices are merely examples; other existing or future devices, where applicable to the present application, are also intended to fall within its scope and are incorporated herein by reference.
In the description of embodiments of the present application, the meaning of "a plurality" is two or more unless specifically defined otherwise.
As shown in fig. 1, a method for visually assisting orbicularis oris muscle function training according to an embodiment of the present application includes:
s100, initializing training project actions;
s200, selecting a training item;
s300, collecting mouth state information;
s400, controlling a visualized target graph according to the mouth state information;
s500, training is finished.
In this embodiment, EMG electrodes are placed on the skin over the trainee's orbicularis oris to collect EMG signals. Preferably, multiple electrodes are placed at different positions over the muscle, for example eight electrodes, four above the upper lip and four below the lower lip; the specific number of electrodes is not limited here. A pressure sensor is placed between the trainee's upper and lower lips to collect pressure signals. The trainee's oral state information during training is thus obtained from the EMG and pressure signals.
The raw signals acquired from the EMG electrodes and the pressure sensor require signal conditioning. For the raw EMG signal collected by the electrodes, the effective frequency range is 20-500 Hz, so an amplifier and band-pass filter are provided to extract the effective signal, and a notch filter removes 50 Hz/60 Hz power-line interference; a signal amplification circuit is typically provided for the output of the pressure sensor. After analog-to-digital conversion, the conditioned signals can be processed further in a data processing system; in this embodiment, "EMG signal" and "pressure signal" refer to the digital signals after analog-to-digital conversion.
Chinese patent application No. 202110359579.0, entitled "A lip sensor device combining myoelectricity and pressure signals", provides a lip sensor device that is convenient for the trainee to wear, integrates the EMG electrodes and pressure sensor at the detection positions of the mouth, and outputs conditioned digital signals; it is suitable as the device for acquiring the trainee's oral state information in this embodiment. Of course, other arrangements of EMG electrodes and pressure sensors may be used when applying the method of the present application, without limitation.
The orbicularis oris muscle function training in this embodiment includes the following items:
1. Thread-sucking training: take a sterilized cotton thread (a length of 50 cm is recommended), hold one end in the mouth, and use labial muscle force to draw the thread into the mouth;
2. Lip-clamping training: the trainee places a thin, light object with a flat surface (such as a jade pendant) between the lips, holds it firmly with the lips alone (it must not be gripped by the front teeth), and lets it drop naturally when the lips relax;
3. The "bo" sound: the trainee wraps the upper and lower lips over the upper and lower front teeth respectively, presses the lips together (so that the red of the lips is not visible from the front), holds for 3-5 seconds, and then pushes outward to produce a "bo" sound.
After the EMG electrodes and the pressure sensor are fitted, and before the first training session, the actions involved in the training item must be initialized to obtain, for each action, the action threshold of the corresponding EMG signal and the pressure threshold of the corresponding pressure signal. Preferably, the collected EMG and pressure signals are first further denoised, for example by wavelet denoising. The action threshold of the EMG signal can be obtained by a Gaussian distribution method or a sample entropy method; the pressure threshold of the pressure signal can be obtained by an amplitude mean method.
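A minimal sketch of this initialization step is given below. The safety factor `k` and the calibration ratio `ratio` are hypothetical parameters chosen for illustration; the patent names the methods but not their parameters:

```python
import numpy as np

def gaussian_threshold(rest_emg, k=3.0):
    """'Gaussian distribution'-style action threshold: mean plus k standard
    deviations of the rectified resting-baseline EMG (k is an assumed factor)."""
    rect = np.abs(rest_emg)
    return rect.mean() + k * rect.std()

def pressure_threshold(calib_pressure, ratio=0.6):
    """Amplitude-mean pressure threshold: a fraction of the mean amplitude
    recorded while the trainee performs the reference action."""
    return ratio * np.abs(calib_pressure).mean()
```

During training, a feature value exceeding its threshold is then taken as evidence that the action is being performed to standard.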
After the initialization of the actions is completed, a training item is selected on the terminal device executing the method of this embodiment. The terminal device may be connected to a display apparatus, or have one built in, so that a graphical interface matched to the orbicularis oris function training can be shown to the trainee. The terminal device may be a computer, tablet, mobile phone, set-top box, or the like.
When the selected training item is the thread-sucking training, the graphical interface displayed on the display apparatus includes a target graphic matched to this training, and the shortening of the target graphic's length is controlled according to the collected EMG signal.
This embodiment provides a noodle-eating game to accompany the thread-sucking training. The control principle is based on the EMG signal: features of the surface EMG signals collected by the sensors are quantified, and these features control how the noodle eaten by the character in the game shortens.
As shown in figs. 2 and 3, the graphical interface shows a child sucking up a noodle. While the trainee performs the thread-sucking training, features such as the EMG energy value or sample entropy value are extracted from the collected EMG signal. When an extracted feature value reaches the action threshold of the corresponding action, it is reflected as the child sucking the noodle in the interface, so the length of the noodle decreases accordingly; the shortening of the noodle is thus controlled by the extracted feature value. For the trainee, sucking the cotton thread into the mouth correspondingly shortens the noodle on screen. When training ends, the training data are saved and training statistics such as duration, total noodle length consumed, estimated average muscle strength, and training score are displayed in the graphical interface, which may also provide an entry for viewing historical results and statistical analyses.
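A per-frame update for this mapping might look as follows. The linear mapping and the `gain` parameter are assumptions for the sketch; the patent states only that the feature value controls the shortening:

```python
def update_noodle(length, emg_energy, threshold, gain=0.05):
    """Shorten the on-screen noodle in proportion to how far the EMG
    energy feature exceeds the action threshold; hold still otherwise.
    (gain is an illustrative scaling constant, not from the patent.)"""
    if emg_energy <= threshold:
        return length  # action not performed to standard: no shortening
    shrink = gain * (emg_energy - threshold)
    return max(0.0, length - shrink)  # never shrink below zero
```

Gating on the threshold means an action that does not meet the standard produces no visible progress, which is exactly the feedback property the training relies on.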
During thread-sucking training, real-time visual feedback helps the trainee standardize the training action: if the action does not meet the requirement, the noodle does not shorten as it should, while consistently correct actions help form a training habit and enhance the training effect. The visual aid also improves the trainee's attention and motivation, and is especially engaging for children.
Preferably, the collected EMG signals can first be filtered and denoised, for example by wavelet denoising.
In some embodiments, the feature values of the EMG signal are extracted from the optimal channel, i.e., the channel with the highest signal-to-noise ratio among the multiple channels.
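One simple way to pick such an optimal channel is sketched below, using the action-to-rest RMS ratio as the SNR proxy; this estimator is an assumption, since the patent does not specify how the signal-to-noise ratio is computed:

```python
import numpy as np

def best_channel(action_epochs, rest_epochs):
    """Return the index of the channel (axis 0) whose RMS during the
    action is largest relative to its RMS at rest (a simple SNR proxy)."""
    def rms(x):
        return np.sqrt(np.mean(np.square(x), axis=1))
    snr = rms(action_epochs) / rms(rest_epochs)
    return int(np.argmax(snr))
```

With eight electrodes, `action_epochs` and `rest_epochs` would each be 8 x N arrays of conditioned samples from the initialization recordings.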
In some embodiments, the training intensity can also be set when selecting the thread-sucking training, such as training targets for training time, total noodle length, and so on. When the training target is reached, a training-end prompt or the training result is displayed on the graphical interface.
When the selected training item is the lip-clamping training, the graphical interface displayed on the display apparatus includes a target graphic matched to this training, and the length of the target graphic is controlled according to the collected pressure signal.
This embodiment provides a "spring-pressing" game to accompany the lip-clamping training. The control principle is based on the pressure signal: the amplitude mean of the pressure signal collected by the sensor controls the height of the character's spring in the game, the goal being to keep the spring within a specified compressed-height range.
As shown in figs. 4 and 5, the graphical interface shows a spring rocking horse. While the trainee performs the lip-clamping training, features such as the amplitude mean are extracted from the collected pressure signal. When the extracted feature value reaches the pressure threshold of the action, the spring in the interface compresses and shortens, so the feature value is reflected in the spring's length. For the trainee, the clamping force is reflected in real time by the length (i.e., compression) of the spring. When training ends, the training data are saved and statistics such as duration, time on target, estimated average muscle strength, and training score are displayed in the graphical interface, which may also provide an entry for viewing historical results and statistical analyses.
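The mapping from the pressure amplitude mean to spring compression could be sketched as below; the threshold gating, linear gain, and compression cap are illustrative assumptions rather than details from the patent:

```python
def spring_compression(pressure_mean, threshold, max_compress=1.0, gain=0.02):
    """Map the pressure amplitude mean to a spring compression fraction:
    zero below the pressure threshold, then proportional to the excess,
    capped at full compression. (gain and cap are assumed constants.)"""
    if pressure_mean < threshold:
        return 0.0  # insufficient force: spring stays uncompressed
    return min(max_compress, gain * (pressure_mean - threshold))
```

Tying the difficulty level to the compression fraction that counts as "on target" then needs only a comparison against a per-level preset.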
During lip-clamping training, real-time visual feedback helps the trainee adjust and maintain the clamping force: when the force is insufficient, the spring is not compressed or only slightly compressed, and when the force weakens, the spring's length correspondingly recovers, reminding the trainee to press harder. Likewise, the visual aid improves the trainee's attention and motivation, especially for children.
Preferably, the collected pressure signals can first be filtered and denoised, for example by wavelet denoising.
In some embodiments, the training intensity can also be set when selecting the lip-clamping training, such as training targets for training time, time on target, difficulty level, and so on. The difficulty level may be tied to the compression of the spring: the action is considered on target when the compression preset for the level is reached, and the time on target is accumulated accordingly. When the training target is reached, a training-end prompt or the training result is displayed on the graphical interface.
When the selected training item is the "bo" sound, the graphical interface displayed on the display apparatus displays, or triggers the display of, a target graphic matched to this training; the size of the target graphic is controlled according to the collected pressure signal, and its moving distance according to the collected EMG signal.
This embodiment provides a bubble-blowing game to accompany the "bo" sound training. The control principle combines the EMG and pressure signals: the amplitude mean of the pressure signal collected by the sensor controls the size of the bubble blown by the character in the game (i.e., its diameter), and the features of the EMG signal control the distance the bubble travels.
As shown in figs. 6 and 7, the graphical interface shows a child blowing bubbles. During the "bo" sound training, features are extracted from the collected EMG and pressure signals, such as the EMG energy value or sample entropy value of the EMG signal and the amplitude mean of the pressure signal.
When the trainee presses the lips together and the feature value extracted from the pressure signal reaches the pressure threshold of the action, the child in the interface blows a bubble, whose size (e.g., diameter) is controlled by the feature value. When the trainee's lips release outward to produce the "bo" sound, the EMG feature value reaching the action threshold of the corresponding action is reflected in the distance the bubble travels. For the trainee, the "bo" action is thus reflected in the size of the bubble and the distance it is blown. When training ends, the training data are saved and statistics such as the number of repetitions, number of on-target repetitions, estimated average muscle strength, and training score are displayed in the graphical interface, which may also provide an entry for viewing historical results and statistical analyses.
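Combining both signals as described, a hypothetical per-frame update for the bubble game might be as follows; the scaling constants and the rule that a bubble only travels once it exists are illustrative assumptions:

```python
def bubble_state(pressure_mean, emg_energy, p_thresh, e_thresh):
    """Bubble diameter scales with pressure above the pressure threshold;
    the bubble travels only when the EMG burst crosses its own threshold.
    (The 0.1 and 0.5 scaling factors are assumed, not from the patent.)"""
    diameter = max(0.0, pressure_mean - p_thresh) * 0.1
    if diameter > 0.0 and emg_energy > e_thresh:
        distance = (emg_energy - e_thresh) * 0.5
    else:
        distance = 0.0  # no lip seal or no "bo" burst: bubble goes nowhere
    return diameter, distance
```

The two-stage gating mirrors the action itself: the lip press (pressure) must precede the outward release (EMG burst) for the bubble to fly.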
During the "bo" sound training, real-time visual feedback helps the trainee adjust the lip-pressing force and the sound-producing action, while also improving attention and motivation, especially for children.
Preferably, the collected EMG and pressure signals can first be filtered and denoised, for example by wavelet denoising.
In some embodiments, the feature values of the EMG signal are extracted from the optimal channel, i.e., the channel with the highest signal-to-noise ratio among the multiple channels.
In some embodiments, the training intensity can also be set when selecting the "bo" sound training, such as training targets for the number of repetitions, number of on-target repetitions, difficulty level, and so on. The difficulty level may be tied to the size and/or travel distance of the bubble: the action is considered on target when the preset size and/or distance for the level is reached, and the on-target count is accumulated accordingly. When the training target is reached, a training-end prompt or the training result is displayed on the graphical interface.
The present embodiment also provides a computer-readable storage medium storing computer code which, when executed, performs the method described above.
The present embodiment also provides a computer program product which, when executed by a computer device, performs the method described above.
The present embodiment also provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
wherein the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method described above.
FIG. 8 illustrates an exemplary system that can be used to implement various embodiments described in the present application.
As shown in fig. 8, in some embodiments, system 1000 can function as any of the user terminal devices of the various described embodiments. In some embodiments, system 1000 can include one or more computer-readable media (e.g., system memory or NVM/storage 1020) having instructions and one or more processors (e.g., processor(s) 1005) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules to perform the actions described in this disclosure.
For one embodiment, the system control module 1010 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1005 and/or any suitable device or component in communication with the system control module 1010.
The system control module 1010 may include a memory controller module 1030 to provide an interface to the system memory 1015. The memory controller module 1030 may be a hardware module, a software module, and/or a firmware module.
System memory 1015 may be used, for example, to load and store data and/or instructions for system 1000. For one embodiment, system memory 1015 may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, system memory 1015 may comprise double data rate type 4 synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module 1010 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 1020 and communication interface(s) 1025.
For example, NVM/storage 1020 may be used to store data and/or instructions. NVM/storage 1020 may include any suitable non-volatile memory (e.g., flash memory) and/or any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDDs), compact disc (CD) drives, and/or digital versatile disc (DVD) drives).
NVM/storage 1020 may include storage resources that are physically part of the device on which system 1000 is installed, or storage resources that the device can access without their being part of the device. For example, NVM/storage 1020 may be accessed over a network via communication interface(s) 1025.
Communication interface(s) 1025 may provide an interface for system 1000 to communicate over one or more networks and/or with any other suitable device. The system 1000 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 1005 may be packaged together with logic of one or more controllers (e.g., memory controller module 1030) of the system control module 1010. For one embodiment, at least one of the processor(s) 1005 may be packaged together with logic of one or more controllers of the system control module 1010 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1005 may be integrated on the same die with logic of one or more controllers of the system control module 1010. For one embodiment, at least one of the processor(s) 1005 may be integrated on the same die with logic of one or more controllers of the system control module 1010 to form a system on chip (SoC).
In various embodiments, system 1000 may be, but is not limited to being: a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 1000 may have more or fewer components and/or different architectures. For example, in some embodiments, system 1000 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application specific integrated circuits (ASICs), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, e.g., using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to perform the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Those skilled in the art will appreciate that the form of computer program instructions present in a computer readable medium includes, but is not limited to, source files, executable files, installation package files, etc., and accordingly, the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media whereby a communication signal containing, for example, computer readable instructions, data structures, program modules, or other data, is transferred from one system to another. Communication media may include guided transmission media, such as cables and wires (e.g., optical fiber, coaxial cable), and wireless (unguided) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared media. Computer readable instructions, data structures, program modules, or other data may be embodied as a modulated data signal, for example in a wireless medium such as a carrier wave or a similar mechanism, such as one embodied as part of spread-spectrum technology. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital, or a hybrid modulation technique.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to: volatile memory, such as random access memory (RAM, DRAM, SRAM); nonvolatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); or any other medium, now known or later developed, that can store computer-readable information/data for use by a computer system.
An embodiment according to the application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the application as described above.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.
Claims (6)
1. A visual method of assisting orbicularis oris muscle function training, comprising:
acquiring oral state information of a trainee during training, wherein the oral state information is defined by electromyographic signals acquired from the skin surface over the trainee's orbicularis oris and/or by pressure signals between the upper and lower lips;
displaying, or triggering the display of, a corresponding target graphic on a display device according to a training item, wherein the training items comprise line-folding training, lip-folding training, and 'Bo' sound training;
defining a form of the target graphic according to the oral state information, wherein,
when the training item is the line-folding training, defining the reduction in length of the target graphic according to the electromyographic signal;
when the training item is the lip-folding training, defining the length of the target graphic according to the pressure signal;
and when the training item is the 'Bo' sound training, defining the size of the target graphic according to the pressure signal, and defining the movement distance of the target graphic according to the electromyographic signal.
2. The method of claim 1, wherein, before training, the electromyographic signals and the pressure signals of the actions involved in the training item are acquired to obtain a threshold for each action.
3. The method of claim 1, wherein the form of the corresponding target graphic is defined according to a myoelectric energy value or a sample entropy value of the electromyographic signal.
4. The method of claim 1, wherein the form of the corresponding target graphic is defined according to a corresponding amplitude mean of the pressure signal.
5. An apparatus for assisting orbicularis oris muscle function training, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform operations in accordance with the method of any one of claims 1 to 4.
6. A computer readable medium storing instructions that, when executed, cause a system to perform the operations of the method of any one of claims 1 to 4.
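The signal-to-graphic mapping recited in claims 1 and 3 can be sketched in code. The following Python fragment is illustrative only: the function names, the mean-square definition of the "myoelectric energy value", the SampEn parameters (m = 2, r = 0.2 × SD), and the shape-parameter dictionary are assumptions of this sketch, not details specified by the claims.

```python
import math

def emg_energy(samples):
    """Mean-square energy of an sEMG window (one plausible 'myoelectric energy value')."""
    return sum(x * x for x in samples) / len(samples)

def sample_entropy(samples, m=2, r=None):
    """Sample entropy SampEn(m, r), the regularity measure named in claim 3.

    Standard definition: B counts template pairs of length m within tolerance r
    (Chebyshev distance, self-matches excluded), A counts pairs of length m + 1;
    SampEn = -ln(A / B). By convention r defaults to 0.2 x standard deviation.
    """
    n = len(samples)
    if r is None:
        mean = sum(samples) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in samples) / n)

    def matches(length):
        # Use the same N - m templates for both lengths, per the usual definition.
        tpl = [samples[i:i + length] for i in range(n - m)]
        return sum(
            1
            for i in range(len(tpl))
            for j in range(i + 1, len(tpl))
            if max(abs(a - b) for a, b in zip(tpl[i], tpl[j])) <= r
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

def target_shape(item, emg_window, pressure_window):
    """Map a signal window to target-graphic parameters, per claim 1's per-item rules."""
    pressure_mean = sum(pressure_window) / len(pressure_window)
    if item == "line-folding":   # EMG signal drives the graphic's (reduction in) length
        return {"length": emg_energy(emg_window)}
    if item == "lip-folding":    # pressure signal drives the graphic's length
        return {"length": pressure_mean}
    if item == "bo-sound":       # pressure -> size, EMG -> movement distance
        return {"size": pressure_mean, "distance": emg_energy(emg_window)}
    raise ValueError(f"unknown training item: {item}")
```

In a real trainer the raw feature values would be normalized against the per-action thresholds of claim 2 before being mapped to pixels; that calibration step is omitted here.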
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210519592.2A CN114917544B (en) | 2022-05-13 | 2022-05-13 | Visual method and device for assisting orbicularis oris muscle function training |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114917544A CN114917544A (en) | 2022-08-19 |
CN114917544B true CN114917544B (en) | 2023-09-22 |
Family
ID=82808325
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210519592.2A Active CN114917544B (en) | Visual method and device for assisting orbicularis oris muscle function training | 2022-05-13 | 2022-05-13 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114917544B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017184274A1 (en) * | 2016-04-18 | 2017-10-26 | Alpha Computing, Inc. | System and method for determining and modeling user expression within a head mounted display |
CN108415560A (en) * | 2018-02-11 | 2018-08-17 | 广东欧珀移动通信有限公司 | Electronic device, method of controlling operation thereof and Related product |
CN109646889A (en) * | 2019-02-18 | 2019-04-19 | 河南翔宇医疗设备股份有限公司 | Tongue muscle training system and tongue muscle training equipment |
CN109885173A (en) * | 2018-12-29 | 2019-06-14 | 深兰科技(上海)有限公司 | A kind of noiseless exchange method and electronic equipment |
CN110865705A (en) * | 2019-10-24 | 2020-03-06 | 中国人民解放军军事科学院国防科技创新研究院 | Multi-mode converged communication method and device, head-mounted equipment and storage medium |
CN113274038A (en) * | 2021-04-02 | 2021-08-20 | 上海大学 | Lip sensor device combining myoelectricity and pressure signals |
CN113362924A (en) * | 2021-06-05 | 2021-09-07 | 郑州铁路职业技术学院 | Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7452016B2 (en) * | 2020-01-09 | 2024-03-19 | 富士通株式会社 | Learning data generation program and learning data generation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Amrulloh et al. | Automatic cough segmentation from non-contact sound recordings in pediatric wards | |
US9398873B2 (en) | Method of obtaining a desired state in a subject | |
CN103190905B (en) | Multi-channel surface electromyography signal collection system based on wireless fidelity (Wi-Fi) and processing method thereof | |
CN108461126A (en) | In conjunction with virtual reality(VR)The novel intelligent psychological assessment of technology and interfering system | |
JP2012524596A (en) | Nasal flow device controller | |
Jiang et al. | Objective acoustic analysis of pathological voices from patients with vocal nodules and polyps | |
US9801570B2 (en) | Auditory stimulus for auditory rehabilitation | |
WO2020118797A1 (en) | Prosthesis control method, apparatus, system and device, and storage medium | |
US9662266B2 (en) | Systems and methods for the predictive assessment and neurodevelopment therapy for oral feeding | |
CN104768588A (en) | Controlling coughing and swallowing | |
CN107802262A (en) | A kind of brain is electrically coupled the device that VR is used for the more dynamic attention deficit therapeutic intervention of children | |
Orlandi et al. | Effective pre-processing of long term noisy audio recordings: An aid to clinical monitoring | |
CN109195518A (en) | Nervous feedback system and method | |
KR20130121854A (en) | Simulator for learning tracheal intubation | |
AU2019204112A1 (en) | Localized collection of biological signals, cursor control in speech-assistance interface based on biological electrical signals and arousal detection based on biological electrical signals | |
CN110742603A (en) | Brain wave audible mental state detection method and system for realizing same | |
TWI418334B (en) | System for physiological signal and environmental signal detection, analysis and feedback | |
CN114917544B (en) | Visual method and device for assisting orbicularis oris muscle function training | |
Levy et al. | Smart cradle for baby using FN-M16P Module | |
CN103315767B (en) | Determining method and system for heart sound signals | |
TW202117683A (en) | Method for monitoring phonation and system thereof | |
KR20210076561A (en) | Recognition Training System For Preventing Dementia Using Virtual Reality Contents | |
KR102573959B1 (en) | Digital therapy system and method thereof | |
CN204814357U (en) | Blood oxygen monitoring snore relieving appearance | |
Dai et al. | Biologically-inspired auditory perception during robotic bone milling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||