CN114387849A - Language learning device and method for prelingually hearing-impaired people - Google Patents

Language learning device and method for prelingually hearing-impaired people

Info

Publication number
CN114387849A
CN114387849A (application CN202210188831.0A)
Authority
CN
China
Prior art keywords
user
tongue
sound
standard
learning
Prior art date
Legal status
Withdrawn
Application number
CN202210188831.0A
Other languages
Chinese (zh)
Inventor
刘悦
田奕江
邵兵
袁存梁
赵倩茹
Current Assignee
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date: 2022-02-28
Filing date: 2022-02-28
Publication date: 2022-04-22
Application filed by Shanghai Dianji University
Priority application: CN202210188831.0A
Publication: CN114387849A
Legal status: Withdrawn


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B19/04: Speaking

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a language learning device and method for prelingually hearing-impaired people. The device comprises a host panel together with a sound-vibration converter and an oral sensor that are electrically connected to the host panel; the host panel comprises a host screen for display and human-machine interaction and a camera. The sound-vibration converter converts the sound information output by the host panel into vibration and conducts it to the user's hands, the oral sensor collects the position of the tongue in the user's mouth while speaking, and the camera captures the user's mouth shape while speaking. The host screen simultaneously displays the difference between the user's utterance and the standard utterance, between the user's tongue position and the standard tongue position, and between the user's mouth shape and the standard mouth shape, thereby reminding the user to correct them. The language learning device and method for prelingually hearing-impaired people are simple, easy to wear, and effective for language learning.

Description

Language learning device and method for prelingually hearing-impaired people
Technical Field
The invention relates to the technical field of language learning for hearing-impaired people, and in particular to a language learning device and method for prelingually hearing-impaired people.
Background
According to the National Bureau of Statistics, people with hearing and speech disabilities in China, that is, deaf-mute people, currently rank first among the five major disability groups, ahead of people with visual, physical, intellectual, and other disabilities. Most of them are 'mute because they are deaf': unable to receive sound from the outside world, they lose the opportunity to learn to speak by imitating speech. This does not mean that they lack the ability to speak; they fail to learn to speak only because they cannot hear during the period in which speech would normally be acquired.
At present there is no machine or device on the market for speech training aimed at people with prelingual hearing loss. The most common products are hearing aids and cochlear implants, which help patients with weak or no hearing to perceive sound; otherwise, patients communicate in sign language learned through sign-language instruction, or teaching aids help teachers educate them. However, existing language learning aids address a narrow audience, are costly, and are inconvenient to carry, and sign-language-assisted learning offers little convenience for spoken communication.
The prior patent application No. CN202110987380.2 discloses a method that collects facial data from video, recognizes faces, and further recognizes the lip-reading mouth shapes of deaf-mute people using object detection techniques from computer vision; the results are bound to face IDs and the inference results are output, so that deaf-mute people can communicate with hearing people. However, that application only corrects the lip-reading mouth shape of deaf-mute people, and its language learning effect is poor.
Disclosure of Invention
In view of the above defects in the prior art, the object of the invention is to provide a language learning device and method for prelingually hearing-impaired people that is simple, easy to wear, and effective for language learning.
In order to solve the above problems, the technical solution of the invention is as follows:
the utility model provides a language learning equipment to learning before language auditory disorder crowd, equipment include host computer panel and with host computer panel electric connection's sound vibrations converter, oral cavity sensor, host computer panel is including being used for showing and man-machine interaction's host computer screen and camera, sound vibrations converter is arranged in converting host computer panel output sound information into vibrations information and conducts user's hand, oral cavity sensor is arranged in gathering the position of user tongue at speaking in-process oral cavity, catches the mouth type of user's speaking in-process through the camera, will show the difference of patient's pronunciation and standard pronunciation simultaneously on the host computer screen, the difference of patient's tongue position and standard tongue position, the difference of patient's mouth type and standard mouth type to remind the user to correct.
Optionally, the host panel further includes a communication/charging interface, a sound playback interface, a volume adjustment button, a power button, a sound-vibration converter interface, and an oral sensor interface.
Optionally, the sound-vibration converter comprises a left-hand sound-vibration converter and a right-hand sound-vibration converter, each comprising a palm grip, a thumb grip, an index-finger grip, a middle-finger grip, a ring-finger grip, and a little-finger grip.
Optionally, the device further includes a bone conduction sound amplifier in the form of bone conduction earphones, suitable for hearing-impaired users who have weak hearing or can only perceive sound through bone conduction. The earphones comprise a bone conduction contactor and a head-mounted fixture; the fixture is worn on the user's head and the contactor is clamped around the user's ears.
Optionally, the oral sensor comprises a front sublingual pressing plate, a rear sublingual pressing plate, a front upper-palate pressing plate, and a rear upper-palate pressing plate. Held by the adhesion of saliva in the mouth, the front sublingual pressing plate adheres to the front half of the tongue, the rear sublingual pressing plate to the rear half of the tongue, the front upper-palate pressing plate to the front half of the palate opposite the front sublingual pressing plate, and the rear upper-palate pressing plate to the rear half of the palate opposite the rear sublingual pressing plate.
Optionally, the oral sensor is made of food-grade silicone; the front sublingual pressing plate, rear sublingual pressing plate, front upper-palate pressing plate, and rear upper-palate pressing plate are all circuits wrapped in thin silicone films. Voltage is transferred between the upper and lower plates through mutual inductance of coils, and the distance between the plates is calculated from the voltage value to detect the position of the tongue in the mouth.
Further, the invention also provides a language learning method for prelingually hearing-impaired people, comprising the following steps:
guiding the user to imitate demonstrated forms so that the user learns vocal cord vibration;
detecting the position of the tongue with the oral sensor, comparing it with the standard tongue position, and feeding the comparison back to the user;
capturing the user's mouth shape with the camera, comparing it with the standard mouth shape, and correcting the changes of mouth shape during speaking;
with vocal cord vibration, a correct tongue position, and a correct mouth shape, the user produces coordinated pronunciation.
Optionally, guiding the user to imitate demonstrated forms so that the user learns vocal cord vibration specifically comprises: conducting the vibration of the device's sound output to the user, recording the sound the user makes while learning to vocalize, and then playing it back in the form of vibration, while the host screen displays the intensity difference between the user's utterance and the standard utterance, so that the user keeps correcting his or her own vocalization by comparison and approaches the standard sound.
Optionally, detecting the position of the tongue with the oral sensor, comparing it with the standard tongue position, and feeding the comparison back to the user specifically comprises: displaying a real-time tongue-position feedback map on the left side of the host screen and the standard tongue position for the selected sound on the right side, so that the user adjusts his or her tongue position against the standard until the device detects that the standard position has been reached.
Optionally, capturing the user's mouth shape with the camera, comparing it with the standard mouth shape, and correcting the changes of mouth shape during speaking comprises: the host camera photographs the user's mouth, recognizes key point positions, and converts them into a corresponding real-time two-dimensional diagram on the left side of the host screen; the standard mouth shape for the selected sound is displayed on the right side of the screen, and the user makes his or her own mouth shape match the standard by comparison.
Compared with the prior art, the invention uses the heightened visual and tactile sensitivity of hearing-impaired people to create a complete language learning scheme. It is an innovative product in the field of articles for deaf-mute people, can effectively alleviate the phenomenon of 'mute because deaf', and helps hearing-impaired people realize their dreams sooner.
The invention has the following beneficial effects:
1. The user is guided to imitate demonstrated forms to learn vocal cord vibration, ensuring that the vocal cord vibration of a pronunciation is learned first;
2. After the user's mouth shape is captured and recognized by the camera, the user's mouth shape and the standard mouth shape are displayed together on the host screen for comparison, so that the user can form the correct mouth shape against the standard and learn normal pronunciation;
3. The oral sensor collects the position of the tongue in the user's mouth during speaking, so that tongue movements can be learned by imitation just like limb movements, making full use of vision while the user's hearing is impaired;
4. The user can pronounce correctly only with vocal cord vibration, a correct mouth shape, and a correct tongue position; the invention therefore provides a complete course of study, a gradual easy-to-difficult learning mode, and guidance through primary, intermediate, and advanced stages to help the user speak actively.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic structural diagram of a language learning device for prelingually hearing-impaired people according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a host panel according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an acoustic vibration converter according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a bone conduction sound amplifier provided in accordance with an embodiment of the present invention;
FIG. 5 is a schematic view of an oral sensor provided in accordance with an embodiment of the present invention;
FIG. 6 is a schematic diagram of a screen welcome interface according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a screen test page according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating screen mode selection according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a primary mode of a screen according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a mid-level mode of a screen according to an embodiment of the present invention;
FIG. 11 is a diagram illustrating a high level mode of a screen according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a screen pronunciation teaching page according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a screen tongue position teaching page provided by an embodiment of the present invention;
FIG. 14 is a schematic view of a screen mouth correction page according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of a screen coordinated pronunciation page provided by an embodiment of the present invention;
FIG. 16 is a flowchart of a language learning method for prelingually hearing-impaired people according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention but do not limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Fig. 1 is a schematic structural diagram of a language learning device for prelingually hearing-impaired people provided by an embodiment of the present invention. As shown in Fig. 1, the device comprises a host panel 1, and a sound-vibration converter 2 and an oral sensor 4 electrically connected to the host panel 1. In an optional embodiment the device further comprises a bone conduction sound amplifier 3, intended for hearing-impaired users who have weak hearing or can perceive sound through bone conduction; users who cannot hear sound at all do not need the bone conduction sound amplifier 3.
As shown in Fig. 2, the host panel 1 further comprises a communication/charging interface 101, a sound output port 102 that allows the user's family members to hear the learning content, a volume adjustment button 103, a power button 104, a camera 105 for photographing the mouth shape, and a host screen 106 for display and human-machine interaction. In addition, the host panel 1 provides interfaces connected to the sound-vibration converter 2, the bone conduction sound amplifier 3, and the oral sensor 4, namely a sound-vibration converter interface 107, a bone conduction sound amplifier interface 108, and an oral sensor interface 109. The host panel 1 is responsible for processing and generating data signals, human-machine interaction, selection of a training scheme, display of data information, guiding and correcting user behaviour, executing the training scheme, and similar functions.
As shown in Fig. 3, the sound-vibration converter 2 converts the sound information output by the host panel 1 into vibration and conducts it to the user's hands to help the user learn vocal cord vibration. It is an external component gripped with both hands, through which the user feels the vibration converted from sound; it consists of a left-hand grip, the left-hand sound-vibration converter 201, and a right-hand grip, the right-hand sound-vibration converter 202. Taking the right-hand sound-vibration converter 202 as an example, it comprises a palm grip 203, a thumb grip 204, an index-finger grip 205, a middle-finger grip 206, a ring-finger grip 207, and a little-finger grip 208.
As shown in Fig. 4, the bone conduction sound amplifier 3 is a bone conduction earphone comprising a bone conduction contactor 301 and a head-mounted fixture 302. The user wears the amplifier on the head with the head-mounted fixture 302, and the front and rear lobes of the bone conduction contactor 301 are clamped in front of and behind the ears, resting on the bone there. The bone conduction earphone is intended only for hearing-impaired users who have weak hearing or can perceive sound through bone conduction; users who cannot hear sound at all do not need the bone conduction sound amplifier 3.
As shown in Fig. 5, the oral sensor 4, also called a tongue-position sampler, is made of food-grade silicone. The oral sensor 4 comprises a front sublingual pressing plate 401, a rear sublingual pressing plate 402, a front upper-palate pressing plate 403, and a rear upper-palate pressing plate 404, all of which are circuits wrapped in thin silicone films. Held by the adhesion of saliva, the front sublingual pressing plate 401 adheres to the front half of the tongue, the rear sublingual pressing plate 402 to the rear half of the tongue, the front upper-palate pressing plate 403 to the front half of the palate opposite the front sublingual pressing plate 401, and the rear upper-palate pressing plate 404 to the rear half of the palate opposite the rear sublingual pressing plate 402. Specifically, voltage is transferred between the upper and lower plates through mutual inductance of coils, and the distance between them is calculated from the voltage value to detect the position of the tongue in the mouth.
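Purely as an illustration of the distance calculation described above, the following sketch converts a measured plate voltage into an estimated tongue-to-palate gap. The inverse-proportional model, the calibration constants, and the function names are assumptions made for this example; the patent only states that the distance between the upper and lower plates is calculated from the voltage value.

```python
# Hypothetical sketch: estimate the tongue-to-palate gap from the voltage
# induced between a palate plate coil and the facing sublingual plate coil.
# The inverse-distance model and the calibration constants are assumptions;
# the patent only states that distance is computed from the voltage value.

def voltage_to_distance(v_measured: float, v_ref: float = 1.0,
                        d_ref: float = 5.0) -> float:
    """Return the estimated plate separation in millimetres.

    Assumes the induced voltage falls roughly in inverse proportion to the
    gap, calibrated so that v_ref volts corresponds to a d_ref mm gap.
    """
    if v_measured <= 0:
        raise ValueError("no induced voltage detected")
    return d_ref * v_ref / v_measured


def tongue_position(front_voltage: float, rear_voltage: float) -> dict:
    """Combine front and rear plate readings into a simple tongue posture."""
    return {
        "front_gap_mm": voltage_to_distance(front_voltage),
        "rear_gap_mm": voltage_to_distance(rear_voltage),
    }


# Example: a larger front voltage means the tongue tip is close to the palate,
# as when articulating an alveolar sound.
print(tongue_position(front_voltage=2.0, rear_voltage=0.5))
```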
Preparation before using the language learning device of the invention is as follows: the oral sensor is first placed in the mouth, and if the user has weak hearing or can perceive sound through bone conduction, the bone conduction sound amplifier 3 is connected to the host and worn on the user's head. The host is then switched on; as shown in Fig. 6, the test button is selected on the main interface to enter the test page shown in Fig. 7, which checks that the sound-vibration converter 2, the bone conduction sound amplifier 3, and the oral sensor 4 work normally. If they do, the device returns to the main interface and enters the normal working interface.
Mode selection: after entering the normal working interface, as shown in Fig. 8, a learning stage must be selected from the primary, intermediate, and advanced stages, each corresponding to a different level of learning. In the primary stage, shown in Fig. 9, the letters or sounds to be learned are selected; in the intermediate stage, shown in Fig. 10, characters and words to be learned can be selected; in the advanced stage, shown in Fig. 11, four-character expressions, short phrases, and simple sentences can be selected.
Learning process: in the primary stage, completing a learning target requires four phases: vocalization teaching, tongue-movement teaching, mouth-shape teaching, and coordinated pronunciation.
Vocalization teaching: as shown in Fig. 12, the device conducts the vibration of its sound output to the user, records the sound the user makes while learning to vocalize, and then plays it back in the same way, as vibration; at the same time, the screen displays the intensity difference between the user's utterance and the standard utterance, so that the user keeps correcting his or her own vocalization by comparison and approaches the standard sound. When the device recognizes that the user's utterance does not differ significantly from the standard utterance, it displays a pass.
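A minimal sketch of the intensity comparison in the vocalization stage follows. The RMS intensity measure, the pass threshold, and the NumPy representation of the recordings are assumptions; the patent only states that the device displays the intensity difference and shows a pass when the user's utterance does not differ significantly from the standard.

```python
import numpy as np

# Hypothetical sketch of the intensity comparison in the vocalization stage.
# The RMS measure and the 15 % pass threshold are assumptions; the patent
# only states that the intensity difference is displayed and a pass is shown
# when the user and standard utterances are close.

def rms_intensity(samples: np.ndarray) -> float:
    """Root-mean-square amplitude of a recorded utterance."""
    return float(np.sqrt(np.mean(np.square(samples.astype(np.float64)))))


def vocalization_feedback(user: np.ndarray, standard: np.ndarray,
                          tolerance: float = 0.15) -> tuple[float, bool]:
    """Return the relative intensity difference and whether it passes."""
    diff = abs(rms_intensity(user) - rms_intensity(standard))
    relative = diff / max(rms_intensity(standard), 1e-9)
    return relative, relative <= tolerance


# Example with synthetic signals standing in for microphone recordings.
t = np.linspace(0, 1, 16000)
standard = 0.8 * np.sin(2 * np.pi * 220 * t)
user = 0.7 * np.sin(2 * np.pi * 220 * t)
difference, passed = vocalization_feedback(user, standard)
print(f"relative intensity difference: {difference:.2f}, pass: {passed}")
```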
Tongue-movement teaching: as shown in Fig. 13, the device then enters the tongue-movement teaching interface, where the pronunciation process of the selected letter or character is presented. During pronunciation the tongue and mouth shape generally go through a process of change rather than holding one fixed posture; only very simple sounds can be produced with a fixed mouth shape and tongue position.
Mouth-shape teaching: as shown in Fig. 14, the user enters the mouth-shape teaching interface, where the screen is divided into left and right halves. The host camera photographs the user's mouth and recognizes key point positions, which are converted into a corresponding real-time two-dimensional diagram on the left side of the screen, while the right side displays the standard mouth shape for the selected sound. The user makes his or her mouth shape match the standard by comparison, and during this process the host also prompts the user about which key positions have not reached the standard positions.
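The key-point comparison in the mouth-shape stage can be sketched as follows. The choice of landmarks (mouth corners and lip midpoints), the normalization by mouth width, and the per-point tolerance are illustrative assumptions; the patent only states that key points are detected from the camera image, shown as a real-time two-dimensional diagram, and compared with the standard mouth shape, with non-conforming key positions prompted to the user.

```python
import numpy as np

# Hypothetical sketch of comparing detected mouth key points with a standard
# template. Landmark names, normalization by mouth width, and the tolerance
# are assumptions for illustration; any face-landmark detector could supply
# the (x, y) coordinates.

MOUTH_POINTS = ["left_corner", "right_corner", "upper_lip_mid", "lower_lip_mid"]


def normalise(points: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Express key points relative to the mouth centre, scaled by mouth width."""
    centre = (points["left_corner"] + points["right_corner"]) / 2
    width = np.linalg.norm(points["right_corner"] - points["left_corner"])
    return {name: (p - centre) / width for name, p in points.items()}


def mouth_shape_feedback(detected: dict[str, np.ndarray],
                         standard: dict[str, np.ndarray],
                         tolerance: float = 0.08) -> list[str]:
    """Return the key points that deviate from the standard mouth shape."""
    det, std = normalise(detected), normalise(standard)
    return [name for name in MOUTH_POINTS
            if np.linalg.norm(det[name] - std[name]) > tolerance]


# Example: the lower lip is not opened wide enough for the selected sound.
detected = {"left_corner": np.array([100., 200.]), "right_corner": np.array([160., 200.]),
            "upper_lip_mid": np.array([130., 190.]), "lower_lip_mid": np.array([130., 212.])}
standard = {"left_corner": np.array([0., 0.]), "right_corner": np.array([1., 0.]),
            "upper_lip_mid": np.array([0.5, -0.18]), "lower_lip_mid": np.array([0.5, 0.35])}
print(mouth_shape_feedback(detected, standard))  # -> ['lower_lip_mid']
```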
Coordinated pronunciation: as shown in Fig. 15, the device enters the coordinated pronunciation phase. Because vocalization, mouth shape, and tongue position occur simultaneously when speaking, the user now practises the three previously learned aspects together. The screen simultaneously displays the difference between the user's utterance and the standard utterance, between the user's tongue position and the standard tongue position, and between the user's mouth shape and the standard mouth shape, so that the user knows which details of the three key aspects are still lacking during pronunciation and can correct them. The system automatically recognizes the differences in all three aspects, summarizes them, and prompts the user in graphic form.
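One possible way to summarize the three comparisons for the user is sketched below; the message wording and the boolean inputs are assumptions, since the patent only states that the system recognizes the differences in the three aspects and prompts the user in graphic form.

```python
# Hypothetical sketch of the coordinated-pronunciation summary. The three
# boolean checks are assumed to come from the vocalization, tongue-position
# and mouth-shape modules described above; the messages are illustrative.

def coordination_summary(voicing_ok: bool, tongue_ok: bool, mouth_ok: bool) -> str:
    """Summarize which of the three key aspects still need correction."""
    issues = []
    if not voicing_ok:
        issues.append("vocal-cord vibration does not match the standard intensity")
    if not tongue_ok:
        issues.append("tongue position has not reached the standard position")
    if not mouth_ok:
        issues.append("mouth shape deviates from the standard mouth shape")
    if not issues:
        return "Coordinated pronunciation passed."
    return "Please correct: " + "; ".join(issues)


print(coordination_summary(voicing_ok=True, tongue_ok=False, mouth_ok=True))
```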
Comparison of the primary stage with the intermediate and advanced stages: the primary stage mainly covers letter or character sounds whose mouth shape and tongue position do not change during vocalization, and mainly familiarizes the user with the teaching procedure. Vocalization teaching appears only in the primary stage because it is common to all sounds.
The intermediate stage builds on the primary stage and teaches sounds that can only be produced by changing the mouth and tongue positions. The tongue teaching interface first plays the process by which the tongue changes during pronunciation and asks the user to imitate the tongue movement shown on the screen; the device detects the user's actual movement through the oral sensor to confirm whether it reaches the standard, and once it does, the mouth-shape teaching interface is entered, which similarly detects whether the mouth movement reaches the standard. Because the mouth shape and tongue position form a sequence of changes rather than a single fixed posture, coordinating the tongue and lip movements is particularly important in intermediate teaching: the coordinated pronunciation interface displays not only the standard tongue position with the corresponding mouth shape but also an animation of how they change during pronunciation, which improves the learning effect.
The advanced stage teaches phrases and sentences after the pronunciation of individual letters or Chinese characters has been learned; it involves more variation and greater difficulty. In the tongue teaching interface the user can choose to play an animation of the complete tongue movement or the tongue movement of a single character and can control the playback speed, entirely according to how well the user has mastered the tongue movements. Once the device detects that the tongue movement meets the requirement, mouth-shape teaching proceeds in a similar way. After the tongue positions and mouth shapes have been learned one by one, the required coordinated pronunciation becomes more complex: the coordinated pronunciation interface prompts the user which specific mouth-shape and tongue movements should be matched, the user combines them according to the prompt, and the system checks whether the combination is correct. If it meets the standard, comprehensive whole-sentence pronunciation practice continues; the user is prompted about which character or word was pronounced incorrectly and can select it for separate practice, until the pronunciation of the whole sentence is mastered.
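As a rough illustration of how the three stages and their teaching phases could be organized in software, the following configuration sketch lists the stage contents described above; the field names and the concrete example items (such as the pinyin vowels) are assumptions, not content prescribed by the patent.

```python
# Hypothetical curriculum configuration for the three learning stages.
# The stage names and the fact that vocalization teaching appears only in
# the primary stage follow the description; the example content entries
# and field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    content: list  # letters, characters, words, or sentences to practise
    phases: list = field(default_factory=lambda: [
        "tongue movement", "mouth shape", "coordinated pronunciation"])

CURRICULUM = [
    Stage("primary", ["a", "o", "e"],
          phases=["vocalization", "tongue movement", "mouth shape",
                  "coordinated pronunciation"]),
    Stage("intermediate", ["characters and words"]),
    Stage("advanced", ["four-character expressions", "short phrases",
                       "simple sentences"]),
]

for stage in CURRICULUM:
    print(stage.name, "->", ", ".join(stage.phases))
```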
Fig. 16 is a flowchart of the language learning method for prelingually hearing-impaired people provided by an embodiment of the present invention. As shown in Fig. 16, the method comprises the following steps:
s1: enabling the user to learn vocal cord vibration by guiding the user to imitate the form;
specifically, give the user through sound vibrations converter with equipment sound production conduction vibrations to can record the sound that the user learnt the sound production, play out through the mode of vibrations afterwards equally, the intensity difference of host computer screen display patient's sound production and standard sound production simultaneously lets the patient draw close to standard sound through the continuous vocal of correcting oneself of contrast, when equipment discerns that patient's sound production and standard sound production difference are not big, equipment display passes through.
S2: detecting the position of the tongue with the oral sensor, comparing it with the standard tongue position, and feeding the comparison back to the user;
Specifically, the oral sensor detects the positions of the front and rear parts of the user's tongue, and the specific position information is fed back clearly to the user so that the user can adjust the movement appropriately.
On the tongue teaching interface, the host screen is divided into left and right halves: the figure on the left is a real-time tongue-position feedback map that updates as the tongue position changes, and the right shows the standard tongue position for the selected sound. The user adjusts his or her tongue position against the standard until the device detects that the standard position has been reached.
S3: capturing the user's mouth shape with the camera, comparing it with the standard mouth shape, and correcting the changes of mouth shape during speaking;
Specifically, the host camera photographs the user's mouth, recognizes key point positions, and converts them into a corresponding real-time two-dimensional diagram on the left side of the host screen, while the right side displays the standard mouth shape for the selected sound. The user makes his or her own mouth shape match the standard by comparison; during this process the host also prompts the user about which key positions have not reached the standard positions, ensuring that the user can adjust the movements and mouth shape according to the standard.
S4: with vocal cord vibration, a correct tongue position, and a correct mouth shape, the user produces coordinated pronunciation.
Specifically, because a sound is produced by vocalization, mouth shape, and tongue position acting simultaneously, the host screen displays at the same time the difference between the user's utterance and the standard utterance, between the user's tongue position and the standard tongue position, and between the user's mouth shape and the standard mouth shape. The user thus knows which details of the three key aspects were not achieved during pronunciation and can correct them. At the same time, the system automatically recognizes the differences in the three aspects, summarizes them, and prompts the user in graphic form; the machine also records the user's pronunciation, compares it with the standard pronunciation, and lets the user feel the difference through the sound-vibration converter.
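To tie steps S1 to S4 together, the sketch below shows one way a training loop for a single selected sound could be driven; the callback signatures, the retry limit, and the printed messages are assumptions rather than the patent's implementation.

```python
# Hypothetical driver for the four-step method (S1-S4). Each stage callback
# is assumed to return True once the device detects that the user has reached
# the standard; their concrete implementations would correspond to the
# vocalization, tongue-position and mouth-shape modules sketched earlier.

from typing import Callable

def run_lesson(vocalize: Callable[[], bool],
               tongue: Callable[[], bool],
               mouth: Callable[[], bool],
               coordinate: Callable[[], bool],
               max_attempts: int = 10) -> bool:
    """Run S1-S4 for one selected sound, repeating each stage until it passes."""
    for name, stage in (("S1 vocal-cord vibration", vocalize),
                        ("S2 tongue position", tongue),
                        ("S3 mouth shape", mouth),
                        ("S4 coordinated pronunciation", coordinate)):
        for attempt in range(1, max_attempts + 1):
            if stage():
                print(f"{name}: passed after {attempt} attempt(s)")
                break
        else:
            print(f"{name}: not yet at standard, keep practising")
            return False
    return True


# Example with stub stages that pass immediately.
print(run_lesson(lambda: True, lambda: True, lambda: True, lambda: True))
```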
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A language learning device for prelingually hearing-impaired people, characterized in that the device comprises a host panel and a sound-vibration converter and an oral sensor electrically connected to the host panel; the host panel comprises a host screen for display and human-machine interaction and a camera; the sound-vibration converter is configured to convert the sound information output by the host panel into vibration and conduct it to the user's hands; the oral sensor is configured to collect the position of the tongue in the user's mouth while speaking; the camera captures the user's mouth shape while speaking; and the host screen simultaneously displays the difference between the user's utterance and the standard utterance, between the user's tongue position and the standard tongue position, and between the user's mouth shape and the standard mouth shape, thereby reminding the user to correct them.
2. The language learning device for prelingually hearing-impaired people according to claim 1, wherein the host panel further comprises a communication/charging interface, a sound playback interface, a volume adjustment button, a power button, a sound-vibration converter interface, and an oral sensor interface.
3. The language learning device for prelingually hearing-impaired people according to claim 1, wherein the sound-vibration converter comprises a left-hand sound-vibration converter and a right-hand sound-vibration converter, each comprising a palm grip, a thumb grip, an index-finger grip, a middle-finger grip, a ring-finger grip, and a little-finger grip.
4. The language learning device for prelingually hearing-impaired people according to claim 1, wherein the device further comprises a bone conduction sound amplifier in the form of a bone conduction earphone, suitable for hearing-impaired users who have weak hearing or can only perceive sound through bone conduction; the bone conduction earphone comprises a bone conduction contactor and a head-mounted fixture, the head-mounted fixture is worn on the user's head, and the bone conduction contactor is clamped in front of and behind the user's ears.
5. The language learning device for prelingually hearing-impaired people according to claim 1, wherein the oral sensor comprises a front sublingual pressing plate, a rear sublingual pressing plate, a front upper-palate pressing plate, and a rear upper-palate pressing plate; the front sublingual pressing plate adheres to the front half of the tongue, the rear sublingual pressing plate to the rear half of the tongue, the front upper-palate pressing plate to the front half of the palate opposite the front sublingual pressing plate, and the rear upper-palate pressing plate to the rear half of the palate opposite the rear sublingual pressing plate.
6. The language learning device for prelingually hearing-impaired people according to claim 5, wherein the oral sensor is made of food-grade silicone, the front sublingual pressing plate, rear sublingual pressing plate, front upper-palate pressing plate, and rear upper-palate pressing plate are all circuits wrapped in thin silicone films, voltage is transferred between the upper and lower plates through mutual inductance of coils, and the distance between the upper and lower plates is calculated from the voltage value to detect the position of the tongue in the mouth.
7. A language learning method for prelingually hearing-impaired people, characterized in that the method comprises the following steps:
guiding the user to imitate demonstrated forms so that the user learns vocal cord vibration;
detecting the position of the tongue with the oral sensor, comparing it with the standard tongue position, and feeding the comparison back to the user;
capturing the user's mouth shape with the camera, comparing it with the standard mouth shape, and correcting the changes of mouth shape during speaking;
with vocal cord vibration, a correct tongue position, and a correct mouth shape, the user produces coordinated pronunciation.
8. The language learning method for prelingually hearing-impaired people according to claim 7, wherein guiding the user to imitate demonstrated forms so that the user learns vocal cord vibration specifically comprises: conducting the vibration of the device's sound output to the user, recording the sound the user makes while learning to vocalize, and then playing it back in the form of vibration, while the host screen displays the intensity difference between the user's utterance and the standard utterance, so that the user keeps correcting his or her own vocalization by comparison and approaches the standard sound.
9. The language learning method for prelingually hearing-impaired people according to claim 7, wherein detecting the position of the tongue with the oral sensor, comparing it with the standard tongue position, and feeding the comparison back to the user specifically comprises: displaying a real-time tongue-position feedback map on the left side of the host screen and the standard tongue position for the selected sound on the right side, so that the user adjusts his or her tongue position against the standard until the device detects that the standard position has been reached.
10. The language learning method for prelingually hearing-impaired people according to claim 7, wherein capturing the user's mouth shape with the camera, comparing it with the standard mouth shape, and correcting the changes of mouth shape during speaking comprises: the host camera photographs the user's mouth, recognizes key point positions, and converts them into a corresponding real-time two-dimensional diagram on the left side of the host screen; the standard mouth shape for the selected sound is displayed on the right side of the screen, and the user makes his or her own mouth shape match the standard by comparison.
CN202210188831.0A, filed 2022-02-28 (priority date 2022-02-28): Language learning device and method for prelingually hearing-impaired people; published as CN114387849A; status: Withdrawn.

Priority Applications (1)

Application Number: CN202210188831.0A
Priority Date: 2022-02-28
Filing Date: 2022-02-28
Title: Language learning device and method for prelingually hearing-impaired people

Publications (1)

Publication Number: CN114387849A
Publication Date: 2022-04-22

Family

ID=81204792

Family Applications (1)

Application Number: CN202210188831.0A (Withdrawn)
Title: Language learning device and method for prelingually hearing-impaired people
Priority Date: 2022-02-28
Filing Date: 2022-02-28

Country Status (1)

Country: CN (China)
Publication: CN114387849A

Similar Documents

Publication Publication Date Title
US6644973B2 (en) System for improving reading and speaking
JP4439740B2 (en) Voice conversion apparatus and method
US5340316A (en) Synthesis-based speech training system
Rosenblum et al. An audiovisual test of kinematic primitives for visual speech perception.
Summerfield Lipreading and audio-visual speech perception
CN1679371B (en) Microphone and communication interface system
CN107112026A (en) System, the method and apparatus for recognizing and handling for intelligent sound
US8423368B2 (en) Biofeedback system for correction of nasality
CN104537925B (en) Language barrier child language training auxiliary system and method
Nickerson et al. Teaching speech to the deaf: Can a computer help
JP3670180B2 (en) hearing aid
WO2015099464A1 (en) Pronunciation learning support system utilizing three-dimensional multimedia and pronunciation learning support method thereof
CN108320625A (en) Vibrational feedback system towards speech rehabilitation and device
CN113658584A (en) Intelligent pronunciation correction method and system
CN106572818B (en) Auditory system with user specific programming
de Vargas et al. Haptic speech communication using stimuli evocative of phoneme production
De Filippo Laboratory projects in tactile aids to lipreading
US20210319715A1 (en) Information processing apparatus, information processing method, and program
CN114387849A Language learning device and method for prelingually hearing-impaired people
Rathinavelu et al. Three dimensional articulator model for speech acquisition by children with hearing loss
Fletcher et al. Speech modification by a deaf child through dynamic orometric modeling and feedback
CN113192369A (en) Self-test and self-correction feedback system for spoken language training and application method thereof
CN101409022A (en) Language learning system with mouth shape comparison and method thereof
Athanasopoulos et al. King's speech: pronounce a foreign language with style
CN113593374A (en) Multi-modal speech rehabilitation training system combining oral muscle training

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication

Application publication date: 2022-04-22