WO2023071155A1 - Hearing aid control method, apparatus, hearing aid device, and storage medium - Google Patents

Hearing aid control method, apparatus, hearing aid device, and storage medium Download PDF

Info

Publication number
WO2023071155A1
Authority
WO
WIPO (PCT)
Prior art keywords
hearing aid
user
hearing
voice information
information
Prior art date
Application number
PCT/CN2022/093543
Other languages
English (en)
French (fr)
Inventor
梁祥龙
吴斐
陆希炜
张立
李然
程志远
张建明
娄身强
Original Assignee
北京亮亮视野科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京亮亮视野科技有限公司
Publication of WO2023071155A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/30 Monitoring or testing of hearing aids, e.g. functioning, settings, battery power
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10 Transforming into visible information
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B2027/0178 Eyeglass type
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/13 Hearing devices using bone conduction transducers

Definitions

  • the present disclosure relates to the technical field of wearable devices, and in particular to a hearing aid control method and device, a hearing aid device, and a computer-readable storage medium.
  • Current hearing aid devices can combine wearable-device functions to collect sound from the user's gaze area. However, the core of this hearing aid solution is still to amplify the collected sound and play it back to the hearing-impaired user; because that is ineffective for users with severe deafness or complete hearing loss, the solution has clear limitations.
  • The purpose of the present disclosure is to provide a hearing aid control method and device, a hearing aid device, and a computer-readable storage medium, which can at least to some extent mitigate the limited applicability of hearing aid solutions in the related art.
  • According to one aspect of the present disclosure, a hearing aid control method is provided, applied to a hearing aid device that includes AR glasses together with a sound collection module and an in-ear broadcast module arranged on the AR glasses, where the sound collection module is used to collect voice and the in-ear broadcast module is used to play audio.
  • The hearing aid control method includes: playing hearing detection audio, obtaining from the user wearing the hearing aid device a feedback signal based on the hearing detection audio, and determining the user's hearing evaluation result from the feedback signal; when the hearing evaluation result indicates that the hearing aid device needs to perform a display operation, collecting voice information that requires auxiliary processing; and, when the voice information is collected, converting it into text information and displaying the text information in the window area of the AR glasses.
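The control flow claimed above can be sketched as a short loop. This is only a reading of the claim, not an implementation from the patent; every name is illustrative and the callables stand in for the device's modules (audio playback, feedback capture, evaluation, speech collection, speech-to-text, and the AR display):

```python
def hearing_aid_flow(play_audio, get_feedback, evaluate, needs_display,
                     collect_voice, to_text, display):
    """Sketch of the claimed loop: detect hearing first, then assist visually."""
    play_audio()                           # play the hearing detection audio
    evaluation = evaluate(get_feedback())  # hearing evaluation from the feedback signal
    if needs_display(evaluation):          # does the device need a display operation?
        voice = collect_voice()            # voice info requiring auxiliary processing
        display(to_text(voice))            # show the text in the AR glasses window area
    return evaluation
```

Wiring it with stub callables shows the branch: a severe-impairment evaluation triggers the speech-to-text display, a mild one does not.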
  • The hearing aid device further includes a bone conduction vibration sensor placed on the AR glasses, which can be in contact with the bone area of the user's head. Collecting the voice information, converting it into text information, and displaying the text in the window area of the AR glasses specifically includes: collecting the voice information and detecting the user's vocal cord vibration signal with the bone conduction vibration sensor; analyzing the vocal cord vibration signal with a feature comparison model, so as to judge from the detection result whether the user is the sound source of the voice information; and determining a corresponding rendering method from the judgment result, so that when the voice information is converted into text information, the text information is displayed in the window area of the AR glasses using that rendering method, where different rendering methods are configured based on at least one of color, font, display scale, and display speed.
  • Determining the corresponding rendering method from the judgment result, so that when the voice information is converted into text information the text is displayed in the window area of the AR glasses using that rendering method, specifically includes: when it is determined that the user is not the sound source of the voice information, performing the display operation using a first rendering method; and when it is determined that the user is the sound source of the voice information, performing the display operation using a second rendering method, where, when the text information is displayed using the second rendering method, feedback information from the user is received so that the user's pronunciation level can be determined from it.
  • Converting the voice information into text information and displaying the text information in the window area of the AR glasses specifically includes: determining the sound source direction of the voice information; performing face recognition on the window-area image of the AR glasses based on the sound source direction, so as to identify the sounding object of the voice information; and converting the voice information into text information and displaying it in the window area of the hearing aid device corresponding to the sounding object.
  • Converting the speech information into text information and displaying it in the window area of the AR glasses further includes: detecting the spectral parameters of the speech information; distinguishing the gender of the sound source of the voice information from the spectral parameters; determining a corresponding rendering method based on the gender of the sound source; and determining the display style of the text information from that rendering method and displaying it in the window area.
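The patent does not say which spectral parameters distinguish gender; a common proxy is the fundamental frequency (F0) of voiced speech. The sketch below estimates F0 with a plain autocorrelation and applies an illustrative 165 Hz cut-off, both of which are assumptions, not the patent's model:

```python
import numpy as np

def fundamental_freq(signal: np.ndarray, sr: int) -> float:
    """Rough F0 estimate via autocorrelation (illustrative only)."""
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Search for the strongest peak within a plausible pitch range (60-400 Hz).
    lo, hi = sr // 400, sr // 60
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def guess_gender(f0: float) -> str:
    # Typical adult F0: roughly 85-180 Hz (male), 165-255 Hz (female);
    # 165 Hz is an illustrative cut-off, not a claim from the patent.
    return "female" if f0 >= 165 else "male"
```

On synthetic tones the estimator recovers the pitch to within a few hertz; real speech would need voicing detection and smoothing first.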
  • Converting the speech information into text information and displaying it in the window area of the AR glasses further includes: detecting the distance to the sounding object based on visual features of the sounding object, and synchronously adjusting the size of the text box of the text information according to the detected distance.
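The patent leaves the direction of the adjustment open. One plausible policy, sketched below under that assumption, scales the caption with the speaker's apparent size (nearer speakers get larger text) and clamps the scale to a usable range:

```python
def text_box_scale(distance_m: float, ref_distance_m: float = 1.0,
                   min_scale: float = 0.5, max_scale: float = 2.0) -> float:
    """Caption scale as a function of speaker distance (illustrative policy:
    nearer speakers get larger captions, clamped to [min_scale, max_scale])."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return max(min_scale, min(max_scale, ref_distance_m / distance_m))
```

A speaker at half the reference distance gets double-size text; beyond four times the reference distance the scale bottoms out at the minimum.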
  • Collecting the voice information, converting it into text information, and displaying the text in the window area of the AR glasses further includes: when the collected voice information is detected to be voice information to be translated, calling a translation model for the target language, translating the voice information to be translated, and obtaining the translated text; and displaying the translated text as the text information in the window area of the AR glasses.
  • Playing the hearing detection audio to obtain, from the user wearing the hearing aid device, a feedback signal based on the hearing detection audio, and determining the user's hearing evaluation result from the feedback signal, specifically includes: displaying a hearing test image in the window of the AR glasses, the image containing multiple groups of different long-short combination graphics and the character corresponding to each group; in one evaluation, playing at least one of multiple groups of long and short tones at a specified sound volume and/or sound tone as the hearing detection audio, each group of tones corresponding to one group of long-short combination graphics; receiving the user's recognition result for the long and short tones as the feedback signal; and determining the user's hearing evaluation result from that feedback, where the sound volume includes bass, middle, and treble, the sound tone includes low frequency, middle frequency, and high frequency, and multiple evaluations are performed at different sound volumes and/or different sound tones.
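The patent does not give the exact long-short encodings for the test characters, so the Morse-like digit patterns below are an assumption; the synthesis itself just renders each tick as a sine burst at the requested test frequency and volume:

```python
import numpy as np

# Illustrative encoding: each digit maps to a group of long/short ticks
# (Morse-style; the exact combinations are NOT specified in the patent).
DIGIT_PATTERNS = {
    1: ".-", 2: "..-", 3: "...-", 4: "....-", 5: ".....",
    6: "-....", 7: "--...", 8: "---..", 9: "----.", 0: "-----",
}

def render_pattern(pattern: str, freq_hz: float, volume: float,
                   sr: int = 16000) -> np.ndarray:
    """Synthesise one long/short tick group at the given test frequency/volume."""
    short, long_, gap = 0.1, 0.3, 0.05   # tick and gap durations in seconds
    chunks = []
    for sym in pattern:
        dur = long_ if sym == "-" else short
        t = np.arange(int(sr * dur)) / sr
        chunks.append(volume * np.sin(2 * np.pi * freq_hz * t))
        chunks.append(np.zeros(int(sr * gap)))  # silence between ticks
    return np.concatenate(chunks)
```

Sweeping `freq_hz` over 250/500/1000 Hz and `volume` over the 30/60/90 dB presentation levels would reproduce the multi-evaluation scheme described above.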
  • Receiving the user's feedback result on the recognition of the long and short tones specifically includes: after playing the tones, displaying in the window a correct option and an incorrect option for the character corresponding to the tones; and receiving the user's selection between the correct and incorrect options and taking the selection result as the feedback result.
  • Receiving the user's feedback result on the recognition of the long and short tones specifically includes: collecting the user's spoken identification of the character corresponding to the tones, and taking that speech as the feedback result.
  • Determining the user's hearing evaluation result from the feedback result specifically includes: determining the character the user fed back; checking whether the feedback character is correct; and, from the check result, evaluating the volume zone the user can recognize and the user's type of tone loss as the user's hearing evaluation result.
  • Before collecting the voice information requiring auxiliary processing when the hearing evaluation result indicates that a display operation is needed, the method further includes: when the recognizable volume zone is the first volume zone, performing a sound amplification operation on the collected voice information; when it is the second volume zone, performing both the sound amplification operation and the display operation on the collected voice information; and when it is the third volume zone, performing the display operation on the collected voice information.
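The three cases above form a simple lookup from volume zone to hearing-aid operations; the sketch below encodes exactly that mapping (zone numbering 1 to 3 is illustrative):

```python
def aid_actions(volume_zone: int) -> set:
    """Map the user's recognizable volume zone to the operations described
    above (zones are illustrative integers 1..3)."""
    return {
        1: {"amplify"},             # slight impairment: sound amplification only
        2: {"amplify", "display"},  # moderate: amplification plus text display
        3: {"display"},             # severe: text display only
    }[volume_zone]
```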
  • The sound amplification operation on the collected voice information specifically includes: detecting the intensity parameter and frequency parameter of the voice information, and using a dynamic amplifier to automatically adjust the gain so that the intensity parameter and frequency parameter are brought into a comfortable listening range.
  • The sound amplification operation on the collected voice information further includes: when the user is detected to have tone loss, performing a compensation operation for the missing frequencies on the amplified voice information according to the user's type of tone loss.
  • According to another aspect, a hearing aid control device is provided, including: a detection module configured to play hearing detection audio, obtain from the user wearing the hearing aid device a feedback signal based on that audio, and determine the user's hearing evaluation result from the feedback signal; a collection module configured to collect voice information requiring auxiliary processing when the hearing evaluation result indicates that the hearing aid device needs to perform a display operation; and a display module configured to convert the collected voice information into text information and display the text in the window area of the AR glasses.
  • According to another aspect, a hearing aid device is provided, including: AR glasses; an in-ear broadcast module arranged on the AR glasses and used to play hearing detection audio; a processor configured to obtain, from the user wearing the hearing aid device, a feedback signal based on the hearing detection audio and to determine the user's hearing evaluation result from the feedback signal; and a sound collection module arranged on the AR glasses and used to collect voice information requiring auxiliary processing when the hearing evaluation result indicates that a display operation is needed. The AR glasses are also used to convert the collected voice information into text information and display the text information in their window area.
  • The hearing aid device further includes a bone conduction vibration sensor arranged on the AR glasses, which can be in contact with the user's vocal cord area and is used to detect the user's vocal cord vibration signal. The processor is also used to analyze the vocal cord vibration signal with a feature comparison model to determine whether the user is the sound source of the voice information, and to convert the voice information into text information when the user is determined not to be the sound source.
  • According to another aspect, a hearing aid device is provided, including: a processor; and a memory for storing executable instructions of the processor, where the processor is configured to perform the hearing aid control method of the above aspect by executing the executable instructions.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, any one of the above-mentioned hearing aid control methods is implemented.
  • The hearing aid control scheme detects in advance the hearing level of the user wearing the hearing aid device, so that when the hearing level shows that a hearing aid operation is needed, the collected voice information is converted into text information and displayed in the window area of the hearing aid device's AR glasses. On the one hand, detecting the wearer's hearing level and deciding from it whether to perform the hearing aid operation ensures the reliability of that operation. On the other hand, it shifts hearing assistance from the auditory to the visual channel, improving the effect of the hearing aid operation.
  • Fig. 1 shows a schematic structural diagram of a hearing aid device in an embodiment of the present disclosure.
  • Fig. 2 shows a flowchart of a hearing aid control method in an embodiment of the present disclosure.
  • Fig. 3 shows a flowchart of another hearing aid control method in an embodiment of the present disclosure.
  • Fig. 4 shows a flowchart of another hearing aid control method in an embodiment of the present disclosure.
  • Fig. 5 shows a schematic diagram of a hearing detection character in an embodiment of the present disclosure.
  • Fig. 6 shows a flowchart of another hearing aid control method in an embodiment of the present disclosure.
  • Fig. 7 shows a flowchart of another hearing aid control method in an embodiment of the present disclosure.
  • Fig. 8 shows a flowchart of another hearing aid control method in an embodiment of the present disclosure.
  • Fig. 9 shows a flowchart of another hearing aid control method in an embodiment of the present disclosure.
  • Fig. 10 shows a schematic diagram of a hearing aid control device in an embodiment of the present disclosure.
  • Fig. 11 shows a schematic diagram of a hearing aid device in an embodiment of the present disclosure.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments may, however, be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of example embodiments to those skilled in the art.
  • the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • a hearing aid device including:
  • the in-ear broadcasting module 104 arranged on the AR glasses 102 is used for playing hearing detection audio.
  • the processor (not shown in the figure) is configured to acquire a feedback signal based on hearing detection audio of the user wearing the hearing aid device, and determine a hearing evaluation result of the user based on the feedback signal.
  • the sound collection module 106 provided on the AR glasses 102 is configured to collect voice information requiring auxiliary processing when it is determined based on the hearing assessment result that the hearing aid device needs to perform a display operation.
  • The AR glasses 102 are also used to convert the voice information into text information when the voice information is collected, and to display the text information in the window area of the AR glasses 102.
  • the bone conduction vibration sensor 108 disposed on the AR glasses 102 can be in contact with the user's vocal cord area, and the bone conduction vibration sensor is used to detect the user's vocal cord vibration signal.
  • the processor is also used for: detecting the vocal cord vibration signal based on the feature comparison model to determine whether the user is the sound source of the voice information.
  • the processor is also used for converting the voice information into text information when it is determined that the user is not the sound source of the voice information.
  • The image acquisition module 110 arranged on the AR glasses 102 is used to capture images within the window range.
  • the processor is also used for: performing face recognition on the window area image of the AR glasses 102 based on the direction of the sound source, so as to identify the sounding object of the voice information;
  • the processor is also used for: converting voice information into text information, and displaying the text information on the window area corresponding to the sounding object on the hearing aid device.
  • Fig. 2 shows a flowchart of a hearing aid control method in an embodiment of the present disclosure.
  • the hearing aid device implements the hearing aid control method.
  • the hearing aid device includes AR glasses, and a sound collection module and an in-ear broadcast module arranged on the AR glasses.
  • There may be one or more sound collection modules; the sound collection module is used to collect voice, and the in-ear broadcast module is used to play audio. The method includes the following steps:
  • Step S202: play the hearing detection audio to obtain, from the user wearing the hearing aid device, a feedback signal based on the hearing detection audio, and determine the user's hearing evaluation result from the feedback signal.
  • A feedback signal responding to the detection audio is received from the user wearing the hearing aid device; the feedback signal may be the user's voice signal, an eye-blink signal, or a touch signal on a designated area of the hearing aid device.
  • the hearing detection audio can be played through the in-ear broadcasting module 104 .
  • Step S204: when it is determined from the hearing evaluation result that the hearing aid device needs to perform a display operation, collect voice information requiring auxiliary processing.
  • the user wearing the hearing aid device is a hearing-impaired user
  • the hearing impairment includes but not limited to intensity loss and/or tone loss.
  • the intensity is divided into a high-intensity area, a medium-intensity area, and a low-intensity area
  • the tone is divided into a high-frequency area, a middle-frequency area, and a low-frequency area.
  • The intensity of a sound is its loudness.
  • The loudness of a sound (often called volume) is determined by the amplitude and the distance from the sound source: the larger the amplitude and the shorter the distance between the sound source and the listener, the louder the sound.
  • The pitch of a sound (treble versus bass) is determined by the frequency: the higher the frequency, the higher the pitch. Frequency is measured in hertz (Hz), and the human hearing range is roughly 20 to 20000 Hz.
  • Sound below 20 Hz is called infrasound, and sound above 20000 Hz is called ultrasound.
  • Step S206: when the voice information is collected, convert it into text information and display the text information in the window area of the AR glasses.
  • On the one hand, detecting the wearer's hearing level and deciding from it whether to perform a display-based hearing aid operation ensures the reliability of the hearing aid operation. On the other hand, converting the collected voice information into text information and displaying it in the window area of the hearing aid device's AR glasses provides hearing assistance through the visual channel, improving the effect of the hearing aid operation.
  • the hearing aid device further includes a bone conduction vibration sensor placed on the AR glasses.
  • The bone conduction vibration sensor can specifically be a bone conduction microphone or an audio accelerometer, and it can be in contact with the bone region of the user's head.
  • the hearing aid control method specifically includes:
  • Step S302: play the hearing detection audio to obtain, from the user wearing the hearing aid device, a feedback signal based on the hearing detection audio, and determine the user's hearing evaluation result from the feedback signal.
  • Step S304: when it is determined from the hearing evaluation result that the hearing aid device needs to perform a display operation, collect voice information requiring auxiliary processing.
  • Step S306: when the voice information is collected, detect the user's vocal cord vibration signal with the bone conduction vibration sensor.
  • Step S308: analyze the vocal cord vibration signal with the feature comparison model, so as to judge from the detection result whether the user is the sound source of the voice information.
  • Step S310: determine a corresponding rendering method from the judgment result, so that when the voice information is converted into text information, the text information is displayed in the window area of the AR glasses using that rendering method.
  • Determining the corresponding rendering method from the judgment result, so that when the speech information is converted into text information the text is displayed in the window area of the AR glasses using that rendering method, specifically includes the following.
  • When it is determined that the user is not the sound source of the voice information, the display operation is performed based on the first rendering method.
  • the first rendering manner may be displayed based on the rendering manner described in FIG. 8 and/or FIG. 9 .
  • When it is determined that the user is the sound source of the voice information, the display operation is performed based on the second rendering method.
  • Fonts of different sizes and colors can be used, and different display methods can also be used; for example, the first rendering method uses a scrolling display, while the second rendering method uses a full-area static display.
  • the user's feedback information is received, so as to determine the user's pronunciation level based on the feedback information.
  • By equipping the hearing aid device with a bone conduction vibration sensor that detects the user's vocal cord vibration signal, the device can, upon receiving voice information, use that signal to detect whether the user is the sound source of the voice information. When the user is detected not to be the sound source, converting the voice information into text information realizes hearing assistance through the visual channel. This reduces the probability of the hearing aid device converting the user's own speech into text and improves the reliability of the visual hearing aid operation.
  • When the user is detected to be the sound source of the voice information, the hearing aid device can instead assess the user's language expression at that moment, which helps improve the user's articulation and pronunciation and thereby the level of spoken communication.
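The patent relies on a trained feature comparison model for the own-voice decision. As a crude stand-in, one can exploit the fact that the bone-conduction channel only carries strong signal when the wearer speaks: if it correlates strongly with the microphone channel, the wearer is likely the source. All names and the threshold below are illustrative assumptions:

```python
import numpy as np

def is_own_voice(mic: np.ndarray, bone: np.ndarray, threshold: float = 0.5) -> bool:
    """Crude own-voice check: Pearson correlation between the microphone and
    bone-conduction channels (a stand-in for the patent's feature model)."""
    mic = (mic - mic.mean()) / (mic.std() + 1e-12)     # z-score both channels
    bone = (bone - bone.mean()) / (bone.std() + 1e-12)
    corr = float(np.dot(mic, bone) / len(mic))          # Pearson r
    return corr > threshold
```

When the wearer speaks, both channels share the vocal-cord signal and correlate near 1; external speech leaves the bone channel as uncorrelated noise.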
  • A specific implementation of step S202, in which the hearing detection audio is played to obtain, from the user wearing the hearing aid device, a feedback signal based on that audio and the user's hearing evaluation result is determined from the feedback signal, includes the following steps:
  • Step S402: display the hearing test image in the window of the AR glasses; the image contains multiple groups of different long-short combination graphics and the character corresponding to each group.
  • Step S404: in one evaluation, play at least one of multiple groups of long and short tones at the specified sound volume and/or sound tone as the hearing detection audio; each group of tones corresponds to one group of long-short combination graphics.
  • Each number is represented by a combination of long and short ticks; the combinations serve as the hearing detection audio, and one of them is played each time.
  • the sound volume includes bass, middle and treble
  • the sound tone includes low frequency, middle frequency and high frequency
  • multiple evaluations are performed based on different sound volumes and/or different sound tones.
  • Step S406: receive the user's feedback result on the recognition of the long and short tones as the feedback signal.
  • Step S408: determine the user's hearing evaluation result based on the feedback result.
  • Several characters for testing the user's hearing are displayed, and for those characters at least one corresponding long-short tone combination is played through the earphones. The user's identification of the character corresponding to the tone combination is received, and the user's hearing level is evaluated from that identification to obtain the hearing evaluation result. This realizes the hearing evaluation function of the hearing aid device, so that the device can perform targeted hearing aid operations based on the evaluation result and improve the hearing aid effect.
  • The hearing evaluation result is determined using the intensity thresholds and tone thresholds in Table 1: a recognition threshold of ≥30 dB and <60 dB is set as the first volume zone, ≥60 dB and <90 dB as the second volume zone, and >90 dB as the third volume zone. The wearer reports whether the number corresponding to each tick sound was identified accurately, and the wearer's hearing feedback is integrated to determine the wearer's hearing level.
  • Table 1. Volume zone by tick tone and intensity threshold:
      Tick tone threshold           250 Hz              500 Hz              1000 Hz
      Intensity threshold (30 dB)   first volume zone   first volume zone   first volume zone
      Intensity threshold (60 dB)   second volume zone  second volume zone
      Intensity threshold (90 dB)   third volume zone   third volume zone   third volume zone
  • A specific implementation of receiving the user's feedback on long-short tone recognition includes: after playing the tones, displaying in the window the correct option and a wrong option for the character corresponding to the tones; receiving the user's selection between the correct and wrong options; and taking the selection as the feedback result.
  • The feedback result is obtained by receiving the user's selection between the correct and wrong options; the selection can be made through the user's touch operations on different areas of the hearing aid device.
  • In another specific implementation of step S406, receiving the user's feedback on long-short tone recognition includes: collecting the user's spoken identification of the character corresponding to the tones, and taking that speech as the feedback result. That is, the user's spoken identification of the character is received as the feedback result.
  • A specific implementation of determining the user's hearing evaluation result based on the feedback result includes:
  • Step S602: determine the character the user fed back based on the feedback result.
  • Step S604: check whether the feedback character is correct.
  • Step S606: based on the check result, evaluate the volume zone the user can recognize and the user's type of tone loss as the user's hearing evaluation result.
  • the hearing aid control method specifically includes:
  • Step S702: play the hearing detection audio to obtain, from the user wearing the hearing aid device, a feedback signal based on the hearing detection audio, and determine the user's hearing evaluation result from the feedback signal.
  • Step S704: when the volume zone the user can recognize is the first volume zone, perform a sound amplification operation on the collected voice information.
  • A recognizable volume zone in the first volume zone indicates that the user's hearing is slightly impaired.
  • Step S706: when the volume zone the user can recognize is the second volume zone, perform both the amplification operation and the display operation on the collected voice information.
  • A recognizable volume zone in the second volume zone indicates that the user's hearing is moderately impaired.
  • Step S708: when the volume zone the user can recognize is the third volume zone, perform a display operation on the collected voice information.
  • A recognizable volume zone in the third volume zone indicates that the user's hearing is severely impaired.
  • Step S710: when it is determined from the hearing evaluation result that the hearing aid device needs to perform a display operation, collect voice information requiring auxiliary processing.
  • Step S712: when the voice information is collected, convert it into text information and display the text information in the window area of the AR glasses.
  • ≥30 dB and ≤60 dB is the first volume region
  • ≥60 dB and ≤90 dB is the second volume region
  • >90 dB is the third volume region.
  • the volume region the user can recognize determines the corresponding hearing aid mode, so that different hearing-impaired users get an adapted hearing aid solution.
  • the hearing aid solutions include amplification alone, text display alone, and a combination of text display and amplification.
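The mode selection in steps S704 through S708 can be sketched as a threshold lookup over the three volume regions above. The function name, the empty-set case for normal hearing, and the returned labels are illustrative assumptions:

```python
def select_hearing_aid_mode(min_recognized_db: float) -> set:
    """Map the quietest recognizable volume to hearing-aid operations:
    30-60 dB -> mild loss:     amplify only;
    60-90 dB -> moderate loss: amplify and display text;
    >90 dB   -> severe loss:   display text only."""
    if min_recognized_db < 30:
        return set()  # assumed: within normal hearing, no assistance needed
    if min_recognized_db <= 60:
        return {"amplify"}
    if min_recognized_db <= 90:
        return {"amplify", "display"}
    return {"display"}
```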
  • performing the voice amplification operation on the collected voice information in steps S704 and S706 specifically includes: detecting the intensity parameter and the frequency parameter of the voice information, automatically adjusting their gain with a dynamic amplifier, and bringing the intensity and frequency parameters into a comfortable listening range.
  • the intensity and frequency parameters are adjusted on the basis of the hearing assessment result above, so as to improve the hearing aid effect of the amplification operation.
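A minimal sketch of the gain adjustment toward a comfortable listening range; the range bounds are illustrative assumptions, since the patent does not give numbers:

```python
def gain_to_comfort(intensity_db: float,
                    comfort_low: float = 50.0,
                    comfort_high: float = 80.0) -> float:
    """Return the gain in dB that moves the measured intensity into the
    comfortable listening range [comfort_low, comfort_high]; a dynamic
    amplifier would apply such a gain per frequency band."""
    if intensity_db < comfort_low:
        return comfort_low - intensity_db
    if intensity_db > comfort_high:
        return comfort_high - intensity_db
    return 0.0
```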
  • performing the voice amplification operation on the collected voice information further includes: when tone loss is detected for the user, performing a missing-frequency compensation operation, according to the user's type of tone loss, on the voice information being amplified.
  • the voice information is frequency-compensated according to the type of tone loss, so as to improve the comfort of the amplified voice information the user receives.
  • for example, if the user has low-frequency (bass) loss, the speech they hear will sound sharper; over time, this may make the user's hearing impairment more serious.
  • compensating for the lost frequencies not only lets the user hear voice information with normal tone, but also helps prevent further deterioration of the user's hearing.
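Missing-frequency compensation can be illustrated as boosting the lost band in the frequency domain. This NumPy sketch assumes the lost band and boost amount are already known from the assessment; it is not the patent's implementation:

```python
import numpy as np

def compensate_band(signal, sample_rate, lost_band, boost_db):
    """Boost the frequency band lost_band = (low_hz, high_hz) of `signal`
    by `boost_db` decibels via an FFT, compensating for tone loss."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= lost_band[0]) & (freqs <= lost_band[1])
    spectrum[mask] *= 10 ** (boost_db / 20.0)
    return np.fft.irfft(spectrum, n=len(signal))
```

Bands outside the lost range pass through unchanged, so only the frequencies the user cannot hear well are lifted.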
  • when no bone conduction vibration sensor is provided, or the sensor does not take part in the hearing aid control process, the sound source is by default assumed not to be the user himself; in step S206, the voice information is converted into text information as follows:
  • Step S802 determining the sound source direction of the voice information.
  • Step S804 performing face recognition on the window area image of the AR glasses based on the direction of the sound source, so as to identify the utterance object of the voice information.
  • Step S806 converting the speech information into text information, and displaying the text information in the window area corresponding to the sounding object on the hearing aid device.
  • the sound source localization and face recognition functions of the AR glasses identify the utterance object that is currently speaking.
  • the text information is displayed in the window area corresponding to that object, and the user can further communicate with the speaker on the basis of the text, enhancing the user's sense of interaction.
  • another specific implementation of converting voice information into text information and displaying the text information in the window area of AR glasses includes:
  • Step S902 determining the sound source direction of the voice information.
  • step S904 face recognition is performed on the window area image of the AR glasses based on the direction of the sound source, so as to identify the utterance object of the voice information.
  • Step S906 converting the voice information into text information.
  • Step S908 detecting spectrum parameters of the speech information.
  • Step S910 distinguishing the gender of the sound source of the speech information based on the spectral parameters.
  • Step S912 determining a corresponding rendering method based on the gender of the sound source.
  • Step S914 determine the display style of the text information based on the corresponding rendering method, and display it in the window area.
  • the gender of the sound source is detected from the spectral parameters of the voice information, a rendering method is matched to the gender, and the text information is rendered with that method and displayed on the near-eye display device.
  • on one hand this personalizes the display of the text information; on the other hand it optimizes how the text is displayed on the AR glasses, which helps improve the user's viewing experience.
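Gender can be distinguished from the spectrum by, for example, thresholding the estimated fundamental frequency. The 165 Hz boundary and the style values below are illustrative assumptions standing in for the patent's unspecified classifier:

```python
def rendering_style(f0_hz: float) -> dict:
    """Pick a text rendering style from the voice's estimated fundamental
    frequency. Typical adult male F0 is roughly 85-180 Hz and adult female
    F0 roughly 165-255 Hz, so 165 Hz serves as a simple decision boundary."""
    if f0_hz < 165.0:
        return {"gender": "male", "font": "sans-bold", "color": "steel-blue"}
    return {"gender": "female", "font": "sans-regular", "color": "coral"}
```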
  • as a further supplement to converting the voice information into text information and displaying it in the window area of the AR glasses in step S206, the method further includes: detecting the distance to the utterance object based on the object's visual features, and adjusting the size of the text box of the text information in step with the detected distance.
  • the distance between the near-eye display device and the information source is determined with a depth camera or some distance-mapping algorithm, and the size of the text box is set according to that distance; for example, when the information source is far away it occupies a small area of the window, so the text box can be enlarged, and when the source is close it occupies a larger area, so the text box can be shrunk appropriately to avoid blocking the source, which helps improve the interaction between the user and the information source when reading the text information.
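The distance-to-text-box mapping can be sketched as a clamped linear interpolation: farther speakers occupy less of the window, so the caption box grows. The distance bounds and scale limits are illustrative assumptions:

```python
def text_box_scale(distance_m: float,
                   near: float = 0.5, far: float = 5.0,
                   min_scale: float = 0.6, max_scale: float = 1.4) -> float:
    """Return a scale factor for the caption box: min_scale at or inside
    the near distance (to avoid covering the speaker), max_scale at or
    beyond the far distance."""
    d = min(max(distance_m, near), far)
    ratio = (d - near) / (far - near)
    return min_scale + ratio * (max_scale - min_scale)
```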
  • as a further supplement to step S206 (when the voice information is collected, converting it into text information and displaying the text information in the window area of the AR glasses), the method further includes: when the collected voice information is detected to be voice information to be translated, calling the translation model of the target language to translate it and obtain the translated text; and displaying the translated text in the window area of the AR glasses as the text information.
  • the translation model of the target language is called to translate the received information to be translated and obtain the translated text, which extends the functionality of the hearing aid device.
  • a hearing aid control device 1000 according to this embodiment of the present disclosure is described below with reference to FIG. 10 .
  • the hearing aid control device 1000 shown in FIG. 10 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure.
  • the hearing aid control device 1000 is expressed in the form of a hardware module.
  • the components of the hearing aid control device 1000 may include, but are not limited to: a detection module 1002, configured to play hearing test audio, so as to obtain a feedback signal based on the hearing test audio from a user wearing the hearing aid device, and determine the user's hearing assessment result based on the feedback signal;
  • a collection module 1004, configured to collect voice information that requires assisted processing when it is determined, based on the hearing assessment result, that the hearing aid device needs to perform a display operation;
  • a display module 1006, configured to convert the voice information into text information when the voice information is collected, and display the text information in the window area of the AR glasses.
  • the hearing aid device further includes a bone conduction vibration sensor placed on the AR glasses.
  • the bone conduction vibration sensor can be in contact with the user's vocal cord area.
  • the detection module 1002 is also used to: detect the user's vocal cord vibration signal with the bone conduction vibration sensor, and analyze the vocal cord vibration signal with a feature comparison model to determine whether the user is the sound source of the voice information; the hearing aid control device 1000 also includes: a conversion module 1008, which converts the voice information into text information when it is determined that the user is not the sound source of the voice information.
  • the detection module 1002 is also used to: display the hearing test image in the window of the AR glasses, the image including multiple groups of different long/short combination graphics and the character corresponding to each group; the hearing aid control device 1000 also includes: a playback module 1010, which plays at least one of multiple groups of long and short tones at a specified sound volume and/or sound pitch in one assessment, as the hearing test audio, each group of tones corresponding to one group of long/short combination graphics; and a receiving module 1012, which receives the user's feedback on long- and short-tone recognition as the feedback signal; the detection module 1002 is also used to determine the user's hearing assessment result based on the feedback result, where the sound volume includes low, medium, and high volume, the sound pitch includes low, medium, and high frequency, and multiple assessments are performed at different sound volumes and/or different sound pitches.
  • the receiving module 1012 is also used to: after playing the long and short tones, display in the window the correct option and a wrong option for the corresponding character; receive the user's selection between the options; and determine the selection result as the feedback result.
  • the receiving module 1012 is further configured to: collect the user's spoken identification of the character corresponding to the long and short tones, and determine that speech as the feedback result.
  • the detection module 1002 is also used to: determine the feedback character fed back by the user based on the feedback result; detect whether the feedback character is correct; evaluate the volume region that the user can recognize and the user's tone loss type based on the detection result, as the user's hearing evaluation result.
  • the device also includes: a voice information processing module 1014, configured to perform an amplification operation on the collected voice information when the volume region is the first volume region; to perform both the amplification operation and the display operation on the collected voice information when the volume region is the second volume region; and to perform the display operation on the collected voice information when the volume region is the third volume region.
  • the voice information processing module 1014 is also used to: detect the intensity parameter and frequency parameter of the voice information, automatically adjust their gain with a dynamic amplifier, and bring the intensity and frequency parameters into a comfortable listening range.
  • the voice information processing module 1014 is also used to: when tone loss is detected for the user, perform a missing-frequency compensation operation, according to the user's type of tone loss, on the voice information being amplified.
  • the display module 1006 is also used to: determine the sound source direction of the voice information; perform face recognition on the window-area image of the AR glasses based on the sound source direction, so as to identify the utterance object of the voice information; and convert the voice information into text information and display it in the window area of the hearing aid device corresponding to the utterance object.
  • the display module 1006 is also used to: detect the spectral parameters of the voice information; distinguish the gender of the sound source from the spectral parameters; determine a corresponding rendering method based on the gender; and determine the display style of the text information from that rendering method and display it in the window area.
  • the display module 1006 is further configured to: detect the distance to the utterance object based on the object's visual features, and synchronously adjust the size of the text box of the text information according to the detected distance.
  • the display module 1006 is further configured to: when detecting that the collected voice information is voice information to be translated, call the translation model of the target language to translate it and obtain the translated text; and display the translated text in the window area of the AR glasses as the text information.
  • a hearing aid device 1100 according to this embodiment of the present disclosure is described below with reference to FIG. 11 .
  • the hearing aid device 1100 shown in FIG. 11 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure.
  • the hearing aid device 1100 takes the form of a general-purpose computing device.
  • the components of the hearing aid device 1100 may include, but are not limited to: the at least one processing unit 1110 mentioned above, the at least one storage unit 1120 mentioned above, and the bus 1130 connecting the different system components (including the storage unit 1120 and the processing unit 1110).
  • the storage unit stores program codes, and the program codes can be executed by the processing unit 1110, so that the processing unit 1110 executes the steps according to various exemplary embodiments of the present disclosure described in the “Exemplary Methods” section of this specification.
  • the processing unit 1110 may execute steps S202, S204 and S206 as shown in FIG. 2, and other steps defined in the hearing aid control method of the present disclosure.
  • the storage unit 1120 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 11201 and/or a cache storage unit 11202 , and may further include a read-only storage unit (ROM) 11203 .
  • the storage unit 1120 may also include a program/utility 11204 having a set (at least one) of program modules 11205, such program modules 11205 including but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a networked environment.
  • the bus 1130 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
  • the hearing aid device 1100 may also communicate with one or more external devices 1160 (e.g., keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that enable the user to interact with the hearing aid device 1100, and/or with any device (e.g., router, modem, etc.) that enables the hearing aid device 1100 to communicate with one or more other computing devices. Such communication may occur through the input/output (I/O) interface 1150.
  • the hearing aid device 1100 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through the network adapter 1150 .
  • the network adapter 1150 communicates with the other modules of the hearing aid device 1100 via the bus 1130.
  • other hardware and/or software modules may be used in conjunction with the hearing aid device 1100, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, etc.
  • the example implementations described here can be implemented by software, or by combining software with necessary hardware. Therefore, the technical solutions according to the embodiments of the present disclosure can be embodied in the form of software products; the software products can be stored in a non-volatile storage medium (which can be a CD-ROM, USB flash drive, removable hard disk, etc.) or on a network, and include several instructions to make a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiments of the present disclosure.
  • a computer-readable storage medium on which a program product capable of implementing the above-mentioned method in this specification is stored.
  • various aspects of the present disclosure can also be implemented in the form of a program product, which includes program code.
  • when the program product runs on the terminal device, the program code is used to make the terminal device execute the steps according to various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section above.
  • a program product for implementing the above method according to an embodiment of the present disclosure may adopt a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
  • the program product of the present disclosure is not limited thereto.
  • a readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus or device.
  • a computer readable signal medium may include a data signal carrying readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a readable signal medium may also be any readable medium other than a readable storage medium that can transmit, propagate, or transport a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as "C" or similar.
  • the program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
  • the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, over the Internet using an Internet service provider).
  • although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this neither requires nor implies that the steps must be performed in that particular order, or that all of the illustrated steps must be performed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps, etc.
  • the technical solutions according to the embodiments of the present disclosure can be embodied in the form of software products; the software products can be stored in a non-volatile storage medium (which can be a CD-ROM, USB flash drive, removable hard disk, etc.) or on a network, and include several instructions to make a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) execute the method according to the embodiments of the present disclosure.

Abstract

The present disclosure provides a hearing aid control method and apparatus, a hearing aid device, and a storage medium, relating to the technical field of wearable devices. The hearing aid control method includes: playing hearing test audio to obtain a feedback signal, based on the hearing test audio, from a user wearing the hearing aid device, and determining the user's hearing assessment result from the feedback signal; when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation, collecting voice information that requires assisted processing; and, once the voice information is collected, converting the voice information into text information and displaying the text information in the window area of the AR glasses. The technical solution of the present disclosure converts auditory assistance into visual assistance, thereby improving the effectiveness of the hearing aid operation. (FIG. 2)

Description

Hearing aid control method and apparatus, hearing aid device, and storage medium
This disclosure claims priority to Chinese patent application No. 202111252280.1, entitled "Hearing aid control method and apparatus, hearing aid device, and storage medium" and filed on October 25, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of wearable devices, and in particular to a hearing aid control method and apparatus, a hearing aid device, and a computer-readable storage medium.
Background
In the related art, although current hearing aid devices can draw on wearable-device functions to pick up sound from the area the user is gazing at, the core of such solutions is still to amplify the collected sound and play it back to the hearing-impaired user. Because this is ineffective for users with severe deafness or total hearing loss, the solution has certain usage limitations.
It should be noted that the information disclosed in the Background section above is only intended to enhance understanding of the background of the present disclosure, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary
An object of the present disclosure is to provide a hearing aid control method and apparatus, a hearing aid device, and a computer-readable storage medium, which can, at least to some extent, mitigate the usage limitations of hearing aid solutions in the related art.
Other features and advantages of the present disclosure will become apparent from the following detailed description, or may be learned in part through practice of the present disclosure.
According to one aspect of the present disclosure, a hearing aid control method is provided, applied to a hearing aid device. The hearing aid device includes AR glasses, as well as a sound collection module and an in-ear playback module arranged on the AR glasses; the sound collection module is used to collect speech, and the in-ear playback module is used to play audio. The hearing aid control method includes: playing hearing test audio to obtain a feedback signal, based on the hearing test audio, from a user wearing the hearing aid device, and determining the user's hearing assessment result from the feedback signal; when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation, collecting voice information that requires assisted processing; and, once the voice information is collected, converting the voice information into text information and displaying the text information in the window area of the AR glasses.
In one embodiment, the hearing aid device further includes a bone conduction vibration sensor arranged on the AR glasses, which can be in contact with the user's skull region. Collecting the voice information, converting it into text information, and displaying the text information in the window area of the AR glasses specifically includes: when the voice information is collected, detecting the user's vocal cord vibration signal with the bone conduction vibration sensor; analyzing the vocal cord vibration signal with a feature comparison model, and judging from the detection result whether the user is the sound source of the voice information; and determining a corresponding rendering method based on the judgment result, so that when the voice information is converted into the text information, the text information is displayed in the window area of the AR glasses with that rendering method, where different rendering methods are configured based on at least one of color, font, display scale, and display speed.
In one embodiment, determining the corresponding rendering method based on the judgment result specifically includes: when it is determined that the user is not the sound source of the voice information, performing the display operation with a first rendering method; and when it is determined that the user is the sound source, performing the display operation with a second rendering method, where, while displaying the text information with the second rendering method, the user's feedback information is received so that the user's pronunciation level can be judged from the feedback information.
In one embodiment, converting the voice information into text information and displaying the text information in the window area of the AR glasses specifically includes: determining the sound source direction of the voice information; performing face recognition on the window-area image of the AR glasses based on the sound source direction, so as to identify the utterance object of the voice information; and converting the voice information into text information and displaying it in the window area of the hearing aid device corresponding to the utterance object.
In one embodiment, converting the voice information into text information and displaying it in the window area of the AR glasses further includes: detecting the spectral parameters of the voice information; distinguishing the gender of the sound source from the spectral parameters; determining a corresponding rendering method based on the gender; and determining the display style of the text information from that rendering method and displaying it in the window area.
In one embodiment, converting the voice information into text information and displaying it in the window area of the AR glasses further includes: detecting the distance to the utterance object based on the visual features of the utterance object, and adjusting the size of the text box of the text information in step with the monitored distance.
In one embodiment, collecting the voice information, converting it into text information, and displaying it in the window area of the AR glasses further includes: when the collected voice information is detected to be voice information to be translated, calling a translation model for the target language to translate it and obtain translated text; and displaying the translated text in the window area of the AR glasses as the text information.
In one embodiment, playing the hearing test audio to obtain a feedback signal from the user wearing the hearing aid device and determining the user's hearing assessment result from the feedback signal specifically includes: displaying a hearing test image in the window of the AR glasses, the image containing multiple groups of different long/short combination graphics and the character corresponding to each group; in one assessment, playing at least one of multiple groups of long and short tones at a specified sound volume and/or sound pitch as the hearing test audio, each group of long and short tones corresponding to one group of long/short combination graphics; receiving the user's feedback on recognizing the long and short tones as the feedback signal; and determining the user's hearing assessment result from the feedback, where the sound volume includes low, medium, and high volume, the sound pitch includes low, medium, and high frequency, and multiple assessments are performed at different volumes and/or pitches.
In one embodiment, receiving the user's feedback on recognizing the long and short tones specifically includes: after playing the tones, displaying in the window the correct option and a wrong option for the corresponding character; and receiving the user's selection between the correct and wrong options and determining the selection result as the feedback result.
In one embodiment, receiving the user's feedback on recognizing the long and short tones specifically includes: collecting the user's spoken identification of the corresponding character and determining that speech as the feedback result.
In one embodiment, determining the user's hearing assessment result from the feedback specifically includes: determining the character fed back by the user; checking whether that character is correct; and, based on the check results, evaluating the volume region the user can recognize and the user's type of tone loss as the user's hearing assessment result.
In one embodiment, before collecting the voice information requiring assisted processing when the hearing assessment result indicates that a display operation is needed, the method further includes: when the recognizable volume region is the first volume region, performing an amplification operation on the collected voice information; when it is the second volume region, performing both the amplification operation and the display operation; and when it is the third volume region, performing the display operation.
In one embodiment, performing the amplification operation on the collected voice information specifically includes: detecting the intensity parameter and frequency parameter of the voice information, automatically adjusting their gain with a dynamic amplifier, and bringing the intensity and frequency parameters into a comfortable listening range.
In one embodiment, performing the amplification operation further includes: when tone loss is detected for the user, performing a missing-frequency compensation operation, according to the user's type of tone loss, on the voice information being amplified.
According to another aspect of the present disclosure, a hearing aid control apparatus is provided, including: a detection module for playing hearing test audio to obtain a feedback signal, based on the hearing test audio, from a user wearing the hearing aid device, and determining the user's hearing assessment result from the feedback signal; a collection module for collecting voice information requiring assisted processing when the hearing assessment result indicates that the hearing aid device needs to perform a display operation; and a display module for converting the collected voice information into text information and displaying the text information in the window area of the AR glasses.
According to yet another aspect of the present disclosure, a hearing aid device is provided, including: AR glasses; an in-ear playback module arranged on the AR glasses for playing hearing test audio; a processor for obtaining a feedback signal, based on the hearing test audio, from a user wearing the hearing aid device and determining the user's hearing assessment result from the feedback signal; and a sound collection module arranged on the AR glasses for collecting voice information requiring assisted processing when the hearing assessment result indicates that a display operation is needed. The AR glasses are further configured to convert the collected voice information into text information and display the text information in their window area.
In one embodiment, the device further includes a bone conduction vibration sensor arranged on the AR glasses, which can be in contact with the user's vocal cord region and is used to detect the user's vocal cord vibration signal. The processor is further configured to analyze the vocal cord vibration signal with a feature comparison model to determine whether the user is the sound source of the voice information, and to convert the voice information into text information when it is determined that the user is not the sound source.
According to yet another aspect of the present disclosure, a hearing aid device is provided, including: a processor; and a memory for storing instructions executable by the processor, where the processor is configured to execute the hearing aid control method described above by executing the executable instructions.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements any of the hearing aid control methods described above.
In the hearing aid control solution provided by the embodiments of the present disclosure, the hearing level of the wearer is detected in advance, so that when it is determined from the hearing level that a hearing aid operation is needed, the collected voice information is converted into text information and displayed in the window area of the AR glasses of the hearing aid device. On one hand, detecting the wearer's hearing level and deciding from it whether to perform a display-based hearing aid operation ensures the reliability of the operation; on the other hand, converting auditory assistance into visual assistance improves the effectiveness of the hearing aid operation.
It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings here are incorporated into and form part of this specification, illustrate embodiments consistent with the present disclosure, and together with the specification serve to explain its principles. Obviously, the drawings described below show only some embodiments of the present disclosure, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a hearing aid device in an embodiment of the present disclosure;
FIG. 2 is a flowchart of a hearing aid control method in an embodiment of the present disclosure;
FIG. 3 is a flowchart of another hearing aid control method in an embodiment of the present disclosure;
FIG. 4 is a flowchart of yet another hearing aid control method in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of hearing test characters in an embodiment of the present disclosure;
FIG. 6 is a flowchart of yet another hearing aid control method in an embodiment of the present disclosure;
FIG. 7 is a flowchart of yet another hearing aid control method in an embodiment of the present disclosure;
FIG. 8 is a flowchart of yet another hearing aid control method in an embodiment of the present disclosure;
FIG. 9 is a flowchart of yet another hearing aid control method in an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a hearing aid control apparatus in an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a hearing aid device in an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth here; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In addition, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. Identical reference numerals in the drawings denote identical or similar parts, so their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses.
As shown in FIG. 1, according to a further aspect of the present disclosure, a hearing aid device is provided, including:
AR glasses 102;
an in-ear playback module 104 arranged on the AR glasses 102, used to play hearing test audio;
a processor (not shown), used to obtain a feedback signal, based on the hearing test audio, from a user wearing the hearing aid device and to determine the user's hearing assessment result from the feedback signal;
a sound collection module 106 arranged on the AR glasses 102, used to collect voice information requiring assisted processing when the hearing assessment result indicates that the hearing aid device needs to perform a display operation.
The AR glasses 102 are further configured to: when the voice information is collected, convert it into text information and display the text information in the window area of the AR glasses 102.
A bone conduction vibration sensor 108 is arranged on the AR glasses 102; it can be in contact with the user's vocal cord region and is used to detect the user's vocal cord vibration signal.
The processor is further configured to analyze the vocal cord vibration signal with a feature comparison model to determine whether the user is the sound source of the voice information.
The processor is further configured to convert the voice information into text information when it is determined that the user is not the sound source.
An image collection module 110 is arranged on the AR glasses 102 and used to collect window-area images.
The processor is further configured to perform face recognition on the window-area image of the AR glasses 102 based on the sound source direction, so as to identify the utterance object of the voice information.
The processor is further configured to convert the voice information into text information and display it in the window area of the hearing aid device corresponding to the utterance object.
Based on FIG. 1, the steps of the hearing aid control method in this example embodiment are described in more detail below with reference to the other drawings and embodiments.
FIG. 2 shows a flowchart of a hearing aid control method in an embodiment of the present disclosure.
As shown in FIG. 2, the hearing aid device executes the hearing aid control method. The hearing aid device includes AR glasses, as well as one or more sound collection modules and an in-ear playback module arranged on the AR glasses; the sound collection modules collect speech, and the in-ear playback module plays audio. The method includes the following steps:
Step S202: play hearing test audio to obtain a feedback signal, based on the hearing test audio, from the user wearing the hearing aid device, and determine the user's hearing assessment result from the feedback signal.
After the hearing test audio is played, the wearer's feedback signal to the test audio is received; the feedback signal may be a voice signal, a blink signal, or a touch signal on a designated area of the hearing aid device.
Specifically, the hearing test audio may be played through the in-ear playback module 104.
Step S204: when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation, collect voice information requiring assisted processing.
Typically, the wearer of the hearing aid device has hearing loss, which includes but is not limited to intensity loss and/or pitch loss.
In the present disclosure, intensity is divided into a high-intensity region, a medium-intensity region, and a low-intensity region, and pitch is divided into a high-frequency region, a medium-frequency region, and a low-frequency region.
Specifically, the intensity of a sound is its loudness; subjectively, the magnitude of a sound (usually called volume) is determined by its amplitude and the distance from the sound source. The larger the amplitude and the smaller the distance between the source and the listener, the greater the loudness.
Pitch (high and low tones) is determined by frequency: the higher the frequency, the higher the pitch (frequency is measured in hertz, Hz; the range of human hearing is 20-20000 Hz). Sound below 20 Hz is called infrasound, and sound above 20000 Hz is called ultrasound.
Step S206: when the voice information is collected, convert it into text information and display the text information in the window area of the AR glasses.
In this embodiment, the hearing level of the wearer is detected in advance, so that when a hearing aid operation is determined to be necessary based on that level, the collected voice information is converted into text information and displayed in the window area of the AR glasses. On one hand, detecting the wearer's hearing level and deciding from it whether to perform a display-based hearing aid operation ensures the reliability of the operation; on the other hand, converting auditory assistance into visual assistance improves its effectiveness.
In one embodiment, the hearing aid device further includes a bone conduction vibration sensor arranged on the AR glasses; the sensor may specifically be a bone conduction microphone or an audio accelerometer, and can be in contact with the user's skull region.
As shown in FIG. 3, a hearing aid control method according to another embodiment of the present disclosure specifically includes:
Step S302: play hearing test audio to obtain a feedback signal, based on the hearing test audio, from the user wearing the hearing aid device, and determine the user's hearing assessment result from the feedback signal.
Step S304: when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation, collect voice information requiring assisted processing.
Step S306: when the voice information is collected, detect the user's vocal cord vibration signal with the bone conduction vibration sensor.
Step S308: analyze the vocal cord vibration signal with a feature comparison model, and judge from the detection result whether the user is the sound source of the voice information.
Step S310: determine a corresponding rendering method based on the judgment result, so that when the voice information is converted into text information, the text information is displayed in the window area of the AR glasses with that rendering method.
Different rendering methods are configured based on at least one of color, font, display scale, and display speed.
Specifically, in one embodiment, determining the corresponding rendering method based on the judgment result, so that the text information is displayed in the window area of the AR glasses with that rendering method when the voice information is converted, specifically includes:
when it is determined that the user is not the sound source of the voice information, performing the display operation with a first rendering method.
Specifically, the first rendering method may display the text as described in FIG. 8 and/or FIG. 9.
When it is determined that the user is the sound source of the voice information, the display operation is performed with a second rendering method.
Specifically, to distinguish the first and second rendering methods, fonts of different sizes and colors may be used, and different display modes may also be adopted; for example, the first rendering method may scroll the text, while the second rendering method may display it statically across the whole area.
In addition, when displaying the text information with the second rendering method, the user's feedback information is received so that the user's pronunciation level can be judged from it.
In this embodiment, a bone conduction vibration sensor for detecting the user's vocal cord vibration signal is provided, so that when the hearing aid device receives voice information, it checks against the vibration signal whether the user is the source of that voice. When the user is not the source, the voice information is converted into text to provide visual assistance; this reduces the probability that the device transcribes the user's own words, improving the reliability of the visual hearing aid operation.
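The "feature comparison model" check of steps S306 and S308 can be approximated by correlating the bone-conduction signal with the air-microphone signal: when the wearer speaks, the two signals move together. The normalized-correlation approach and the threshold below are assumptions standing in for the patent's unspecified model:

```python
import numpy as np

def is_self_voice(bone_signal, air_signal, threshold=0.5):
    """Return True when the bone-conduction vibration signal correlates
    strongly with the air-microphone signal, i.e. the wearer is speaking."""
    b = np.asarray(bone_signal, dtype=float)
    a = np.asarray(air_signal, dtype=float)
    b = b - b.mean()
    a = a - a.mean()
    denom = np.linalg.norm(b) * np.linalg.norm(a)
    if denom == 0.0:
        return False  # silent channel: cannot be the wearer's own voice
    return float(np.dot(b, a) / denom) > threshold
```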
When it is detected that the user is the sound source of the voice information, the hearing aid device is used to assess the user's verbal expression, which helps improve the user's articulation and pronunciation and thereby raises their level of spoken communication.
As shown in FIG. 4, in one embodiment, a specific implementation of step S202 (playing hearing test audio to obtain the user's feedback signal and determining the hearing assessment result from the feedback signal) includes:
Step S402: display a hearing test image in the window of the AR glasses, the image including multiple groups of different long/short combination graphics and the character corresponding to each group.
Step S404: in one assessment, play at least one of multiple groups of long and short tones at a specified sound volume and/or sound pitch as the hearing test audio, each group of tones corresponding to one group of long/short combination graphics.
As shown in FIG. 5, taking digits as an example, each of the ten digits 0 to 9 is represented by one combination of long and short ticks; the long/short tick combinations are used as the hearing test audio, and one of them is played at a time.
The sound volume includes low, medium, and high volume, and the sound pitch includes low, medium, and high frequency; multiple assessments are performed at different volumes and/or pitches.
Step S406: receive the user's feedback on recognizing the long and short tones as the feedback signal.
Step S408: determine the user's hearing assessment result from the feedback.
In this embodiment, the window display of the AR glasses is used to show multiple characters for testing the user's hearing, and the earphone then plays the long/short tone audio for at least one of those characters; the user's identification of the corresponding character is received, the user's hearing level is evaluated from the identification results, and a hearing assessment result is obtained. This realizes the hearing assessment function of the hearing aid device, allowing it to perform targeted hearing aid operations based on the assessment and improve the hearing aid effect.
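The digit encoding of FIG. 5 can be illustrated with a Morse-like long/short scheme; the concrete patterns below are invented for illustration, since the patent does not specify them:

```python
# Hypothetical long/short tick patterns for the digits 0-9
# ("L" = long tick, "S" = short tick); each pattern is unique.
TICK_PATTERNS = {
    0: "LLL", 1: "SLL", 2: "SSL", 3: "SSS", 4: "LSS",
    5: "LLS", 6: "SLS", 7: "LSL", 8: "SSLL", 9: "LLSS",
}

def pattern_for_digit(digit: int) -> str:
    """Pattern to play as hearing-test audio for one digit."""
    return TICK_PATTERNS[digit]

def decode_pattern(pattern: str) -> int:
    """Inverse lookup used when checking the wearer's feedback character."""
    inverse = {p: d for d, p in TICK_PATTERNS.items()}
    return inverse[pattern]
```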
Specifically, the hearing assessment result uses the intensity thresholds and pitch thresholds in Table 1: ≥30 dB and ≤60 dB is set as the first volume region, ≥60 dB and ≤90 dB as the second volume region, and >90 dB as the third volume region. Whether the wearer accurately reports the digits corresponding to the ticks they hear is combined with their overall listening feedback to determine the wearer's hearing level.
Table 1
Tick pitch threshold | 250 Hz | 500 Hz | 1000 Hz
Intensity threshold (30 dB) | first volume region | first volume region | first volume region
Intensity threshold (60 dB) | second volume region | second volume region | second volume region
Intensity threshold (90 dB) | third volume region | third volume region | third volume region
In one embodiment, a specific implementation of receiving the user's feedback on recognizing the long and short tones in step S406 includes: after playing the tones, displaying in the window the correct option and a wrong option for the corresponding character; receiving the user's selection between the two options; and determining the selection result as the feedback result.
In this embodiment, the feedback result is obtained from the user's selection between the correct and wrong options; the selection can be made through touch operations on different areas of the hearing aid device.
In one embodiment, another specific implementation of receiving the user's feedback in step S406 includes: collecting the user's spoken identification of the character corresponding to the tones and determining that speech as the feedback result.
In this embodiment, the user's spoken identification of the character is received as the feedback result.
As shown in FIG. 6, in one embodiment, a specific implementation of determining the user's hearing assessment result from the feedback in step S408 includes:
Step S602: determine the character fed back by the user based on the feedback result.
Step S604: check whether the feedback character is correct.
Step S606: based on the check results, evaluate the volume region the user can recognize and the user's type of tone loss as the user's hearing assessment result.
As shown in FIG. 7, a hearing aid control method according to a further embodiment of the present disclosure specifically includes:
Step S702: play hearing test audio to obtain a feedback signal, based on the hearing test audio, from the user wearing the hearing aid device, and determine the user's hearing assessment result from the feedback signal.
Step S704: when the volume region the user can recognize is the first volume region, perform an amplification operation on the collected voice information.
A recognizable volume region equal to the first volume region indicates that the user's hearing is mildly impaired.
Step S706: when the volume region the user can recognize is the second volume region, perform both the amplification operation and the display operation on the collected voice information.
A recognizable volume region equal to the second volume region indicates that the user's hearing is moderately impaired.
Step S708: when the volume region the user can recognize is the third volume region, perform the display operation on the collected voice information.
A recognizable volume region equal to the third volume region indicates that the user's hearing is severely impaired.
Step S710: when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation, collect voice information requiring assisted processing.
Step S712: when the voice information is collected, convert it into text information and display the text information in the window area of the AR glasses.
Specifically, ≥30 dB and ≤60 dB is the first volume region, ≥60 dB and ≤90 dB is the second volume region, and >90 dB is the third volume region.
In this embodiment, the volume region the user can recognize determines the corresponding hearing aid mode, so that different hearing-impaired users get an adapted hearing aid solution; the solutions include amplification alone, text display alone, and a combination of text display and amplification.
In one embodiment, the amplification operation on the collected voice information in steps S704 and S706 specifically includes: detecting the intensity and frequency parameters of the voice information, and using a dynamic amplifier to automatically adjust the gain so that the intensity and frequency parameters fall within a comfortable listening range.
In this embodiment, the amplification operation adjusts the intensity and frequency parameters based on the hearing assessment result described above, improving the assistance effect of the amplification.
In one embodiment, the amplification operation on the collected voice information further includes: when tone loss is detected for the user, compensating the amplified voice information for the missing frequencies according to the user's tone-loss type.
In this embodiment, when tone loss is further detected, frequency compensation based on the tone-loss type makes the amplified voice information more comfortable for the user.
For example, if the user has low-frequency loss, speech will sound shrill, and over time this may worsen the user's hearing impairment. Compensating the lost frequencies not only lets the user hear speech at a normal pitch but also helps prevent the impairment from deteriorating further.
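A minimal sketch of the gain adjustment and the missing-frequency compensation follows. The comfort window bounds and the fixed per-band boost are illustrative assumptions, not values from the patent:

```python
def adjust_gain(level_db, comfort_low=50.0, comfort_high=70.0):
    """Gain (dB) that moves the measured speech level into an assumed
    comfortable listening window [comfort_low, comfort_high]."""
    if level_db < comfort_low:
        return comfort_low - level_db
    if level_db > comfort_high:
        return comfort_high - level_db
    return 0.0

def compensate(band_gains_db, lost_bands, boost_db=12.0):
    """Boost the frequency bands flagged as the user's tone loss.
    band_gains_db: {band_hz: gain_db}; lost_bands: iterable of band_hz."""
    out = dict(band_gains_db)
    for band in lost_bands:
        out[band] = out.get(band, 0.0) + boost_db
    return out
```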
As shown in FIG. 8, in one embodiment — when no bone-conduction vibration sensor is provided, or the sensor does not take part in the hearing-assistance control, so that the source is assumed by default not to be the user — a specific implementation of converting the voice information into text information and displaying it in the viewport region of the AR glasses in step S206 includes:
Step S802: determine the direction of the sound source of the voice information.
Step S804: perform face recognition on the viewport-range image of the AR glasses based on the sound-source direction, to identify the speaker of the voice information.
Step S806: convert the voice information into text information and display the text in the viewport region of the hearing aid device corresponding to the speaker.
In this embodiment, in a conversational setting, the sound-source localization and face recognition functions of the AR glasses identify the person currently speaking. Based on this, the text can be shown in the viewport region corresponding to the speaker, and the user can carry the conversation forward from the displayed text, improving the user's sense of interaction.
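Matching the localized sound direction to a detected face, and anchoring the caption near that face, can be sketched as below. The linear azimuth-to-pixel mapping, the field of view, and the box format are assumptions for illustration:

```python
def pick_speaker_face(source_azimuth_deg, faces, fov_deg=40.0, frame_w=640):
    """Choose the detected face whose horizontal position best matches
    the sound-source azimuth (0 deg = straight ahead). faces: list of
    (x, y, w, h) pixel boxes. A pinhole-style linear mapping from
    azimuth to pixel column is assumed."""
    target_x = frame_w / 2 + (source_azimuth_deg / (fov_deg / 2)) * (frame_w / 2)
    def center_x(box):
        x, _, w, _ = box
        return x + w / 2
    return min(faces, key=lambda b: abs(center_x(b) - target_x))

def caption_anchor(face_box, margin_px=10):
    """Place the caption just below the speaker's face box."""
    x, y, w, h = face_box
    return (x, y + h + margin_px)
```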
As shown in FIG. 9, in one embodiment, another specific implementation of converting the voice information into text information and displaying it in the viewport region of the AR glasses includes:
Step S902: determine the direction of the sound source of the voice information.
Step S904: perform face recognition on the viewport-range image of the AR glasses based on the sound-source direction, to identify the speaker of the voice information.
Step S906: convert the voice information into text information.
Step S908: detect the spectral parameters of the voice information.
Step S910: distinguish the gender of the sound source based on the spectral parameters.
Step S912: determine a corresponding rendering mode based on the gender of the sound source.
Step S914: determine the display style of the text information from the rendering mode, and display the text in the viewport region.
In this embodiment, the speaker's gender is detected from the spectral parameters of the voice information and matched to a rendering mode, and the text is rendered in that mode before being shown on the near-eye display. This both personalizes the text display and optimizes how the text appears on the AR glasses, improving the user's reading experience.
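A very rough sketch of the gender distinction and style lookup follows. Splitting on fundamental frequency at about 165 Hz is a common rule of thumb, not a value from the patent, and the style presets are invented for illustration:

```python
def classify_voice(f0_hz, male_max_f0=165.0):
    """Crude male/female split on fundamental frequency; the 165 Hz
    boundary is an assumed rule of thumb."""
    return "male" if f0_hz < male_max_f0 else "female"

RENDER_STYLES = {  # illustrative style presets, not from the patent
    "male":   {"color": "#4A90D9", "font": "sans"},
    "female": {"color": "#D94A8C", "font": "sans"},
}

def style_for(f0_hz):
    """Pick the caption rendering style from the estimated voice pitch."""
    return RENDER_STYLES[classify_voice(f0_hz)]
```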
In one embodiment, as a further supplement to converting the voice information into text information and displaying it in the viewport region of the AR glasses in step S206, the method also includes: detecting the distance to the speaker based on the speaker's visual features, and adjusting the size of the text box of the text information in step with the detected distance.
In this embodiment, the distance between the near-eye display and the information source is determined with a depth camera or a distance-mapping algorithm, and the size of the text box is set accordingly. For example, when the source is far away it occupies a small area of the viewport, so the text box can be enlarged; when the source is near it occupies a large area, so the text box can be reduced to avoid covering the source. This improves the sense of interaction between the user and the information source while the text is being read.
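The distance-to-box-size relation can be sketched as a clamped linear mapping; all of the bounds below are illustrative assumptions:

```python
def textbox_scale(distance_m, near_m=0.5, far_m=5.0,
                  min_scale=0.6, max_scale=1.4):
    """Larger caption box for a distant speaker (who occupies little of
    the viewport), smaller for a near one, clamped to the scale range."""
    d = min(max(distance_m, near_m), far_m)
    t = (d - near_m) / (far_m - near_m)  # 0 at near_m, 1 at far_m
    return min_scale + t * (max_scale - min_scale)
```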
In one embodiment, as a further supplement to step S206 — converting the collected voice information into text information and displaying it in the viewport region of the AR glasses — the method also includes: when the collected voice information is detected to require translation, invoking a translation model for the target language to translate it and obtain translated text; and displaying the translated text as the text information in the viewport region of the AR glasses.
In this embodiment, when the received voice information is detected to require translation, a translation model for the target language is invoked to translate it and obtain the translated text, extending the functionality of the hearing aid device.
It should be noted that the above figures are only schematic illustrations of the processing included in methods according to exemplary embodiments of the present disclosure and are not intended to be limiting. It is easy to understand that the processing shown in the figures does not indicate or restrict its temporal order, and that the processing may, for example, be performed synchronously or asynchronously in multiple modules.
Those skilled in the art will appreciate that aspects of the present disclosure may be implemented as a system, a method, or a program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, and the like), or an embodiment combining hardware and software, which may collectively be referred to herein as a "circuit", "module", or "system".
A hearing-assistance control apparatus 1000 according to such an embodiment of the present disclosure is described below with reference to FIG. 10. The apparatus 1000 shown in FIG. 10 is merely an example and imposes no limitation on the functionality or scope of use of the embodiments of the present disclosure.
The hearing-assistance control apparatus 1000 is embodied in the form of hardware modules. Its components may include, but are not limited to: a detection module 1002 for playing hearing-test audio to obtain a feedback signal from the user wearing the hearing aid device based on the hearing-test audio, and determining the user's hearing assessment result based on the feedback signal; a collection module 1004 for collecting the voice information to be processed when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation; and a display module 1006 for converting collected voice information into text information and displaying the text information in the viewport region of the AR glasses.
In one embodiment, the hearing aid device further includes a bone-conduction vibration sensor arranged on the AR glasses and able to contact the user's vocal-cord region. The detection module 1002 is further configured to detect the user's vocal-cord vibration signal with the bone-conduction vibration sensor when voice information is collected, and to examine the vocal-cord vibration signal with a feature-comparison model to determine whether the user is the source of the voice information. The hearing-assistance control apparatus 1000 further includes a conversion module 1008 for converting the voice information into text information when the user is determined not to be its source.
In one embodiment, the detection module 1002 is further configured to display a hearing-test image in the viewport of the AR glasses, the image including multiple groups of long-short combination patterns and the character corresponding to each group. The hearing-assistance control apparatus 1000 further includes: a playback module 1010 for playing, in one assessment, at least one of multiple groups of long and short tones at a specified volume and/or pitch as the hearing-test audio, each group of tones corresponding to one group of patterns; and a receiving module 1012 for receiving the user's feedback on recognizing the tones as the feedback signal. The detection module 1002 is further configured to determine the user's hearing assessment result from the feedback, where the volumes include low, medium, and high, the pitches include low, medium, and high frequencies, and multiple assessments are performed at different volumes and/or different pitches.
In one embodiment, the receiving module 1012 is further configured to: after the long and short tones are played, display in the viewport a correct option and incorrect options for the corresponding character; receive the user's selection among the correct and incorrect options; and determine the selection as the feedback.
In one embodiment, the receiving module 1012 is further configured to collect the user's spoken identification of the character corresponding to the tones and determine that speech as the feedback.
In one embodiment, the detection module 1002 is further configured to: determine, from the feedback, the character fed back by the user; check whether the fed-back character is correct; and, based on the check result, assess the volume region the user can recognize and the user's tone-loss type as the user's hearing assessment result.
In one embodiment, the apparatus further includes a voice-information processing module 1014 for performing an amplification operation on collected voice information when the volume region is the first volume region, performing the amplification operation and a display operation when it is the second volume region, and performing the display operation when it is the third volume region.
In one embodiment, the voice-information processing module 1014 is further configured to detect the intensity and frequency parameters of the voice information and use a dynamic amplifier to automatically adjust the gain, bringing the intensity and frequency parameters into a comfortable listening range.
In one embodiment, the voice-information processing module 1014 is further configured to compensate the amplified voice information for the missing frequencies according to the user's tone-loss type when tone loss is detected for the user.
In one embodiment, the display module 1006 is further configured to: determine the direction of the sound source of the voice information; perform face recognition on the viewport-range image of the AR glasses based on the sound-source direction, to identify the speaker of the voice information; and convert the voice information into text information and display the text in the viewport region of the hearing aid device corresponding to the speaker.
In one embodiment, the display module 1006 is further configured to: detect the spectral parameters of the voice information; distinguish the gender of the sound source based on the spectral parameters; determine a corresponding rendering mode based on the gender; and determine the display style of the text from that rendering mode before displaying it in the viewport region.
In one embodiment, the display module 1006 is further configured to detect the distance to the speaker based on the speaker's visual features and adjust the size of the text box of the text information in step with the detected distance.
In one embodiment, the display module 1006 is further configured to: when collected voice information is detected to require translation, invoke a translation model for the target language to translate it and obtain translated text; and display the translated text as the text information in the viewport region of the AR glasses.
A hearing aid device 1100 according to such an embodiment of the present disclosure is described below with reference to FIG. 11. The hearing aid device 1100 shown in FIG. 11 is merely an example and imposes no limitation on the functionality or scope of use of the embodiments of the present disclosure.
As shown in FIG. 11, the hearing aid device 1100 takes the form of a general-purpose computing device. Its components may include, but are not limited to: at least one processing unit 1110, at least one storage unit 1120, and a bus 1130 connecting the different system components (including the storage unit 1120 and the processing unit 1110).
The storage unit stores program code executable by the processing unit 1110, causing the processing unit 1110 to perform the steps of the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section above. For example, the processing unit 1110 may perform steps S202, S204, and S206 shown in FIG. 2, as well as the other steps defined in the hearing-assistance control method of the present disclosure.
The storage unit 1120 may include readable media in the form of volatile storage, such as a random-access memory (RAM) 11201 and/or a cache 11202, and may further include a read-only memory (ROM) 11203.
The storage unit 1120 may also include a program/utility 11204 having a set of (at least one) program modules 11205, such program modules 11205 including, but not limited to: an operating system, one or more applications, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
The bus 1130 may represent one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processing unit or local bus using any of a variety of bus architectures.
The hearing aid device 1100 may also communicate with one or more external devices 1160 (such as a keyboard, pointing device, or Bluetooth device), with one or more devices that enable a user to interact with the hearing aid device, and/or with any device (such as a router or modem) that enables the hearing aid device 1100 to communicate with one or more other computing devices. Such communication can occur via input/output (I/O) interfaces 1150. The hearing aid device 1100 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 1150. As shown in the figure, the network adapter 1150 communicates with the other modules of the hearing aid device 1100 over the bus 1130. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the hearing aid device, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
From the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described here may be implemented in software, or in software combined with the necessary hardware. Accordingly, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored on a non-volatile storage medium (such as a CD-ROM, USB flash drive, or portable hard disk) or on a network, and which includes a number of instructions that cause a computing device (such as a personal computer, server, terminal apparatus, or network device) to perform the methods of the embodiments of the present disclosure.
In exemplary embodiments of the present disclosure, a computer-readable storage medium is also provided, on which is stored a program product capable of implementing the methods described above in this specification. In some possible implementations, aspects of the present disclosure may also take the form of a program product including program code which, when the product runs on a terminal device, causes the terminal device to perform the steps of the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section above.
A program product for implementing the above methods according to embodiments of the present disclosure may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited to this; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
Program code contained on a readable medium may be transmitted over any suitable medium, including but not limited to wireless, wired, optical fiber, RF, or any suitable combination of the above.
Program code for carrying out the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented languages such as Java and C++ as well as conventional procedural languages such as "C" or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a standalone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computing device (for example, through the Internet using an Internet service provider).
It should be noted that although several modules or units of the device for action execution are mentioned in the detailed description above, this division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one module or unit described above may be further divided among multiple modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order, or that all of the steps shown must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
From the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described here may be implemented in software, or in software combined with the necessary hardware. Accordingly, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored on a non-volatile storage medium (such as a CD-ROM, USB flash drive, or portable hard disk) or on a network, and which includes a number of instructions that cause a computing device (such as a personal computer, server, mobile terminal, or network device) to perform the methods of the embodiments of the present disclosure.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the present disclosure. The specification and examples are to be regarded as exemplary only, the true scope and spirit of the present disclosure being indicated by the appended claims.

Claims (19)

  1. A hearing-assistance control method applied to a hearing aid device, the hearing aid device comprising AR glasses and a sound collection module and an in-ear playback module arranged on the AR glasses, the sound collection module being configured to collect voice and the in-ear playback module being configured to play audio, the hearing-assistance control method comprising:
    playing hearing-test audio to obtain a feedback signal from a user wearing the hearing aid device based on the hearing-test audio, and determining the user's hearing assessment result based on the feedback signal;
    when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation, collecting voice information to be processed;
    upon collecting the voice information, converting the voice information into text information and displaying the text information in a viewport region of the AR glasses.
  2. The hearing-assistance control method of claim 1, wherein the hearing aid device further comprises a bone-conduction vibration sensor arranged on the AR glasses, the bone-conduction vibration sensor being able to contact the user's cranial bone region,
    wherein converting the voice information into text information and displaying the text information in the viewport region of the AR glasses upon collecting the voice information specifically comprises:
    upon collecting the voice information, detecting the user's vocal-cord vibration signal with the bone-conduction vibration sensor;
    examining the vocal-cord vibration signal with a feature-comparison model, to judge from the examination result whether the user is the source of the voice information;
    determining a corresponding rendering mode based on the judgment result, so that when the voice information is converted into the text information, the text information is displayed in the viewport region of the AR glasses in the corresponding rendering mode,
    wherein different rendering modes are configured based on at least one of color, font, display scale, and display speed.
  3. The hearing-assistance control method of claim 2, wherein determining the corresponding rendering mode based on the judgment result, so that when the voice information is converted into the text information the text information is displayed in the viewport region of the AR glasses in the corresponding rendering mode, specifically comprises:
    when the user is determined not to be the source of the voice information, performing the display operation in a first rendering mode;
    when the user is determined to be the source of the voice information, performing the display operation in a second rendering mode,
    wherein, while the text information is displayed in the second rendering mode, feedback information from the user is received so as to judge the user's pronunciation level based on the feedback information.
  4. The hearing-assistance control method of claim 1, wherein converting the voice information into text information and displaying the text information in the viewport region of the AR glasses specifically comprises:
    determining the direction of the sound source of the voice information;
    performing face recognition on a viewport-range image of the AR glasses based on the sound-source direction, to identify the speaker of the voice information;
    converting the voice information into the text information, and displaying the text information in the viewport region of the hearing aid device corresponding to the speaker.
  5. The hearing-assistance control method of claim 1, wherein converting the voice information into text information and displaying the text information in the viewport region of the AR glasses further comprises:
    detecting spectral parameters of the voice information;
    distinguishing the gender of the sound source of the voice information based on the spectral parameters;
    determining a corresponding rendering mode based on the gender of the sound source;
    determining the display style of the text information from the corresponding rendering mode, and displaying the text information in the viewport region.
  6. The hearing-assistance control method of claim 4, wherein converting the voice information into text information and displaying the text information in the viewport region of the AR glasses further comprises:
    detecting the distance to the speaker based on the speaker's visual features;
    adjusting the size of the text box of the text information in step with the monitored distance to the speaker.
  7. The hearing-assistance control method of any one of claims 1 to 6, wherein converting the voice information into text information and displaying the text information in the viewport region of the AR glasses upon collecting the voice information further comprises:
    when the collected voice information is detected to require translation, invoking a translation model for the target language to translate the voice information and obtain translated text;
    displaying the translated text as the text information in the viewport region of the AR glasses.
  8. The hearing-assistance control method of any one of claims 1 to 7, wherein playing the hearing-test audio to obtain the feedback signal from the user wearing the hearing aid device based on the hearing-test audio, and determining the user's hearing assessment result based on the feedback signal, specifically comprises:
    displaying a hearing-test image in the viewport of the AR glasses, the hearing-test image including multiple groups of long-short combination patterns and the character corresponding to each group of patterns;
    in one assessment, playing at least one of multiple groups of long and short tones at a specified volume and/or pitch as the hearing-test audio, each group of long and short tones corresponding to one group of long-short combination patterns;
    receiving the user's feedback on recognizing the long and short tones as the feedback signal;
    determining the user's hearing assessment result based on the feedback,
    wherein the volumes include low, medium, and high, the pitches include low, medium, and high frequencies, and multiple assessments are performed at different volumes and/or different pitches.
  9. The hearing-assistance control method of claim 8, wherein receiving the user's feedback on recognizing the long and short tones specifically comprises:
    after playing the long and short tones, displaying in the viewport a correct option and incorrect options for the character corresponding to the tones;
    receiving the user's selection among the correct and incorrect options, and determining the selection as the feedback.
  10. The hearing-assistance control method of claim 8, wherein receiving the user's feedback on recognizing the long and short tones specifically comprises:
    collecting the user's spoken identification of the character corresponding to the tones, and determining that speech as the feedback.
  11. The hearing-assistance control method of claim 8, wherein determining the user's hearing assessment result based on the feedback specifically comprises:
    determining, from the feedback, the character fed back by the user;
    checking whether the fed-back character is correct;
    assessing, based on the check result, the volume region the user can recognize and the user's tone-loss type as the user's hearing assessment result.
  12. The hearing-assistance control method of claim 11, wherein, before collecting the voice information to be processed when it is determined from the hearing assessment result that the hearing aid device needs to perform the display operation, the method further comprises:
    when the volume region is a first volume region, performing an amplification operation on the collected voice information;
    when the volume region is a second volume region, performing the amplification operation and the display operation on the collected voice information;
    when the volume region is a third volume region, performing the display operation on the collected voice information.
  13. The hearing-assistance control method of claim 12, wherein performing the amplification operation on the collected voice information specifically comprises:
    detecting intensity and frequency parameters of the voice information, and using a dynamic amplifier to automatically adjust the gain of the intensity and frequency parameters, bringing the intensity and frequency parameters into a comfortable listening range.
  14. The hearing-assistance control method of claim 12, wherein performing the amplification operation on the collected voice information further comprises:
    when tone loss is detected for the user, performing missing-frequency compensation on the amplified voice information according to the user's tone-loss type.
  15. A hearing-assistance control apparatus applied to a hearing aid device, comprising:
    a detection module, configured to play hearing-test audio to obtain a feedback signal from a user wearing the hearing aid device based on the hearing-test audio, and to determine the user's hearing assessment result based on the feedback signal;
    a collection module, configured to collect voice information to be processed when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation;
    a display module, configured to convert, upon its collection, the voice information into text information and display the text information in a viewport region of the AR glasses.
  16. A hearing aid device, comprising:
    AR glasses;
    an in-ear playback module arranged on the AR glasses, configured to play hearing-test audio;
    a processor, configured to obtain a feedback signal from a user wearing the hearing aid device based on the hearing-test audio, and to determine the user's hearing assessment result based on the feedback signal;
    a sound collection module arranged on the AR glasses, configured to collect voice information to be processed when it is determined from the hearing assessment result that the hearing aid device needs to perform a display operation;
    the AR glasses being further configured to: upon collection of the voice information, convert the voice information into text information and display the text information in a viewport region of the AR glasses.
  17. The hearing aid device of claim 16, further comprising:
    a bone-conduction vibration sensor arranged on the AR glasses, the bone-conduction vibration sensor being able to contact the user's vocal-cord region and being configured to detect the user's vocal-cord vibration signal;
    the processor being further configured to examine the vocal-cord vibration signal with a feature-comparison model, to determine whether the user is the source of the voice information;
    the processor being further configured to convert the voice information into text information when the user is determined not to be the source of the voice information.
  18. A hearing aid device, comprising:
    a processor; and
    a memory configured to store instructions executable by the processor;
    wherein the processor is configured to perform, by executing the executable instructions, the hearing-assistance control method of any one of claims 1 to 14.
  19. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the hearing-assistance control method of any one of claims 1 to 14.
PCT/CN2022/093543 2021-10-25 2022-05-18 Hearing-assistance control method and apparatus, hearing aid device, and storage medium WO2023071155A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111252280.1A CN114007177B (zh) 2021-10-25 2021-10-25 Hearing-assistance control method and apparatus, hearing aid device, and storage medium
CN202111252280.1 2021-10-25

Publications (1)

Publication Number Publication Date
WO2023071155A1 true WO2023071155A1 (zh) 2023-05-04

Family

ID=79924459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093543 WO2023071155A1 (zh) 2021-10-25 2022-05-18 助听控制方法、装置、助听设备和存储介质

Country Status (2)

Country Link
CN (1) CN114007177B (zh)
WO (1) WO2023071155A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114007177B (zh) 2021-10-25 2024-01-26 北京亮亮视野科技有限公司 Hearing-assistance control method and apparatus, hearing aid device, and storage medium
CN115064036A (zh) 2022-04-26 2022-09-16 北京亮亮视野科技有限公司 AR-based hazard warning method and apparatus
CN115079833B (zh) 2022-08-24 2023-01-06 北京亮亮视野科技有限公司 Multi-layer interface and information visualization presentation method and system based on motion-sensing control

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150319546A1 (en) * 2015-04-14 2015-11-05 Okappi, Inc. Hearing Assistance System
WO2016167877A1 (en) * 2015-04-14 2016-10-20 Hearglass, Inc Hearing assistance systems configured to detect and provide protection to the user harmful conditions
CN207612422U * 2017-12-07 2018-07-13 杭州蓝斯特科技有限公司 Visual hearing-assistance device
CN108702580A * 2016-02-19 2018-10-23 微软技术许可有限责任公司 Hearing assistance with automatic speech transcription
CN108877407A * 2018-06-11 2018-11-23 北京佳珥医学科技有限公司 Method, apparatus, and system for assisting communication, and augmented-reality glasses
EP3409319A1 * 2017-06-02 2018-12-05 Advanced Bionics AG System for neural hearing stimulation integrated with a pair of glasses
US20200265839A1 * 2019-02-20 2020-08-20 John T. McAnallan Glasses with subtitles
CN111640448A * 2020-06-03 2020-09-08 山西见声科技有限公司 Audio-visual assistance method and system based on speech enhancement
CN114007177A * 2021-10-25 2022-02-01 北京亮亮视野科技有限公司 Hearing-assistance control method and apparatus, hearing aid device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011158506A1 * 2010-06-18 2011-12-22 パナソニック株式会社 Hearing aid, signal processing method, and program
CN103297889B (zh) * 2013-06-03 2017-04-12 瑞声科技(南京)有限公司 In-ear earphone
US10257619B2 (en) * 2014-03-05 2019-04-09 Cochlear Limited Own voice body conducted noise management
CN110719558B (zh) * 2018-07-12 2021-07-09 深圳市智听科技有限公司 Hearing aid fitting method and apparatus, computer device, and storage medium
CN111447539B (zh) * 2020-03-25 2021-06-18 北京聆通科技有限公司 Fitting method and apparatus for hearing earphones
CN214205842U (zh) * 2020-12-30 2021-09-14 苏州迈麦精密科技有限公司 In-ear bone-conduction hearing aid

Also Published As

Publication number Publication date
CN114007177B (zh) 2024-01-26
CN114007177A (zh) 2022-02-01
