CN117597941A - Respiration monitoring method, device, earphone and storage medium - Google Patents

Respiration monitoring method, device, earphone and storage medium

Info

Publication number
CN117597941A
Authority
CN
China
Prior art keywords
respiratory
audio signal
frequency
frequency range
waveform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280004450.0A
Other languages
Chinese (zh)
Inventor
周岭松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd, Beijing Xiaomi Pinecone Electronic Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Publication of CN117597941A publication Critical patent/CN117597941A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Embodiments of the disclosure provide a respiration monitoring method, a respiration monitoring device, an earphone, and a storage medium. The respiration monitoring method is applied to an earphone comprising a feedback microphone, and comprises the following steps: collecting an audio signal in the auditory canal through the feedback microphone to obtain an auditory canal audio signal (S100); filtering the auditory canal audio signal to obtain a respiratory audio signal, wherein the respiratory audio signal is the audio signal generated when vibration produced by the user during respiration is transmitted to the auditory canal by bone conduction while the earphone is worn by the user (S200); determining a respiratory frequency of the user from the respiratory audio signal (S300); and determining whether the respiratory frequency is abnormal based on the respiratory frequency and a reference frequency range (S400).

Description

Respiration monitoring method, device, earphone and storage medium
Technical Field
The present disclosure relates to the field of information processing technology, and in particular, to a respiration monitoring method, apparatus, earphone, and storage medium.
Background
With the development of technology, more and more electronic devices appear in each application scene, and different electronic devices can realize different functions in corresponding application scenes.
With the widespread use of health monitoring devices, the health status of a monitored subject can be detected by such devices. Respiration is an important indicator of health and plays an important role in determining health status. By monitoring respiration, it can be determined whether the respiration of the monitored subject is normal.
Disclosure of Invention
Embodiments of the present disclosure provide a respiration monitoring method, a respiration monitoring device, an earphone, and a storage medium.
A first aspect of an embodiment of the present disclosure provides a respiration monitoring method, wherein the method is applied to a headset, the headset including a feedback microphone, the method comprising:
collecting an audio signal in an auditory canal through the feedback microphone to obtain an auditory canal audio signal;
filtering the auditory canal audio signal to obtain a respiratory audio signal; wherein the respiratory audio signal is the audio signal generated when vibration produced by the user during respiration is transmitted to the auditory canal by bone conduction while the earphone is worn by the user;
determining a respiratory rate of the user from the respiratory audio signal;
and determining whether the breathing frequency of the user is abnormal according to the breathing frequency and the reference frequency range.
A second aspect of embodiments of the present disclosure provides a respiration monitoring device for use with a headset that includes a feedback microphone; the device comprises:
the auditory canal audio signal acquisition module is configured to acquire an audio signal in an auditory canal through the feedback microphone to obtain an auditory canal audio signal;
the respiratory audio signal determining module is configured to perform filtering processing on the auditory canal audio signal to obtain a respiratory audio signal; wherein the respiratory audio signal is the audio signal generated when vibration produced by the user during respiration is transmitted to the auditory canal by bone conduction while the earphone is worn by the user;
a respiratory frequency determination module configured to determine a respiratory frequency of the user from the respiratory audio signal;
an anomaly determination module configured to determine whether the respiratory frequency is anomalous based on the respiratory frequency and a reference frequency range.
A third aspect of the disclosed embodiments provides an earphone comprising a housing, and a controller, a feedback microphone, a feedforward microphone, and a speaker disposed on the housing; the feedforward microphone is connected with the controller and is configured to collect audio data outside the auditory canal and send the audio data to the controller; the feedback microphone is connected with the controller and is configured to collect audio data in the auditory canal and send the audio data to the controller; the controller comprises a memory storing executable computer instructions and a processor capable of invoking the computer instructions stored in the memory to perform the respiration monitoring method provided in the first aspect.
A fourth aspect of the disclosed embodiments provides a computer storage medium storing an executable program; the executable program, when executed by a processor, implements the respiration monitoring method provided in the first aspect.
The respiration monitoring method provided by the embodiments of the disclosure can be applied to an earphone, and the respiratory frequency of the user can be determined through the earphone alone, without additional monitoring sensors. This improves the convenience of monitoring the user's respiratory frequency and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the embodiments of the invention.
FIG. 1 is a schematic diagram illustrating a respiration monitoring method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of an earphone according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating an earphone in a state worn by a user according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a method of obtaining a respiratory audio signal according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating one determination of respiratory rate according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating one method of determining a target amplitude value according to an exemplary embodiment;
FIG. 7 is a schematic diagram illustrating one determination of a target frequency range and a reference frequency range, according to an example embodiment;
FIG. 8 is a schematic diagram illustrating one determination of a target frequency range, according to an example embodiment;
FIG. 9 is a schematic diagram illustrating one determination of a reference frequency range, according to an example embodiment;
FIG. 10 is a schematic diagram of a respiratory monitoring device, according to an exemplary embodiment;
fig. 11 is a schematic diagram showing a structure of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with embodiments of the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of embodiments of the invention.
The terminology used in the embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the disclosure. As used in this disclosure, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the embodiments of the present disclosure, the first information may also be referred to as second information and, similarly, the second information may also be referred to as first information. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In general, the respiratory rate can reflect a person's physical condition, and professional monitoring equipment is usually required to monitor it. For example, when a user visits a hospital, a doctor may monitor the user's respiratory rate through dedicated respiration monitoring devices such as piezoelectric, thermal, and/or infrared sensors. The sensors are deployed at corresponding locations on the body; when the person breathes, slight movement occurs at those locations, and the sensors generate electrical signals from this movement, from which the respiratory rate can be determined. Different respiratory rates can be identified by analyzing the signals sensed by the different sensors, and the associated operations require specialized healthcare personnel to complete.
When the respiratory rate is monitored with these sensors, the user must wear multiple sensors at a specific place and in a specific state, for example in the corresponding hospital department and in a non-moving, e.g. stationary, state. The respiratory rate therefore cannot be monitored anytime and anywhere, which reduces the convenience of monitoring the respiratory rate and degrades the user experience.
Referring to fig. 1, a schematic diagram of a respiration monitoring method according to an embodiment of the present disclosure is shown, where the method may be applied to at least a headset, and the headset may include at least a feedback microphone.
As shown in fig. 1, the method includes:
step S100, collecting an audio signal in an auditory canal through a feedback microphone to obtain an auditory canal audio signal;
step S200, filtering the auditory canal audio signal to obtain a respiratory audio signal; the respiratory audio signal is the audio signal generated when vibration produced by the user during respiration is transmitted to the auditory canal by bone conduction while the earphone is worn by the user.
Step S300, determining the breathing frequency of the user according to the breathing audio signal.
Step S400, determining whether the breathing frequency is abnormal according to the breathing frequency and the reference frequency range.
The headphones may take different forms, including in-ear, semi-in-ear, and head-mounted (over-ear) headphones. In terms of connection, the headphones may be wired or wireless; wireless headphones include Bluetooth headphones such as true wireless stereo (True Wireless Stereo, TWS) earphones. The earphone may also be a device such as a hearing aid that has a feedback microphone and can implement the solution.
The feedback microphone in the earphone may be located near the sound outlet channel of the earphone. For an in-ear earphone in the worn state, the feedback microphone sits in the auditory canal and can collect audio signals in the auditory canal. For earphones of other forms, such as semi-in-ear or head-mounted earphones, the feedback microphone can likewise collect audio signals in the auditory canal when the earphone is worn.
Referring to fig. 2, fig. 2 is a schematic diagram of an earphone comprising a feedback microphone a, which may be located in the auditory canal when the earphone is worn. The earphone may further comprise a feedforward microphone b, which may be located on the earphone stem; when the earphone is worn, the feedforward microphone is located outside the auditory canal and can collect ambient audio signals from the external environment. The earphone may further comprise a conversation microphone c for collecting the audio signal emitted by the user in a call state. The audio signal collected by the feedback microphone a has a higher signal-to-noise ratio than that collected by the feedforward microphone b, so the auditory canal audio signal collected by the feedback microphone a contains less noise and is of higher quality.
When the earphone is a head-mounted headphone, a certain degree of ear-blocking effect is also formed; in the worn state, the feedback microphone a may be located outside the auditory canal or oriented toward it, and can still collect the auditory canal audio signal.
Referring to fig. 3, fig. 3 is a schematic diagram of the earphone in the state of being worn by a user. The earphone 1 blocks the auditory canal 2 to a certain degree, thereby forming an ear-blocking effect. The auditory canal audio signal collected by the feedback microphone includes the audio signal generated when vibration produced by the user during breathing is transmitted to the auditory canal by bone conduction while the earphone is worn, i.e. audio signal 3.
For step S100, when the earphone is in the worn state it partially blocks the auditory canal and forms a certain degree of ear-blocking effect. The cause is that part of the sound is transmitted to the inner ear through the bones of the human body; for example, when the user breathes, vibration is generated between the airflow and the respiratory tract (the nasal cavity is close to the auditory canal), and this vibration is transmitted to the auditory canal by bone conduction as an audio signal. When the earphone is not worn, part of the bone-conducted sound diffuses outward through the outer ear; when the earphone is worn, the auditory canal is partially blocked, so the amount of bone-conducted sound diffusing outward through the auditory canal is reduced, and a certain degree of ear-blocking effect, i.e. an occlusion effect, is formed. Acoustically, the occlusion effect manifests as low-frequency emphasis and high-frequency attenuation.
Because the earphone partially blocks the auditory canal and forms an ear-blocking effect of a certain degree, it also prevents external audio signals from entering the auditory canal, which reduces the influence of external audio signals on the audio signal inside the auditory canal. The feedback microphone can then collect the audio signal in the auditory canal to obtain the auditory canal audio signal.
For step S200, the auditory canal audio signal is filtered to obtain the respiratory audio signal. Since the auditory canal audio signal may include other audio signals in addition to the respiratory audio signal, filtering the auditory canal audio signal yields the respiratory audio signal. The filtering may be performed by a frequency-based filtering algorithm or by a filter.
The respiratory audio signal is the audio signal generated when vibration produced by breathing is transmitted to the auditory canal by bone conduction while the earphone is worn by the user. The vibration produced by the user during breathing may include vibration of the airflow in the respiratory tract and/or vibration of the respiratory tract itself, which may include the nasal cavity.
After an ear-blocking effect of a certain degree is formed, the vibration produced by the user during breathing can be conducted through bone to the auditory canal and produce the respiratory audio signal, for example an audio signal in the range of 0.1 Hz to 10 Hz. The ear-blocking effect amplifies the audio signal in this frequency range, which makes it easier for the feedback microphone to collect the respiratory audio signal. The range may be adjusted according to the user's motion state.
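For illustration only, isolating the example 0.1 Hz to 10 Hz band mentioned above from the auditory canal audio signal could be sketched as follows in Python. This sketch is not part of the embodiment: the sample rate (the signal is assumed to have been downsampled to 100 Hz), the filter order, and the choice of a Butterworth band-pass filter are assumptions; only the example band edges come from the text.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def extract_breathing_band(ear_canal_signal, fs=100.0,
                               low_hz=0.1, high_hz=10.0, order=2):
        # Keep only the assumed breathing band (0.1-10 Hz per the example above).
        # fs, order and the filter type are illustrative assumptions.
        nyq = fs / 2.0
        sos = butter(order, [low_hz / nyq, high_hz / nyq],
                     btype="bandpass", output="sos")
        # Zero-phase filtering avoids shifting the breathing waveform in time.
        return sosfiltfilt(sos, ear_canal_signal)

    # Example: a 0.4 Hz "breathing" component buried in broadband noise.
    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    noisy = np.sin(2 * np.pi * 0.4 * t) + 0.5 * np.random.randn(t.size)
    breathing_signal = extract_breathing_band(noisy, fs)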
For step S300, after the respiratory audio signal is obtained, the respiratory frequency of the user may be determined from it. The respiratory audio signal reflects the user's breathing; it is an audio signal in the time domain, so it can be converted between the time and frequency domains and the conversion result analyzed to obtain the respiratory frequency. Any manner of deriving the respiratory frequency from the respiratory audio signal is within the scope of this embodiment; the detailed determination is described in the subsequent embodiments.
For step S400, after the respiratory frequency is determined, whether it is abnormal can be determined from the respiratory frequency and the reference frequency range: when the respiratory frequency falls outside the reference frequency range, the respiratory frequency is determined to be abnormal; when it falls within the reference frequency range, the respiratory frequency is determined to be normal.
The reference frequency range may be determined according to the actual usage scenario and the state of the user's body. The state of the user's body, i.e. the physical state, may include a motion state and a non-motion state. The non-motion state may include sitting, standing, or sleeping; for example, the sitting state corresponds to a first reference frequency range and the sleeping state corresponds to a second reference frequency range. The motion state may include different kinds of exercise such as running, ball playing, or swimming; for example, the running state corresponds to a third reference frequency range and the ball-playing state corresponds to a fourth reference frequency range.
The respiratory frequency is matched against the corresponding reference frequency range: when the respiratory frequency lies within the corresponding reference frequency range, the respiratory frequency is normal; when it lies outside the corresponding reference frequency range, the respiratory frequency is abnormal.
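As a minimal sketch of this matching step, the comparison against a state-dependent reference frequency range could look as follows; the concrete range values in Hz below are hypothetical placeholders, not values given by the embodiment.

    # Hypothetical per-state reference frequency ranges in Hz (placeholder values).
    REFERENCE_RANGES = {
        "sitting": (0.2, 0.5),       # first reference frequency range (assumed)
        "sleeping": (0.15, 0.4),     # second reference frequency range (assumed)
        "running": (0.5, 1.0),       # third reference frequency range (assumed)
        "ball_playing": (0.4, 0.9),  # fourth reference frequency range (assumed)
    }

    def is_breathing_abnormal(breathing_hz, body_state):
        # Step S400: abnormal if the frequency lies outside the range
        # matching the user's current physical state.
        low, high = REFERENCE_RANGES[body_state]
        return not (low <= breathing_hz <= high)

    print(is_breathing_abnormal(0.3, "sitting"))   # False -> normal
    print(is_breathing_abnormal(1.2, "sleeping"))  # True  -> abnormal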
According to the method and the device of the present disclosure, the respiratory audio signal of the user can be obtained through the earphone, the respiratory frequency of the user can then be detected from the respiratory audio signal, and whether the respiratory frequency matches the corresponding reference frequency range can be determined, thereby determining whether the respiratory frequency is abnormal. The existing built-in microphone is used, so no new hardware cost and no additional monitoring equipment such as various other sensors are needed; the respiratory frequency can be determined through the earphone alone. This reduces the difficulty and inconvenience of monitoring the user's respiratory frequency, improves the convenience of monitoring, and improves the user experience.
In one embodiment, referring to fig. 4, fig. 4 is a schematic diagram of a method of deriving a respiratory audio signal. The method comprises the following steps:
step S201, framing the auditory canal audio signal to obtain an auditory canal audio signal after multi-frame framing.
Step S202, filtering the auditory canal audio signals obtained after multi-frame framing to obtain multi-frame respiratory audio signals.
After the feedback microphone in the earphone collects the audio signal in the auditory canal, the auditory canal audio signal is obtained. To facilitate subsequent processing, the collected auditory canal audio signal is divided into frames, yielding a multi-frame auditory canal audio signal. The multi-frame auditory canal audio signal obtained after framing is more stable and continuous and is convenient for subsequent processing.
The frame length of the ear canal audio signal after framing can be determined according to practical requirements, for example, 16ms, 32ms, 64ms, etc. In order to make a smooth transition between frames of an audio signal, and maintain its continuity, there may be a frame shift between two adjacent frames during framing, which may be half the frame length. The frame length of each frame of the audio signal after framing may be the same.
Since other sounds can also be transmitted into the auditory canal by bone conduction, the feedback microphone may also pick up audio signals other than the respiratory audio signal, such as the sounds of running, walking, speaking, teeth knocking together, or the head bumping into objects. The auditory canal audio signal collected by the feedback microphone may therefore include, besides the respiratory audio signal, other audio signals; relative to the respiratory audio signal these are noise signals and need to be filtered out before the respiratory frequency is determined from the respiratory audio signal.
Because the frequency of the respiratory audio signal differs from that of the other audio signals, the auditory canal audio signal can be filtered to remove the audio signals other than the respiratory audio signal. For example, the frequency range of the respiratory audio signal may be 0.1 Hz to 5 Hz, centered around 0.5 Hz. The frequencies of other audio signals such as running, walking, talking, teeth knocking together, or the head bumping into objects are not within this range, or overlap it only slightly; the frequencies of such signals are generally higher than those of the respiratory audio signal, and their amplitudes are also larger. The noise signals can therefore be filtered out according to frequency. The frequency range of the respiratory audio signal may be adjusted according to the physical state of the user.
The filtered auditory canal audio signal is the respiratory audio signal. Filtering reduces the interference of the noise signals with the respiratory audio signal and hence with the determined respiratory frequency, which helps improve the accuracy of the determined respiratory frequency.
The filtering may be performed in various ways, for example by filtering each framed audio signal via a Fourier transform to obtain multi-frame respiratory audio signals, or by filtering each framed audio signal via a wavelet transform. Any manner of filtering each framed audio signal to remove the audio signals other than the respiratory audio signal and obtain multi-frame respiratory audio signals is within the protection scope of this embodiment.
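The framing and per-frame filtering of this embodiment might be sketched as follows. The frame length and half-frame shift follow the examples in the text, while the Fourier-domain masking is only one of the filtering options mentioned (a wavelet-based filter would be an equally valid choice) and the band edges are assumed. Note that a frame must span several seconds to resolve sub-hertz frequency content, so for the spectral steps in later embodiments either a much longer analysis window or filtering before framing would have to be assumed.

    import numpy as np

    def frame_signal(x, fs, frame_len_s=0.032, hop_ratio=0.5):
        # Split the ear canal signal into overlapping frames
        # (example frame lengths in the text: 16 ms, 32 ms, 64 ms;
        # frame shift of half the frame length).
        frame_len = int(round(frame_len_s * fs))
        hop = max(1, int(round(frame_len * hop_ratio)))
        n_frames = 1 + max(0, (len(x) - frame_len) // hop)
        return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

    def bandlimit_frame(frame, fs, low_hz=0.1, high_hz=5.0):
        # Fourier-domain filtering of one frame: zero the bins outside the
        # assumed breathing band and transform back.
        spectrum = np.fft.rfft(frame)
        freqs = np.fft.rfftfreq(frame.size, d=1.0 / fs)
        spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
        return np.fft.irfft(spectrum, n=frame.size)

    # frames = frame_signal(ear_canal_signal, fs)
    # breathing_frames = np.array([bandlimit_frame(f, fs) for f in frames])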
In one embodiment, referring to fig. 5, fig. 5 is a schematic diagram of a determination of respiratory rate. Step S300, determining a respiratory rate of the user according to the respiratory audio signal, including:
step S301, determining the frequency spectrum information of the respiratory audio signal;
step S302, detecting a target amplitude value in the respiratory audio signal in a target frequency range based on the frequency spectrum information; the target amplitude is the maximum peak value of the fundamental wave of the respiratory audio signal;
step S303, determining the breathing frequency according to the frequency corresponding to the target amplitude.
After the respiratory audio signal is determined, the respiratory audio signal is processed, and the frequency spectrum information of the respiratory audio signal is determined. Because the respiratory audio signal acquired by the feedback microphone is an audio signal in the time domain, the respiratory audio signal in the time domain can be converted to obtain an audio signal in the frequency domain, so that corresponding frequency spectrum information is obtained. For example, the respiratory audio signal in the time domain is converted into the respiratory audio signal in the frequency domain by means of fourier transformation, so that the spectral information of the respiratory audio signal can be obtained.
The spectral information includes the correspondence between frequency and amplitude. Based on the spectral information, the target amplitude of the respiratory audio signal can be detected within the target frequency range. The target amplitude is the amplitude, within the target frequency range, at the maximum peak of the fundamental wave of the respiratory audio signal in the spectral information, i.e. the amplitude at which the fundamental wave is largest. The fundamental wave of the respiratory audio signal reflects its spectral content, and determining the respiratory frequency from the frequency corresponding to the maximum peak of the fundamental wave improves the accuracy of the respiratory frequency. The target frequency range may be preset, or the corresponding target frequency range may be determined according to the physical state of the user.
In the spectral information, the larger the amplitude, the greater the intensity of the corresponding audio signal. The intensity of the respiratory audio signal is greatest at the target amplitude, so determining the respiratory frequency from the frequency corresponding to the target amplitude in the spectral information yields higher accuracy. The larger the amplitude, the higher the corresponding peak, and the maximum peak of the fundamental wave within the target frequency range corresponds to the fundamental wave's largest amplitude. Determining the respiratory frequency from the frequency corresponding to the target amplitude therefore improves the accuracy of the determination.
In another embodiment, in step S301, each frame of the respiratory audio signal may be processed in units of frames to determine the spectral information of each frame, so that the spectral information of every frame of the respiratory audio signal is obtained; this facilitates determining the respiratory frequency from the spectral information of at least one frame of the respiratory audio signal.
In step S302, a target amplitude corresponding to each frame of respiratory audio signal may be determined within the target frequency range according to the spectral information of each frame of respiratory audio signal.
Step S303 may further determine a frequency corresponding to the target amplitude corresponding to each frame of respiratory audio signal according to the frequency spectrum information corresponding to each frame of respiratory audio signal, and determine a respiratory frequency according to the frequency corresponding to the target amplitude corresponding to each frame of respiratory audio signal.
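A rough per-frame sketch of steps S301 and S302, assuming the frame is long enough to resolve the breathing band; the simple maximum-peak search below omits the fundamental-wave check, which is sketched separately after the description of the next embodiment.

    import numpy as np

    def frame_spectrum(frame, fs):
        # Step S301: spectral information (frequency vs. amplitude) of one frame.
        amplitude = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame.size, d=1.0 / fs)
        return freqs, amplitude

    def target_peak(freqs, amplitude, f_low, f_high):
        # Step S302 (simplified): largest spectral peak inside the target
        # frequency range [f_low, f_high]; returns (frequency, amplitude).
        in_range = (freqs >= f_low) & (freqs <= f_high)
        idx = int(np.argmax(np.where(in_range, amplitude, -np.inf)))
        return freqs[idx], amplitude[idx]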
In another embodiment, referring to fig. 6, fig. 6 is a schematic diagram of determining a target amplitude. Step S302, detecting a target amplitude in the respiratory audio signal in a target frequency range based on the spectral information, comprising:
s3021, detecting a maximum peak-to-peak value in a target frequency range based on the spectrum information.
S3022, determining whether a waveform with a second waveform frequency exists in a target frequency range according to a first waveform frequency of the waveform corresponding to the maximum peak value; wherein the first waveform frequency is an integer multiple of the second waveform frequency.
S3023, when it is determined that the waveform of the second waveform frequency exists, the waveform of the second waveform frequency is determined as the fundamental wave.
S3024, when it is determined that the waveform of the second waveform frequency does not exist, determining the waveform of the first waveform frequency as the fundamental wave;
s3025, determining the maximum peak-to-peak value of the fundamental wave as the target amplitude.
After the spectral information of the respiratory audio signal is obtained, the waveform of the respiratory audio signal can be determined from it. Because the waveform has peaks, the maximum peak is detected within the target frequency range according to the waveform in the spectral information; the intensity of the respiratory audio signal is greatest at this maximum peak, so it best reflects the respiratory audio signal. From the spectral information, the frequency of the waveform corresponding to the maximum peak within the target frequency range can be determined; this is recorded as the first waveform frequency. It is then detected whether a waveform with a second waveform frequency exists within the target frequency range, where the first waveform frequency is an integer multiple of the second waveform frequency. If a waveform with the second waveform frequency is detected, it is determined that such a waveform exists and that the waveform at the first waveform frequency is not the fundamental wave of the respiratory audio signal; for example, the waveform at the first waveform frequency may be a harmonic of the fundamental wave, whose frequency is an integer multiple of, and higher than, the fundamental frequency. Such a harmonic would adversely affect the determination of the target amplitude and hence the determination of the respiratory frequency.
If no waveform of the second waveform frequency is detected, it is determined that no waveform of the second waveform frequency exists, indicating that the waveform of the first waveform frequency is a fundamental wave of the respiratory audio signal in the target frequency range, and the waveform of the first waveform frequency is determined as the fundamental wave.
For example, if the spectral information of the respiratory audio signal contains two peaks with identical target amplitude values but different corresponding frequencies, one at 1 Hz and the other at 1.5 Hz, the frequency of 1 Hz is determined as the respiratory frequency.
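The fundamental-wave check of steps S3021 to S3025 could be sketched as follows. The text does not state how the presence of a waveform at the second waveform frequency is decided, so the relative-amplitude threshold, the frequency tolerance, and the limited set of integer factors tried below are assumptions for illustration.

    import numpy as np

    def fundamental_peak(freqs, amplitude, f_low, f_high,
                         rel_threshold=0.3, tol_hz=0.05):
        # S3021: maximum peak within the target frequency range.
        in_range = (freqs >= f_low) & (freqs <= f_high)
        peak_idx = int(np.argmax(np.where(in_range, amplitude, -np.inf)))
        f1, a1 = freqs[peak_idx], amplitude[peak_idx]  # first waveform frequency

        # S3022: look for a component at f2 such that f1 is an integer multiple of f2.
        for factor in (2, 3, 4):
            f2 = f1 / factor
            if f2 < f_low:
                break
            near = in_range & (np.abs(freqs - f2) <= tol_hz)
            if near.any() and amplitude[near].max() >= rel_threshold * a1:
                # S3023: a waveform at the second waveform frequency exists,
                # so it is taken as the fundamental wave.
                idx2 = np.flatnonzero(near)[int(np.argmax(amplitude[near]))]
                # S3025: its peak gives the target amplitude.
                return freqs[idx2], amplitude[idx2]
        # S3024: no such waveform; the waveform at f1 is the fundamental wave.
        return f1, a1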
In another embodiment, step S303, determining the respiratory rate according to the frequency corresponding to the target amplitude includes:
when the number of frames of the respiratory audio signal is multiple, determining the average of the frequencies corresponding to the target amplitudes in the multiple frames of the respiratory audio signal as the respiratory frequency.
The target amplitude in each frame of the respiratory audio signal is determined from that frame's spectral information, and the average of the frequencies corresponding to the target amplitudes over the multiple frames is then determined as the respiratory frequency. This reduces the impact of differences between individual target amplitudes on the accuracy of determining the respiratory frequency and thereby improves that accuracy.
For example, if the respiratory audio signal consists of 10 frames, the target amplitude of each frame is determined from that frame's spectral information: target amplitude 1 of the first frame is determined from the spectral information of the first frame, target amplitude 2 of the second frame from the spectral information of the second frame, …, and target amplitude 10 of the tenth frame from the spectral information of the tenth frame. Then, within the target frequency range, the frequencies corresponding to target amplitude 1 through target amplitude 10 are determined, and the average of these ten frequencies is determined as the respiratory frequency.
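A minimal sketch of the multi-frame case: the respiratory frequency is the average of the per-frame peak frequencies. The ten values below are hypothetical.

    import numpy as np

    def respiratory_frequency(per_frame_peak_freqs_hz):
        # Step S303, multi-frame case: average of the frequencies
        # corresponding to the target amplitude of each frame.
        return float(np.mean(per_frame_peak_freqs_hz))

    # Hypothetical peak frequencies (Hz) from 10 frames of the respiratory signal.
    print(respiratory_frequency([0.48, 0.51, 0.50, 0.49, 0.52,
                                 0.50, 0.47, 0.51, 0.50, 0.52]))  # about 0.5 Hz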
In another embodiment, referring to fig. 7, a schematic diagram of determining a target frequency range and a reference frequency range is shown. The method comprises the following steps:
Step A, acquiring the physical state of the user; the physical state comprises a motion state and a non-motion state, the motion state comprises at least one state corresponding to motion, and the non-motion state comprises at least one state corresponding to non-motion;
Step B, determining a target frequency range and a reference frequency range according to the physical state. Different physical states correspond to respective target frequency ranges and respective reference frequency ranges; in the same physical state, the reference frequency range is included in the target frequency range.
The physical state may be obtained through an external device connected with the earphone, in which case the external device determines the physical state and sends the physical state information to the earphone, or it may be determined by the earphone itself. The process of determining the physical state is not limited here; only the obtained physical-state result is used.
The target frequency range and the reference frequency range may be determined according to a physical state of the user, and when the physical state of the user is different, the corresponding target frequency range and reference frequency range may be different. The physical state of the user may include an exercise state, which may include states corresponding to various exercises such as walking, running, playing a ball, and swimming, and a non-exercise state, which may include states corresponding to non-exercises such as sitting, standing, sleeping, and the like.
A target frequency range and a reference frequency range matching the current physical state of the user may be determined from the physical state. The target frequency range and the reference frequency range corresponding to different physical states can be determined according to actual use requirements. When the ages of the users are different, the target frequency range and the reference frequency range may be different in the same physical state. The target frequency range and the reference frequency range may also be different in different exercise states, for example, the target frequency range and the reference frequency range corresponding to the walking state are different from the target frequency range and the reference frequency range corresponding to the running state. The target frequency range and the reference frequency range may also be different in different non-motion states, e.g., the corresponding target frequency range and reference frequency range in a sleep state may be different from the corresponding target frequency range and reference frequency range in a sitting state.
In the same physical state, the reference frequency range is contained within the target frequency range: the maximum value of the reference frequency range is smaller than the maximum value of the target frequency range, and the minimum value of the reference frequency range is greater than the minimum value of the target frequency range. Because every frequency in the reference frequency range also lies in the target frequency range, a respiratory frequency detected within the target frequency range can be compared directly with the reference frequency range. This avoids the problem that arises when the reference frequency range and the target frequency range only partially overlap, namely that respiratory frequencies are likely to fall in the part of the reference frequency range that does not intersect the target frequency range and cannot be detected there, which lowers the accuracy of determining whether the respiratory frequency is abnormal; the accuracy of that determination is therefore improved.
In another embodiment, referring to FIG. 8, a schematic diagram of determining a target frequency range is shown. In step B, determining a target frequency range according to the physical state of the user, including:
Step B1, determining a first compensation value for the maximum value and a second compensation value for the minimum value of the target frequency range corresponding to different physical states.
Step B2, determining the target frequency range according to a first reference frequency range, the first compensation value, and the second compensation value.
A first reference frequency range serves as the benchmark for determining the target frequency range in different physical states; each physical state corresponds to a first compensation value for the maximum value and a second compensation value for the minimum value of the target frequency range. The first compensation value and the second compensation value differ between physical states. The target frequency range matching the physical state is determined according to the first reference frequency range, the first compensation value, and the second compensation value.
Referring to fig. 9, a schematic diagram of determining a reference frequency range is shown. In step B, determining a reference frequency range according to the physical state of the user, including:
Step B3, determining a third compensation value for the maximum value and a fourth compensation value for the minimum value of the reference frequency range corresponding to different physical states.
Step B4, determining the reference frequency range according to a second reference frequency range, the third compensation value, and the fourth compensation value.
A second reference frequency range serves as the benchmark for determining the reference frequency range in different physical states; each physical state corresponds to a third compensation value for the maximum value and a fourth compensation value for the minimum value of the reference frequency range. The third compensation value and the fourth compensation value differ between physical states. The reference frequency range matching the physical state is determined according to the second reference frequency range, the third compensation value, and the fourth compensation value.
The first reference frequency range and the second reference frequency range may be preset; for example, the second reference frequency range may be 0.3 Hz to 3 Hz. Different physical states affect the respiratory frequency differently, so the target frequency range can be dynamically adjusted according to the first compensation value and the second compensation value, and the reference frequency range can be dynamically adjusted according to the third compensation value and the fourth compensation value. The first compensation value and the second compensation value may be the same, and the third compensation value and the fourth compensation value may be the same.
For example, if both compensation values applied to the second reference frequency range are 0.2 Hz, the second reference frequency range of 0.3 Hz to 3 Hz is adjusted to 0.5 Hz to 3.2 Hz. When the two compensation values are equal, they may be denoted delta_f, and the second reference frequency range is adjusted to 0.3 Hz + delta_f to 3 Hz + delta_f.
By adjusting the reference frequency range and the target frequency range according to the different physical states, the respiratory frequency in the current physical state can be determined in a manner matched to that state, which improves the accuracy of the respiratory frequency.
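The compensation-based adjustment could be sketched as follows; the 0.3 Hz to 3 Hz benchmark and the 0.2 Hz compensation come from the example above, while the per-state table of compensation values is a hypothetical placeholder.

    def adjusted_range(benchmark_range, min_compensation, max_compensation):
        # Shift a benchmark frequency range: the minimum and maximum each
        # receive their own compensation value for the current physical state.
        low, high = benchmark_range
        return (low + min_compensation, high + max_compensation)

    # Example from the text: benchmark 0.3-3 Hz, both compensations 0.2 Hz.
    print(adjusted_range((0.3, 3.0), 0.2, 0.2))  # (0.5, 3.2)

    # Hypothetical per-state compensation values delta_f (not from the embodiment).
    STATE_DELTA_F = {"sitting": 0.0, "walking": 0.1, "running": 0.2}
    delta_f = STATE_DELTA_F["running"]
    reference_range = adjusted_range((0.3, 3.0), delta_f, delta_f)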
In another embodiment, the method further comprises:
when the respiratory frequency is determined to be abnormal, sending prompt information to a preset device communicatively connected with the earphone. The preset device may be a terminal device, such as a mobile phone or tablet computer, held by the user whose respiration is monitored, or a device held by a person who has a social relationship with that user.
When it is determined that the respiratory frequency of the monitored user is abnormal, i.e. the respiratory frequency exceeds the reference frequency range, prompt information is sent to the preset device connected with the earphone to inform the user, or a person associated with the user, so that they can learn about the monitored user's physical condition and seek medical attention in time.
The prompt information can be a popup message, a voice prompt message or a short message, etc.
In another embodiment, referring to fig. 10, a schematic diagram of a respiration monitoring device is shown, which may be applied to headphones that include a feedback microphone. The device comprises:
the auditory canal audio signal acquisition module 1 is configured to collect an audio signal in the auditory canal through the feedback microphone to obtain an auditory canal audio signal;
a respiratory audio signal determining module 2 configured to perform filtering processing on the auditory canal audio signal to obtain a respiratory audio signal; wherein the respiratory audio signal is the audio signal generated when vibration produced by the user during respiration is transmitted to the auditory canal by bone conduction while the earphone is worn by the user;
A respiratory frequency determination module 3 configured to determine a respiratory frequency of the user from the respiratory audio signal;
an anomaly determination module 4 configured to determine whether the respiratory frequency is anomalous based on the respiratory frequency and a reference frequency range.
In another embodiment, the respiratory rate determination module includes:
a spectral information determination unit configured to determine spectral information of the respiratory audio signal;
a target amplitude determination unit configured to detect a target amplitude in the respiratory audio signal within a target frequency range based on the spectral information; wherein the target amplitude is a maximum peak-to-peak value of a fundamental wave of the respiratory audio signal;
and the respiratory frequency determining unit is configured to determine the respiratory frequency according to the frequency corresponding to the target amplitude.
In another embodiment, the target amplitude determining unit includes:
a peak detection subunit configured to detect a maximum peak-to-peak value within the target frequency range based on the spectrum information;
a waveform detection subunit configured to determine whether a waveform of a second waveform frequency exists within the target frequency range according to a first waveform frequency of the waveform corresponding to the maximum peak-to-peak value; wherein the first waveform frequency is an integer multiple of the second waveform frequency;
A fundamental wave determination subunit configured to determine, when determining that a waveform of the second waveform frequency exists, the waveform of the second waveform frequency as the fundamental wave; when it is determined that there is no waveform of the second waveform frequency, then the waveform of the first waveform frequency is determined as the fundamental wave.
In another embodiment, the respiratory rate determination unit includes:
a first respiratory frequency determination subunit configured to determine, when the number of frames of the respiratory audio signal is one frame, a frequency corresponding to the target amplitude as the respiratory frequency;
and the second respiratory frequency determining subunit is configured to determine, when the number of frames of the respiratory audio signal is multiple frames, an average frequency of frequencies corresponding to the target amplitude values in multiple frames of the respiratory audio signal as the respiratory frequency.
In another embodiment, the apparatus further comprises:
a physical state determination module configured to acquire a physical state of the user; the physical state comprises a motion state and a non-motion state, the motion state comprises at least one motion corresponding state, and the non-motion state comprises at least one non-motion corresponding state;
a frequency range determination module configured to determine the target frequency range and the reference frequency range from the body state;
Wherein different physical states respectively correspond to the respective target frequency ranges, and different physical states respectively correspond to the respective reference frequency ranges; the reference frequency range is included in the target frequency range under the same physical state.
In another embodiment, the frequency range determination module comprises:
a first determination unit configured to determine a first compensation value of a maximum value and a second compensation value of a minimum value in a target frequency range corresponding to different physical states;
a second determination unit configured to determine a third compensation value of a maximum value and a fourth compensation value of a minimum value in a reference frequency range corresponding to different physical states;
a target frequency range determining unit configured to determine the target frequency range from a first reference frequency range, the first compensation value, and the second compensation value;
a reference frequency range determining unit configured to determine the reference frequency range based on a second reference frequency range, the third compensation value and the fourth compensation value.
In another embodiment, the respiratory audio signal determination module includes:
the framing unit is configured to frame the auditory canal audio signal to obtain the auditory canal audio signal after multi-frame framing;
And the filtering unit is configured to filter the auditory canal audio signals obtained after the multi-frame framing to obtain multi-frame respiratory audio signals.
In another embodiment, the apparatus further comprises:
the prompt information sending module is configured to send prompt information to preset equipment when the breathing frequency abnormality is determined;
and communication connection is established between the preset equipment and the earphone.
In another embodiment, an embodiment of the present disclosure provides an earphone including a housing and a controller, a feedback microphone, a feedforward microphone, and a speaker disposed on the housing;
the feedforward microphone is connected with the controller and is used for collecting audio data outside the auditory canal and sending the audio data to the controller;
the feedback microphone is connected with the controller and used for collecting audio data in the auditory canal and sending the audio data to the controller;
the controller includes a memory having stored thereon executable computer instructions and a processor capable of invoking the computer instructions stored thereon to perform the method of any of the embodiments.
In another embodiment, a computer storage medium storing an executable program is provided; the executable program, when executed by a processor, is capable of implementing the method provided in any of the embodiments described above.
Fig. 11 is a block diagram of an electronic device 800, according to an example embodiment.
Referring to fig. 11, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen between the electronic device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the device 800, a relative positioning of the components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of a user's contact with the electronic device 800, an orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi,4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 804 including instructions executable by the processor 820 of the electronic device 800 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (18)

  1. A respiration monitoring method applied to a headset comprising a feedback microphone, the method comprising:
    collecting an audio signal in an auditory canal through the feedback microphone to obtain an auditory canal audio signal;
    filtering the auditory canal audio signal to obtain a respiratory audio signal; wherein the respiratory audio signal is an audio signal generated when the earphone is worn by a user and vibration produced by the user during respiration is transmitted to the auditory canal by bone conduction;
    determining a respiratory frequency of the user from the respiratory audio signal;
    and determining whether the respiratory frequency of the user is abnormal according to the respiratory frequency and a reference frequency range.
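The following minimal Python sketch illustrates the flow of claim 1; it is illustrative only and not part of the claims. The sampling rate, filter order, the respiratory band of roughly 0.1–2 Hz, the default reference range, and all function names are assumptions, and the ear-canal signal is assumed to have already been decimated to a low sampling rate before filtering.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def estimate_respiratory_frequency(ear_canal_signal, fs, band=(0.1, 2.0)):
    """Band-pass the ear-canal audio and return the strongest in-band frequency in Hz.

    Assumes `ear_canal_signal` has already been decimated to a low rate
    (e.g. fs around 100 Hz) so the narrow respiratory band is well conditioned.
    """
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    respiratory_signal = sosfiltfilt(sos, ear_canal_signal)  # respiratory audio signal

    spectrum = np.abs(np.fft.rfft(respiratory_signal))
    freqs = np.fft.rfftfreq(len(respiratory_signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[in_band][np.argmax(spectrum[in_band])])

def is_respiration_abnormal(respiratory_frequency_hz, reference_range_hz=(0.2, 0.5)):
    """Flag the respiratory frequency as abnormal when it leaves the reference range."""
    low, high = reference_range_hz
    return not (low <= respiratory_frequency_hz <= high)
```

For a 60-second frame sampled at 100 Hz, for example, estimate_respiratory_frequency(frame, 100.0) would return about 0.3 Hz for a user breathing roughly 18 times per minute.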
  2. The method of claim 1, wherein the determining the respiratory frequency of the user from the respiratory audio signal comprises:
    determining spectral information of the respiratory audio signal;
    detecting a target amplitude in the respiratory audio signal within a target frequency range based on the spectral information; wherein the target amplitude is a maximum peak-to-peak value of a fundamental wave of the respiratory audio signal;
    and determining the respiratory frequency according to the frequency corresponding to the target amplitude.
  3. The method of claim 2, wherein the detecting a target amplitude in the respiratory audio signal over a target frequency range based on the spectral information comprises:
    detecting a maximum peak-to-peak value within the target frequency range based on the spectral information;
    determining whether a waveform with a second waveform frequency exists in the target frequency range according to a first waveform frequency of the waveform corresponding to the maximum peak-to-peak value; wherein the first waveform frequency is an integer multiple of the second waveform frequency;
    when it is determined that a waveform of the second waveform frequency exists, determining the waveform of the second waveform frequency as the fundamental wave;
    and when it is determined that there is no waveform of the second waveform frequency, determining the waveform of the first waveform frequency as the fundamental wave.
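A minimal sketch of the fundamental-wave selection in claims 2 and 3, again illustrative only: the strongest component in the target range gives the first waveform frequency, and if a sufficiently strong component also exists at an integer fraction of that frequency (so that the first waveform frequency is an integer multiple of it), that lower-frequency waveform is taken as the fundamental. Spectral magnitude is used here as a stand-in for the peak-to-peak value, and the divisors checked and the min_ratio threshold are assumptions.

```python
import numpy as np

def find_fundamental(freqs, spectrum, band, min_ratio=0.3):
    """Return the frequency of the fundamental wave inside `band`.

    `freqs` and `spectrum` come from an FFT of one frame of the respiratory
    audio signal. The strongest in-band peak is located first; if a component
    at an integer fraction of that peak's frequency is also present with
    sufficient amplitude, the lower frequency is taken as the fundamental.
    """
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_freqs, band_amps = freqs[in_band], spectrum[in_band]

    peak_idx = np.argmax(band_amps)
    first_freq, peak_amp = band_freqs[peak_idx], band_amps[peak_idx]

    # Look for a sub-harmonic at first_freq / n that still lies in the band.
    for n in (2, 3, 4):
        candidate = first_freq / n
        if candidate < band[0]:
            break
        idx = np.argmin(np.abs(band_freqs - candidate))
        if band_amps[idx] >= min_ratio * peak_amp:
            return float(band_freqs[idx])   # a second waveform frequency exists
    return float(first_freq)                # fall back to the first waveform frequency
```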
  4. The method of claim 2, wherein the determining the respiratory rate from the frequency corresponding to the target amplitude comprises:
    when the number of frames of the respiratory audio signal is one, determining the frequency corresponding to the target amplitude as the respiratory frequency;
    and when the number of frames of the respiratory audio signal is more than one, determining an average of the frequencies corresponding to the target amplitude in the multiple frames of the respiratory audio signal as the respiratory frequency.
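A correspondingly small sketch of claim 4, illustrative only: with a single frame the per-frame estimate is used directly, and with multiple frames the per-frame estimates are averaged.

```python
import numpy as np

def respiratory_frequency_from_frames(per_frame_frequencies_hz):
    """One frame: return that value; several frames: return their mean."""
    values = np.asarray(per_frame_frequencies_hz, dtype=float)
    return float(values[0]) if values.size == 1 else float(values.mean())
```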
  5. The method of claim 2, wherein the method further comprises:
    acquiring a physical state of the user; wherein the physical state comprises a motion state and a non-motion state, the motion state comprises at least one state corresponding to motion, and the non-motion state comprises at least one state corresponding to non-motion;
    determining the target frequency range and the reference frequency range according to the physical state;
    wherein different physical states correspond to respective target frequency ranges and respective reference frequency ranges, and under the same physical state the reference frequency range is included in the target frequency range.
  6. The method of claim 5, wherein the determining the target frequency range and the reference frequency range according to the physical state comprises:
    determining a first compensation value of a maximum value and a second compensation value of a minimum value in a target frequency range corresponding to different physical states;
    determining a third compensation value of a maximum value and a fourth compensation value of a minimum value in a reference frequency range corresponding to different physical states;
    determining the target frequency range according to a first reference frequency range, the first compensation value and the second compensation value;
    and determining the reference frequency range according to a second reference frequency range, the third compensation value and the fourth compensation value.
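An illustrative sketch of how the per-state compensation values of claims 5 and 6 could be applied; it is not part of the claims, and every numeric value below is a hypothetical placeholder chosen only so that, for each state, the reference range stays inside the target range.

```python
BASE_TARGET_RANGE_HZ = (0.1, 1.0)      # stands in for the first reference frequency range
BASE_REFERENCE_RANGE_HZ = (0.2, 0.5)   # stands in for the second reference frequency range

# (maximum-value compensation, minimum-value compensation) per physical state, in Hz.
TARGET_COMPENSATION = {"resting": (0.0, 0.0), "walking": (0.3, 0.05), "running": (0.8, 0.1)}
REFERENCE_COMPENSATION = {"resting": (0.0, 0.0), "walking": (0.2, 0.05), "running": (0.5, 0.1)}

def ranges_for_state(state):
    """Return (target_range, reference_range) adjusted for the given physical state."""
    t_max_comp, t_min_comp = TARGET_COMPENSATION[state]
    r_max_comp, r_min_comp = REFERENCE_COMPENSATION[state]
    target = (BASE_TARGET_RANGE_HZ[0] + t_min_comp, BASE_TARGET_RANGE_HZ[1] + t_max_comp)
    reference = (BASE_REFERENCE_RANGE_HZ[0] + r_min_comp, BASE_REFERENCE_RANGE_HZ[1] + r_max_comp)
    return target, reference
```

Under these placeholder values, ranges_for_state("running") yields a target range of (0.2, 1.8) Hz and a reference range of (0.3, 1.0) Hz.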
  7. The method of claim 1, wherein the filtering the ear canal audio signal to obtain a respiratory audio signal comprises:
    framing the auditory canal audio signal to obtain the auditory canal audio signal after multi-frame framing;
    and filtering the auditory canal audio signals obtained after the multi-frame framing to obtain multi-frame respiratory audio signals.
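An illustrative sketch of the framing and filtering of claim 7, not part of the claims; the frame length, the non-overlapping framing, the band, and the assumption that the ear-canal audio has been decimated to a low sampling rate are choices made only for this example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def frame_and_filter(ear_canal_signal, fs, frame_seconds=10.0, band=(0.1, 2.0)):
    """Split the ear-canal audio into frames, then band-pass each frame
    to isolate the respiratory component (one respiratory frame per input frame)."""
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    frame_len = int(frame_seconds * fs)
    frames = [
        ear_canal_signal[start:start + frame_len]
        for start in range(0, len(ear_canal_signal) - frame_len + 1, frame_len)
    ]
    return [sosfiltfilt(sos, frame) for frame in frames]
```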
  8. The method of claim 1, wherein the method further comprises:
    sending prompt information to a preset device when it is determined that the respiratory frequency is abnormal;
    wherein a communication connection is established between the preset device and the earphone.
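A minimal sketch of the prompt of claim 8, illustrative only: connection stands for whatever link the earphone already maintains to the preset device, and its send() method and the message fields are hypothetical placeholders.

```python
def notify_if_abnormal(respiratory_frequency_hz, reference_range_hz, connection):
    """Send prompt information over an already-established connection when abnormal."""
    low, high = reference_range_hz
    if not (low <= respiratory_frequency_hz <= high):
        connection.send({"event": "respiratory_frequency_abnormal",
                         "frequency_hz": respiratory_frequency_hz})
```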
  9. A respiration monitoring device for use with a headset comprising a feedback microphone; the device comprises:
    an auditory canal audio signal acquisition module configured to collect an audio signal in an auditory canal through the feedback microphone to obtain an auditory canal audio signal;
    a respiratory audio signal determination module configured to perform filtering processing on the auditory canal audio signal to obtain a respiratory audio signal; wherein the respiratory audio signal is an audio signal generated when the earphone is worn by a user and vibration produced by the user during respiration is transmitted to the auditory canal by bone conduction;
    a respiratory frequency determination module configured to determine a respiratory frequency of the user from the respiratory audio signal;
    an anomaly determination module configured to determine whether the respiratory frequency of the user is abnormal based on the respiratory frequency and a reference frequency range.
  10. The apparatus of claim 9, wherein the respiratory frequency determination module comprises:
    a spectral information determination unit configured to determine spectral information of the respiratory audio signal;
    a target amplitude determination unit configured to detect a target amplitude in the respiratory audio signal within a target frequency range based on the spectral information; wherein the target amplitude is a maximum peak-to-peak value of a fundamental wave of the respiratory audio signal;
    and a respiratory frequency determination unit configured to determine the respiratory frequency according to the frequency corresponding to the target amplitude.
  11. The apparatus of claim 10, wherein the target amplitude determination unit comprises:
    a peak detection subunit configured to detect a maximum peak-to-peak value within the target frequency range based on the spectrum information;
    a waveform detection subunit configured to determine, according to a first waveform frequency of the waveform corresponding to the maximum peak-to-peak value, whether a waveform of a second waveform frequency exists within the target frequency range; wherein the first waveform frequency is an integer multiple of the second waveform frequency;
    a fundamental wave determination subunit configured to determine, when it is determined that a waveform of the second waveform frequency exists, the waveform of the second waveform frequency as the fundamental wave, and to determine, when it is determined that there is no waveform of the second waveform frequency, the waveform of the first waveform frequency as the fundamental wave.
  12. The apparatus of claim 10, wherein the respiratory frequency determination unit comprises:
    a first respiratory frequency determination subunit configured to determine, when the number of frames of the respiratory audio signal is one, the frequency corresponding to the target amplitude as the respiratory frequency;
    and a second respiratory frequency determination subunit configured to determine, when the number of frames of the respiratory audio signal is more than one, an average of the frequencies corresponding to the target amplitude in the multiple frames of the respiratory audio signal as the respiratory frequency.
  13. The apparatus of claim 10, wherein the apparatus further comprises:
    a physical state determination module configured to acquire a physical state of the user; wherein the physical state comprises a motion state and a non-motion state, the motion state comprises at least one state corresponding to motion, and the non-motion state comprises at least one state corresponding to non-motion;
    a frequency range determination module configured to determine the target frequency range and the reference frequency range according to the physical state;
    wherein different physical states correspond to respective target frequency ranges and respective reference frequency ranges, and under the same physical state the reference frequency range is included in the target frequency range.
  14. The apparatus of claim 13, wherein the frequency range determination module comprises:
    a first determination unit configured to determine a first compensation value of a maximum value and a second compensation value of a minimum value in a target frequency range corresponding to different physical states;
    a second determination unit configured to determine a third compensation value of a maximum value and a fourth compensation value of a minimum value in a reference frequency range corresponding to different physical states;
    a target frequency range determining unit configured to determine the target frequency range from a first reference frequency range, the first compensation value, and the second compensation value;
    a reference frequency range determining unit configured to determine the reference frequency range based on a second reference frequency range, the third compensation value and the fourth compensation value.
  15. The apparatus of claim 9, wherein the respiratory audio signal determination module comprises:
    the framing unit is configured to frame the auditory canal audio signal to obtain the auditory canal audio signal after multi-frame framing;
    and the filtering unit is configured to filter the auditory canal audio signals obtained after the multi-frame framing to obtain multi-frame respiratory audio signals.
  16. The apparatus of claim 9, wherein the apparatus further comprises:
    a prompt information sending module configured to send prompt information to a preset device when it is determined that the respiratory frequency is abnormal;
    wherein a communication connection is established between the preset device and the earphone.
  17. An earphone comprising a housing, a controller disposed on the housing, a feedback microphone, a feedforward microphone, and a speaker;
    the feedforward microphone is connected with the controller and is used for collecting audio data outside the auditory canal and sending the audio data to the controller;
    the feedback microphone is connected with the controller and used for collecting audio data in the auditory canal and sending the audio data to the controller;
    the controller comprises a memory storing executable computer instructions and a processor capable of invoking the computer instructions stored in the memory to perform the method of any one of claims 1 to 8.
  18. A computer storage medium storing an executable program; the executable program, when executed by a processor, is capable of implementing the method as provided in any one of claims 1 to 8.
CN202280004450.0A 2022-06-15 2022-06-15 Respiration monitoring method, device, earphone and storage medium Pending CN117597941A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/099022 WO2023240510A1 (en) 2022-06-15 2022-06-15 Respiratory monitoring method and apparatus, earphone and storage medium

Publications (1)

Publication Number Publication Date
CN117597941A true CN117597941A (en) 2024-02-23

Family

ID=89192748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280004450.0A Pending CN117597941A (en) 2022-06-15 2022-06-15 Respiration monitoring method, device, earphone and storage medium

Country Status (2)

Country Link
CN (1) CN117597941A (en)
WO (1) WO2023240510A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2606171C2 (en) * 2011-01-05 2017-01-10 Конинклейке Филипс Электроникс Н.В. Evaluation of insulating properties of obturator for ear canal
WO2018205013A1 (en) * 2017-05-10 2018-11-15 Ecole De Technologie Superieure System and method for determining cardiac rhythm and/or respiratory rate
US10682491B2 (en) * 2017-07-20 2020-06-16 Bose Corporation Earphones for measuring and entraining respiration
US11540743B2 (en) * 2018-07-05 2023-01-03 Starkey Laboratories, Inc. Ear-worn devices with deep breathing assistance
CN113440127B (en) * 2020-03-25 2022-10-18 华为技术有限公司 Method and device for acquiring respiratory data and electronic equipment
AU2021102658A4 (en) * 2021-05-18 2021-07-08 Rudra Sankar Dhar Intelligent earphone system for remote health monitoring using artificial intelligence

Also Published As

Publication number Publication date
WO2023240510A1 (en) 2023-12-21

Similar Documents

Publication Publication Date Title
US9380374B2 (en) Hearing assistance systems configured to detect and provide protection to the user from harmful conditions
US20150319546A1 (en) Hearing Assistance System
JP2020508616A (en) Off-head detection for in-ear headsets
US11223915B2 (en) Detecting user's eye movement using sensors in hearing instruments
US20130343584A1 (en) Hearing assist device with external operational support
EP3095252A2 (en) Hearing assistance system
WO2016167877A1 (en) Hearing assistance systems configured to detect and provide protection to the user harmful conditions
JP2017021737A (en) Program, terminal and system for giving emotional identifier to application by using myoelectrical signal
CN112037825B (en) Audio signal processing method and device and storage medium
CN116324969A (en) Hearing enhancement and wearable system with positioning feedback
CN117597941A (en) Respiration monitoring method, device, earphone and storage medium
CN113596662B (en) Method for suppressing howling, device for suppressing howling, earphone, and storage medium
CN115065921A (en) Method and device for preventing hearing aid from howling
CN115278441A (en) Voice detection method, device, earphone and storage medium
CN114040309A (en) Wind noise detection method and device, electronic equipment and storage medium
KR102138772B1 (en) Dental patient’s hearing protection device through noise reduction
CN114830692A (en) System comprising a computer program, a hearing device and a stress-assessing device
WO2023240512A1 (en) Fall detection method and device, earphone, and storage medium
US20230300511A1 (en) Method and apparatus for controlling headphones, headphones and storage medium
CN113825081B (en) Hearing aid method and device based on masking treatment system
EP4322548A1 (en) Earphone controlling method and apparatus, and storage medium
EP4290886A1 (en) Capture of context statistics in hearing instruments
WO2023245372A1 (en) Step counting method and apparatus, and earphones and storage medium
WO2024075434A1 (en) Information processing system, device, information processing method, and program
CN114979889A (en) Method and device for reducing occlusion effect of earphone, earphone and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination